| id | text | title |
|---|---|---|
doc_200
|
Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 06:14:34 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 06:14:34 2016
OS/Arch: linux/amd64
Thank you.
A: To change the IP address Docker will set on its docker0 interface, you have to use the --bip option, which defines the CIDR (e.g. --bip=10.32.57.1/24); see "Customize the docker0 bridge" in the Docker user guide.
Docker Toolbox uses Boot2Docker (running in a virtual machine) which is based on the Tiny Core Linux OS.
The Docker daemon reads /var/lib/boot2docker/profile before starting (see "Local Customisation" in Boot2Docker's FAQ), where an EXTRA_ARGS variable is ready to be filled with your custom settings.
Just add your --bip=... in EXTRA_ARGS's value part and restart the daemon.
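For reference, once the option is added, the relevant part of /var/lib/boot2docker/profile would look roughly like this (any flags already present in EXTRA_ARGS simply stay there):
EXTRA_ARGS='
--bip=10.32.57.1/24
'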
The following command (to type in the Docker Quickstart Terminal) will stop the Docker daemon, drop any existing rule, delete the interface, add a --bip option to /var/lib/boot2docker/profile and restart the daemon:
docker-machine ssh default "\
sudo /etc/init.d/docker stop ; \
sudo iptables -t nat -F POSTROUTING ; \
sudo ip link del docker0 ; \
sudo sed -i \"/^EXTRA_ARGS='\\$/a --bip=10.32.57.1/24\" /var/lib/boot2docker/profile ; \
sudo /etc/init.d/docker start \
"
(Content of /var/lib/boot2docker is persisted between Boot2Docker VM restarts so running this command once should suffice)
You can check with:
docker-machine ssh default "ip a show dev docker0"
If anyone needs the same manipulation on Debian (without Boot2Docker thus):
For Sysvinit:
cat >> /etc/default/docker <<EOT
# Change Docker network bridge:
DOCKER_OPTS="--bip=10.32.57.1/24" # "3257" = "dckr" on a phone keyboard
EOT
For systemd:
cat > /etc/systemd/system/docker.service <<'EOT'
[Service]
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// $OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
EOT
mkdir /etc/sysconfig
cat > /etc/sysconfig/docker <<EOT
OPTIONS="--bip=10.32.57.1/24"
EOT
systemctl daemon-reload
Then (for both Sysvinit and systemd):
service docker stop
iptables -t nat -F POSTROUTING
ip link del docker0
service docker start
iptables -t nat -L -n # Check if POSTROUTING table is OK
| |
doc_201
|
eg www.google.com?client=android or www.google.com?client=iphone
Things I have tried:
1. cancel all requests in shouldOverrideUrlLoading and request a new loadUrl on InAppWebViewController (the app crashes on iOS as soon as the webview loads with this approach)
A: I have updated my plugin flutter_inappwebview! The latest release is 3.3.0+3 at the time of this writing.
You can achieve what you are asking through the shouldOverrideUrlLoading event (set the useShouldOverrideUrlLoading: true option) in this way:
import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:flutter_inappwebview/flutter_inappwebview.dart';
Future main() async {
WidgetsFlutterBinding.ensureInitialized();
runApp(new MyApp());
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
InAppWebViewController webView;
@override
void initState() {
super.initState();
}
@override
void dispose() {
super.dispose();
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('InAppWebView Example'),
),
body: Container(
child: Column(children: <Widget>[
Expanded(
child: InAppWebView(
initialUrl: "https://github.com/flutter?client=" + (Platform.isAndroid ? 'android': 'ios'),
initialHeaders: {},
initialOptions: InAppWebViewGroupOptions(
crossPlatform: InAppWebViewOptions(
debuggingEnabled: true,
useShouldOverrideUrlLoading: true
),
),
onWebViewCreated: (InAppWebViewController controller) {
webView = controller;
},
onLoadStart: (InAppWebViewController controller, String url) {
},
onLoadStop: (InAppWebViewController controller, String url) {
print(url);
},
shouldOverrideUrlLoading: (InAppWebViewController controller, ShouldOverrideUrlLoadingRequest shouldOverrideUrlLoadingRequest) async {
if (Platform.isAndroid || shouldOverrideUrlLoadingRequest.iosWKNavigationType == IOSWKNavigationType.LINK_ACTIVATED) {
var url = Uri.parse(shouldOverrideUrlLoadingRequest.url);
var queryParams = ((url.hasQuery) ? '&' : '?') + "client=" + (Platform.isAndroid ? 'android': 'ios');
var newUrl = shouldOverrideUrlLoadingRequest.url + queryParams;
await controller.loadUrl(url: newUrl);
return ShouldOverrideUrlLoadingAction.CANCEL;
}
return ShouldOverrideUrlLoadingAction.ALLOW;
},
))
])),
),
);
}
}
| |
doc_202
|
The values in the columns "Mandant" and "Vertragspartner" are stored via the Term Store Management Tool and are not being fetched:
In the list view itself, I can see the values:
This is the important part of the JavaScript code, but I don't know how I have to change it to get the data, and to put it into the jqgrid since it just makes a simple Ajax call.
How come no one asked this question beforehand? I wasn't able to find a solution on the internet.
function loadSubTables() {
loadGrid("Vertragserstellung", "&$select=Id,Title,SPLUSCMGTClient,SPLUSCMGTArea,SPLUSCMGTContractType,bscomProcStatus,Vertrags,SPLUSCMGTEndDate,SPLUSCMGTStartDate,Created,Modified,Vertragsstatus&$orderby=Id desc&$top=9999", "gridmyopen", cnMyEntries, cmMyEntries, true);
}
function loadGrid(listname, query, divname, columns, columnModels, showFilter, showExcelExport, hideFooter) {
$("#" + divname).jqGrid({
rowNum: '',
footerrow: hideFooter,
datatype: function () {
loadGridData(listname, query, divname);
},
colNames: columns,
colModel: columnModels,
autowidth: true,
loadonce: true,
gridComplete: function () {
$("#" + divname).jqGrid('setGridParam', { datatype: 'local' });
$("#" + divname + "no").html(" [" + $("#" + divname).jqGrid('getGridParam', 'records') + "]");
$('.ui-jqgrid .ui-jqgrid-bdiv').css('overflow-x', 'hidden'); // hides horizontal scrollbar
},
ondblClickRow: function (rowid, iRow, iCol, e) {
onDoubleClickGrid(rowid, iRow, iCol, e, divname, listname);
}
});
if (showFilter) {
$("#" + divname).jqGrid('filterToolbar', {
autosearch: true,
stringResult: false,
searchOnEnter: true,
defaultSearch: "cn",
});
}
}
function loadGridData(listname, query, divname) {
$.ajax({
url: "https://company.de/sites/appContracts/_api/web/lists/getbytitle('" + listname + "')/Items?" + query,
type: "GET",
headers: { "Accept": "application/json;odata=verbose" },
success: function (data, textStatus, xhr) {
console.log("dat.d.results: ", data.d.results);
var thegrid = $("#" + divname)[0];
thegrid.addJSONData(data.d.results);
},
error: function (xhr, textStatus, errorThrown) {
alert("error:" + JSON.stringify(xhr));
$('#' + divname + 'records').html(" [0]");
}
});
}
A: Usually we use the TaxCatchAll field in the SharePoint REST API to get managed metadata column values.
$select=TaxCatchAll/ID,TaxCatchAll/Term&$expand=TaxCatchAll
You could check this article for more: https://sympmarc.com/2017/06/19/retrieving-multiple-sharepoint-managed-metadata-columns-via-rest/
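As a rough sketch, with TaxCatchAll selected and expanded as above, the term labels can be read from each item in the success callback like this (the property access assumes the verbose OData response shape; mapping a term back to a specific column such as "Mandant" is covered in the linked article):
success: function (data, textStatus, xhr) {
    data.d.results.forEach(function (item) {
        var terms = (item.TaxCatchAll && item.TaxCatchAll.results) || [];
        var labels = terms.map(function (t) { return t.Term; }).join(", ");
        console.log(item.Id, labels); // all taxonomy terms attached to the item
    });
}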
| |
doc_203
|
void traverse(Node node)
{
if(node == null)
return;
for(Node child : node.getChilds()) {
traverse(child);
}
}
Parent
|---child[1]
| child[2]
| child[3]
|---child[4]
child[3]
Right now I am traversing the graph one node at a time and the output produced is -
Node Immediate Parent
--------------------------
child[2] child[1]
child[3] child[2]
child[3] child[4]
The expected output is -
Node Immediate Parent
--------------------------
child[2] child[1]
child[3] child[2], child[4]
What would be the best way to search the nodes and produce the expected output for the graph? Any help would be appreciated.
A: If you have (or can add) a link back to the parents, you can list all the parents the first time you encounter a node, then skip it on recurring visits. You have multiple options to keep track of whether a node has been visited:
*
*maintain a set of visited nodes and check if the current one is in the set. If not, process it and add it to the set; otherwise skip.
Advantage: general approach
Disadvantage: might take a significant amount of memory to maintain the set if the graph is large
*add an isVisited member value to the node (set to false by default) and check it when encountering a node: if the value is false, process the node and set isVisited to true; otherwise skip.
Advantage: less additional memory
Disadvantage: intrusive, task-specific, the extra variable is there even when not needed, does not scale well for tasks that require multiple such "has-it-been-processed-yet" decisions simultaneously
If the parent-link option is not available, you can maintain the child-to-parent relationship in an extra map: you map from the child to the set of parents as you process the nodes. Once done with the initial processing (of building the map), you iterate the map and list each node and its parents.
The advantage over the direct parent links is that there is no extra maintenance when building/modifying the graph (unless you want to keep the mapping up-to-date as well)
The disadvantage is that you will have to re-build the map every time you want to process the graph after a series of modifications to the structure of the graph (unless you keep the mapping up to date, as noted in the advantage above).
Note: traversing a general graph by traversing all the children can lead to infinite loops if there is a directed (parent-to-child) circle in the graph. I imagine this is not the case for your problem, but just to cover all bases: you can maintain a set of "visited" nodes as you process the graph. The discussion of the available options is identical to the one in the first ("link back to the parents") part
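A minimal Java sketch of the map-based approach above (it assumes a Node type exposing getChilds(); all other names are illustrative):
import java.util.*;

class ParentCollector {
    private final Map<Node, Set<Node>> parentsOf = new HashMap<>();
    private final Set<Node> visited = new HashSet<>();

    void build(Node node) {
        if (node == null || !visited.add(node)) {
            return; // already processed; also guards against cycles
        }
        for (Node child : node.getChilds()) {
            // record the parent edge even if the child was visited before
            parentsOf.computeIfAbsent(child, k -> new HashSet<>()).add(node);
            build(child);
        }
    }

    void print() {
        // e.g. child[3] -> [child[2], child[4]]
        parentsOf.forEach((child, parents) ->
                System.out.println(child + " -> " + parents));
    }
}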
| |
doc_204
|
here is the message from Android Studio
A: It seems your settings.gradle is wrong.
settings.gradle should look like this:
include ':app'
The include statement indicates that the app subdirectory is the only additional subproject. If you add an Android library project, it too will be added to this file.
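For example, after adding a hypothetical library module named mylibrary, settings.gradle would read:
include ':app', ':mylibrary'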
| |
doc_205
|
It currently displays with the iOS theme but I want to set it to the flat UI theme all the time. I can't see anything in the demos or documentation which states how to do this.
Can somebody tell me how to change the default theme to the flat design?
<!doctype html>
<html>
<head>
<!-- Kendo UI Mobile CSS -->
<link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
<!-- jQuery JavaScript -->
<script src="js/jquery.min.js"></script>
<!-- Kendo UI Mobile combined JavaScript -->
<script src="js/kendo.mobile.min.js"></script>
<title>Kendo UI Examples</title>
</head>
<body>
<!-- Kendo Mobile View -->
<div data-role="view" data-title="test" id="index">
<!--Kendo Mobile Header -->
<header data-role="header">
<!--Kendo Mobile NavBar widget -->
<div data-role="navbar">
<span data-role="view-title"></span>
</div>
</header>
<!--Kendo Mobile ListView widget -->
<ul data-role="listview">
<li>Item 1</li>
<li>Item 2</li>
</ul>
<!--Kendo Mobile Footer -->
<footer data-role="footer">
<!-- Kendo Mobile TabStrip widget -->
<div data-role="tabstrip">
<a data-icon="home" href="#index">Home</a>
<a data-icon="settings" href="#settings">Settings</a>
</div>
</footer>
</div>
<script>
// Initialize a new Kendo Mobile Application
var app = new kendo.mobile.Application();
</script>
</body>
</html>
A: You can override the device/skin in the declaration of the app.
var app = new kendo.mobile.Application($(document.body), { skin: 'flat' });
Or you can force everyone into using a Windows Phone looking interface
var app = new kendo.mobile.Application($(document.body), { platform: 'wp8' });
See documentation...
http://docs.kendoui.com/api/mobile/application#configuration-platform
A: You also need to include the stylesheet of the selected skin on your loading page:
<link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
<link href="styles/kendo.mobile.flat.min.css" rel="stylesheet" />
| |
doc_206
|
<?php
if ('a' == 0)
echo "YES";
?>
will print YES on the screen.
This bothers me. The question:
Why is the string 'a' (or "a") equal to the number 0?
I was thinking about automatic char->number conversion, but 'b' == 1 is false...
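For reference, a minimal script reproducing both comparisons described above:
<?php
var_dump('a' == 0); // bool(true)
var_dump('b' == 1); // bool(false)
?>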
PHP Versions tested:
*
*5.4.4-14+deb7u14
*5.3.29
| |
doc_207
|
I have added the following dependencies to this microservice:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth-zipkin</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
</dependency>
I have started the Zipkin server using the following commands:
SET RABBIT_URI=amqp://localhost
java -jar zipkin.jar
I then try to start up the microservice however I get the following error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'rabbitListenerContainerFactory' defined in class path resource [org/springframework/boot/autoconfigure/amqp/RabbitAnnotationDrivenConfiguration.class]: Initialization of bean failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:584) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:846) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:863) ~[spring-context-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546) ~[spring-context-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142) ~[spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775) [spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) [spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316) [spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260) [spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248) [spring-boot-2.1.1.RELEASE.jar:2.1.1.RELEASE]
at com.shopping.sandbox.netflixzuulapigatewayserver.NetflixZuulApiGatewayServerApplication.main(NetflixZuulApiGatewayServerApplication.java:16) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.1.1.RELEASE.jar:2.1.1.RELEASE]
Caused by: java.lang.NullPointerException: null
at org.springframework.amqp.rabbit.config.AbstractRabbitListenerContainerFactory.getAdviceChain(AbstractRabbitListenerContainerFactory.java:198) ~[spring-rabbit-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at brave.spring.rabbit.SpringRabbitTracing.decorateSimpleRabbitListenerContainerFactory(SpringRabbitTracing.java:170) ~[brave-instrumentation-spring-rabbit-5.4.4.jar:na]
at org.springframework.cloud.sleuth.instrument.messaging.SleuthRabbitBeanPostProcessor.postProcessBeforeInitialization(TraceMessagingAutoConfiguration.java:186) ~[spring-cloud-sleuth-core-2.1.0.M2.jar:2.1.0.M2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:419) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1737) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:576) ~[spring-beans-5.1.3.RELEASE.jar:5.1.3.RELEASE]
... 20 common frames omitted
A: This is caused by a bug in Spring AMQP, which will be fixed in release 2.1.3.
Issue link
For a temporary fix, you can enable the retry properties to create the advice chain.
spring.rabbitmq.listener.direct.retry.enabled=true
spring.rabbitmq.listener.simple.retry.enabled=true
Hope this resolves your problem.
A: I had this same issue, changing Spring Boot version to 2.1.0.RELEASE did the trick for me. You should try it too. There must be something wrong with RabbitMQ in Spring Boot version 2.1.1.RELEASE.
A: Add to build.gradle:
apply plugin: 'org.springframework.boot'
springBootVersion=2.1.3.RELEASE
springCloudVersion=Greenwich.RELEASE
A: Please replace
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
</dependency>
with
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
This works.
| |
doc_208
|
I had read so many post but still not getting the exact idea of what to do...
my Save button click code is given below
protected void Button1_Click(object sender, EventArgs e)
{
EmployeeService es = new EmployeeService();
CityService cs = new CityService();
DateTime dt = new DateTime(2008, 12, 12);
Payroll.Entities.Employee e1 = new Payroll.Entities.Employee();
Payroll.Entities.City city1 = cs.SelectCity(Convert.ToInt64(cmbCity.SelectedItem.Value));
e1.Name = "Archana";
e1.Title = "aaaa";
e1.BirthDate = dt;
e1.Gender = "F";
e1.HireDate = dt;
e1.MaritalStatus = "M";
e1.City = city1;
es.AddEmpoyee(e1,city1);
}
and Employeeservice Code
public string AddEmpoyee(Payroll.Entities.Employee e1, Payroll.Entities.City c1)
{
Payroll_DAO1 payrollDAO = new Payroll_DAO1();
payrollDAO.AddToEmployee(e1); //Here I am getting Error..
payrollDAO.SaveChanges();
return "SUCCESS";
}
A: This is an old thread, but another solution, which I prefer, is to just update the CityId and not assign the whole City model to Employee. To do that, Employee should look like:
public class Employee{
...
public int? CityId; // The ? allows the City reference to be null (optional)
public virtual City City;
}
Then it's enough to assign:
e1.CityId=city1.ID;
A: As an alternative to injection (or, even worse, a singleton), you can call the Detach method before Add.
EntityFramework 6: ((IObjectContextAdapter)cs).ObjectContext.Detach(city1);
EntityFramework 4: cs.Detach(city1);
There is yet another way, in case you don't need the first DbContext object afterwards. Just wrap it with the using keyword:
Payroll.Entities.City city1;
using (CityService cs = new CityService())
{
city1 = cs.SelectCity(Convert.ToInt64(cmbCity.SelectedItem.Value));
}
A: I had the same problem, but my issue with @Slauma's solution (although great in certain instances) is that it recommends passing the context into the service, which implies that the context is available from my controller. It also forces tight coupling between my controller and service layers.
I'm using Dependency Injection to inject the service/repository layers into the controller and as such do not have access to the context from the controller.
My solution was to have the service/repository layers use the same instance of the context - Singleton.
Context Singleton Class:
Reference: http://msdn.microsoft.com/en-us/library/ff650316.aspx
and http://csharpindepth.com/Articles/General/Singleton.aspx
public sealed class MyModelDbContextSingleton
{
private static readonly MyModelDbContext instance = new MyModelDbContext();
static MyModelDbContextSingleton() { }
private MyModelDbContextSingleton() { }
public static MyModelDbContext Instance
{
get
{
return instance;
}
}
}
Repository Class:
public class ProjectRepository : IProjectRepository
{
MyModelDbContext context = MyModelDbContextSingleton.Instance;
[...]
Other solutions do exist such as instantiating the context once and passing it into the constructors of your service/repository layers or another I read about which is implementing the Unit of Work pattern. I'm sure there are more...
A: Steps to reproduce can be simplified to this:
var contextOne = new EntityContext();
var contextTwo = new EntityContext();
var user = contextOne.Users.FirstOrDefault();
var group = new Group();
group.User = user;
contextTwo.Groups.Add(group);
contextTwo.SaveChanges();
Code without error:
var context = new EntityContext();
var user = context.Users.FirstOrDefault();
var group = new Group();
group.User = user; // Be careful when you set entity properties.
// Be sure that all objects came from the same context
context.Groups.Add(group);
context.SaveChanges();
Using only one EntityContext can solve this. Refer to other answers for other solutions.
A: In my case, I was using the ASP.NET Identity Framework. I had used the built in UserManager.FindByNameAsync method to retrieve an ApplicationUser entity. I then tried to reference this entity on a newly created entity on a different DbContext. This resulted in the exception you originally saw.
I solved this by creating a new ApplicationUser entity with only the Id from the UserManager method and referencing that new entity.
A: Because these two lines ...
EmployeeService es = new EmployeeService();
CityService cs = new CityService();
... don't take a parameter in the constructor, I guess that you create a context within the classes. When you load the city1...
Payroll.Entities.City city1 = cs.SelectCity(...);
...you attach city1 to the context in CityService. Later you add city1 as a reference to the new Employee e1 and add e1, including this reference to city1, to the context in EmployeeService. As a result you have city1 attached to two different contexts, which is what the exception complains about.
You can fix this by creating a context outside of the service classes and injecting and using it in both services:
EmployeeService es = new EmployeeService(context);
CityService cs = new CityService(context); // same context instance
Your service classes look a bit like repositories which are responsible for only a single entity type. In such a case you will always have trouble as soon as relationships between entities are involved when you use separate contexts for the services.
You can also create a single service which is responsible for a set of closely related entities, like an EmployeeCityService (which has a single context) and delegate the whole operation in your Button1_Click method to a method of this service.
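A minimal sketch of the shared-context idea, reusing the names from the question (the entity set and key property names inside the services are assumptions):
public class CityService
{
    private readonly Payroll_DAO1 _context;
    public CityService(Payroll_DAO1 context) { _context = context; }

    public Payroll.Entities.City SelectCity(long id)
    {
        return _context.City.Single(c => c.ID == id); // entity set and key names assumed
    }
}

public class EmployeeService
{
    private readonly Payroll_DAO1 _context;
    public EmployeeService(Payroll_DAO1 context) { _context = context; }

    public string AddEmpoyee(Payroll.Entities.Employee e1, Payroll.Entities.City c1)
    {
        e1.City = c1;                 // both entities now belong to _context
        _context.AddToEmployee(e1);
        _context.SaveChanges();
        return "SUCCESS";
    }
}

// In Button1_Click: one context instance shared by both services
using (var context = new Payroll_DAO1())
{
    var es = new EmployeeService(context);
    var cs = new CityService(context);
    var city1 = cs.SelectCity(Convert.ToInt64(cmbCity.SelectedItem.Value));
    // ... build e1 as before and call es.AddEmpoyee(e1, city1) ...
}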
A: I hit this same problem after implementing IoC for a project (ASP.Net MVC EF6.2).
Usually I would initialise a data context in the constructor of a controller and use the same context to initialise all my repositories.
However using IoC to instantiate the repositories caused them all to have separate contexts and I started getting this error.
For now I've gone back to just newing up the repositories with a common context while I think of a better way.
A: This is how I encountered this issue. First I need to save my Order which needs a reference to my ApplicationUser table:
ApplicationUser user = new ApplicationUser();
user = UserManager.FindById(User.Identity.GetUserId());
Order entOrder = new Order();
entOrder.ApplicationUser = user; //I need this user before saving to my database using EF
The problem is that I am initializing a new ApplicationDbContext to save my new Order entity:
ApplicationDbContext db = new ApplicationDbContext();
db.Entry(entOrder).State = EntityState.Added;
db.SaveChanges();
So in order to solve the problem, I used the same ApplicationDbContext instead of using the built-in UserManager of ASP.NET MVC.
Instead of this:
user = UserManager.FindById(User.Identity.GetUserId());
I used my existing ApplicationDbContext instance:
//db instance here is the same instance as my db on my code above.
user = db.Users.Find(User.Identity.GetUserId());
A: I had the same problem and solved it by making a new instance of the object that I was trying to update, then passing that object to my repository.
A: In this case, it turns out the error is very clear: Entity Framework cannot track an entity using multiple instances of IEntityChangeTracker or typically, multiple instances of DbContext. The solutions are: use one instance of DbContext; access all needed entities through a single repository (depending on one instance of DbContext); or turning off tracking for all entities accessed via a repository other than the one throwing this particular exception.
When following an inversion of control pattern in .Net Core Web API, I frequently find that I have controllers with dependencies such as:
private readonly IMyEntityRepository myEntityRepo; // depends on MyDbContext
private readonly IFooRepository fooRepo; // depends on MyDbContext
private readonly IBarRepository barRepo; // depends on MyDbContext
public MyController(
IMyEntityRepository myEntityRepo,
IFooRepository fooRepo,
IBarRepository barRepo)
{
this.fooRepo = fooRepo;
this.barRepo = barRepo;
this.myEntityRepo = myEntityRepo;
}
and usage like
...
myEntity.Foo = await this.fooRepository.GetFoos().SingleOrDefaultAsync(f => f.Id == model.FooId);
if (model.BarId.HasValue)
{
myEntity.Foo.Bar = await this.barRepository.GetBars().SingleOrDefaultAsync(b => b.Id == model.BarId.Value);
}
...
await this.myEntityRepo.UpdateAsync(myEntity); // this throws an error!
Since all three repositories depend on different DbContext instances per request, I have two options to avoid the problem and maintain separate repositories: change the injection of the DbContext to create a new instance only once per call:
// services.AddTransient<DbContext, MyDbContext>(); <- one instance per ctor. bad
services.AddScoped<DbContext, MyDbContext>(); // <- one instance per call. good!
or, if the child entity is being used in a read-only manner, turning off tracking on that instance:
myEntity.Foo.Bar = await this.barRepo.GetBars().AsNoTracking().SingleOrDefault(b => b.Id == model.BarId);
A: Use the same DBContext object throughout the transaction.
A: For my scenario we have a solution with several applications referencing the same context. I had to update the unity.config file adding lifetime type to the context.
<lifetime type="PerResolveLifetimeManager" />
A: Error source:
ApplicationUser user = await UserManager.FindByIdAsync(User.Identity.Name);
ApplicationDbContext db = new ApplicationDbContext();
db.Users.Uploads.Add(new MyUpload{FileName="newfile.png"});
await db.SaveChangesAsync();
Hope someone saves some precious time
| |
doc_209
|
note: I am using rabbitmq as the messaging broker
code snippets below
1.Microservices
photaApiUsers [spring cloud version:2021.0.2, using dependency management for other spring modules],
PhotoAppApiAccountMangement [spring cloud version:2021.0.2, using dependency management for other spring modules]
2.Gateway (ApiGateway)[spring cloud version:2021.0.3-SNAPSHOT, using dependency management for other spring modules]
3.PhotoAppApiConfigServer [spring cloud version:2021.0.2, using dependency management for other spring modules]
*
*downloads the application.properties file from GitHub on startup, which is used by (1) Microservices and (2) Gateway
4.PhotoAppDiscoveryService [spring cloud version:2021.0.2, using dependency management for other spring modules]
After I invoke the busrefresh API (http://localhost:8012/actuator/busrefresh) in Postman, I noticed that the token.secret property in photaApiUsers is updated but not in ApiGateway.
Code snippet from UserController where token.secret is properly updated after busrefresh: at line 34, env.getProperty("token.secret") has the updated value.
Code snippet from the AuthorizationHeaderFilter method (isJwtValid) entry point, inside which I expect token.secret to be updated after busrefresh: line 49.
Code snippet from AuthorizationHeaderFilter where token.secret is not updated after busrefresh: at line 71, env.getProperty("token.secret") has the old value.
| |
doc_210
|
body {
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
background: white;
}
When I delete this it works, but I need it for responsiveness; otherwise my nav menu ends up beside this, etc. Tell me if you need more.
Sending some more here. I'm trying to get 4 cards beside each other, but I only get 3 now and 1 below, probably because of the width or something. Do you see anything here I could adjust?
.container {
position: relative;
display: flex;
justify-content: center;
align-items: center;
width: 1500px;
flex-wrap: wrap;
}
.container .card {
position: relative;
width: 300px;
padding: 20px;
margin: 20px;
background: #3f3f3f;
border-radius: 8px;
box-shadow: 10px 5px 5px gray;
}
A: Well, don't apply flex to body, but only to the element (for example "main" or similar) where you really need it, otherwise you get the effect described, i.e. every element in the body becomes a flex item.
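For example, the same centering rules could be moved onto a dedicated wrapper (here assumed to be a <main> element around the cards) instead of body:
main {
  display: flex;
  justify-content: center;
  align-items: center;
  min-height: 100vh;
}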
| |
doc_211
|
But with Spring Boot Actuator,
I have another MBean named 'Tomcat-1' which looks like MBean named 'Tomcat'.
Did I mis-configure something? or a bug of the Actuator? or an intended feature of it?
A: 'Tomcat-1' is the Tomcat instance used for management.
I set the 'management.port' property to serve it on another port.
I hadn't read my log messages carefully; the answer was there.
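For reference, the property in question looks like this in application.properties (the port value here is just an example):
management.port=8081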
| |
doc_212
|
Here is my Iframe:
<iframe src="https://bluemap.aternix.com/" title="Bluemap" id="bluemapframe"></iframe>
Here is my live site to demonstrate the current Bluemap Iframe:
https://aternix.com/mc-server
I have tried overflow-behavior: contain and numerous JS scroll- and focus-targeting scripts to prevent the parent document from scrolling when the mouse is over the frame, but none of these have worked. I was hoping some simple CSS targeting of the iframe, or even of elements within the iframe (which I have access to, as it's self hosted), would do the trick, but nothing has worked thus far. I currently run vanilla JS.
| |
doc_213
|
Any workaround for IE 9?
main.html
<textarea rows="2" class="form-control" id="name"
ng-model="processDTO.processLongName"
placeholder="Business Process Name" maxlength="1024" name="processName"
required
data-tooltip-html-unsafe="<div>{{1024 - processDTO.processLongName.length}} characters left</div>"
tooltip-trigger="{{{true: 'focus', false: 'never'}[processDTO.processLongName.length >= 0 || processDTO.processLongName.length == null ]}}"
tooltip-placement="top" tooltip-class="bluefill">
</textarea>
| |
doc_214
|
Here is my code:
<link rel="import" href="../bower_components/polymer/polymer-element.html">
<link rel="import" href="../bower_components/paper-input/paper-input.html">
<link rel="import" href="../bower_components/paper-button/paper-button.html">
<link rel="import" href="../bower_components/iron-form/iron-form.html">
<link rel="import" href="../bower_components/iron-ajax/iron-ajax.html">
<link rel="import" href="../bower_components/vaadin-combo-box/vaadin-combo-box.html">
<link rel="import" href="shared-styles.html">
<dom-module id="my-view2">
<template>
<style include="shared-styles">
:host {
display: block;
padding: 10px;
}
</style>
<div class="card">
<iron-ajax
id="ajax"
url="http://localhost/data.php"
params=''
handle-as="text"
on-response="hresponse"
debounce-duration="300"></iron-ajax>
<paper-input label="Name" id="thename" name="thename"></paper-input>
<iron-ajax url="https://api.myjson.com/bins/1b0f7h" last-response="{{response}}" auto></iron-ajax>
<vaadin-combo-box name="fromm" id="fromm" items="[[response]]" item-value-path="fromm" label="fromm" item-label-path="countryName">
<template>
<paper-item-body two-line>
<div>[[item.countryName]]</div>
</paper-item-body>
</template>
</vaadin-combo-box>
<button on-click="setajax">Click me</button>
</div>
</template>
<script>
Polymer({
is: "my-view2",
setajax: function () {
alert(this.$.thename.value);
// alert(this.$.fromm.value); not working - returns Object object
// this.$.ajax.params = {thename:this.$.thename.value, fromm:this.$.fromm.value};
// this.$.ajax.generateRequest();
},
hresponse: function(request) {
console.log(request.detail.response);
console.log(this.$.ajax.lastResponse);
}
});
</script>
</dom-module>
I can alert Name's value with alert(this.$.thename.value), but when I try to do the same thing with fromm's value, alert(this.$.fromm.value) always returns [object Object].
A: Your item-value-path="fromm" in vaadin-combo-box does not correspond to a property of the items bound to the combo box, so its value ends up being the whole item object from the response. That is why it is showing you [object Object].
Change the value of item-value-path to the property whose value you actually want.
For example if your object is like
response:[{name:"stackoverflow", countryName:"Australia"}]
put item-value-path="countryName" if you want to display the value of countryName.
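Applied to the markup in the question, the combo box would then look roughly like this:
<vaadin-combo-box name="fromm" id="fromm" items="[[response]]"
                  item-value-path="countryName" item-label-path="countryName"
                  label="fromm">
</vaadin-combo-box>
With that, this.$.fromm.value returns the selected countryName string instead of the whole item object.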
| |
doc_215
|
Bash file:
filename=./data_full.csv
while read line || [[ -n "$line" ]]; do
echo downloading $line
wget -e robots=off -m -np -R .html,.tmp -nH --cut-dirs=3 "$line" --header "Authorization: Bearer **AUTHENICATION KEY**" -P /Downloads/2015/Monthly/April
done < "$filename"
Where the file data_full.csv is a list of the URLs I need to download.
However, when I run this bash script now it says:
--2020-04-18 11:25:23-- https://ladsweb.modaps.eosdis.nasa.gov/archive/allData/5000/VNP46A1/2015/099/VNP46A1.A2015099.h30v05.001.2019143192520.h5
Resolving ladsweb.modaps.eosdis.nasa.gov (ladsweb.modaps.eosdis.nasa.gov)... 2001:4d0:241a:40c0::40, 198.118.194.40
Connecting to ladsweb.modaps.eosdis.nasa.gov (ladsweb.modaps.eosdis.nasa.gov)|2001:4d0:241a:40c0::40|:443... connected.
HTTP request sent, awaiting response... 304 Not Modified
File ‘/Downloads/2015/Monthly/April/VNP46A1/2015/099/VNP46A1.A2015099.h30v05.001.2019143192520.h5’ not modified on server. Omitting download.
Even though the file has been deleted.
How do I make sure that wget downloads all the files in the .csv?
I have used a different authentication code but that also did not work.
| |
doc_216
|
1 | 1999-04-01 | 0000-00-00 | 0000-00-00 | 0000-00-00 | 2008-12-01 |
2 | 1999-04-06 | 2000-04-01 | 0000-00-00 | 0000-00-00 | 2010-04-03 |
3 | 1999-01-09 | 0000-00-00 | 0000-00-00 | 0000-00-00 | 2007-09-03 |
4 | 1999-01-01 | 0000-00-00 | 1997-01-01 | 0000-00-00 | 2002-01-04 |
Is there a way, to select the earliest date from the predefined list of DATE fields using a straightforward SQL command?
So the expected output would be:
1 | 1999-04-01
2 | 1999-04-06
3 | 1998-01-09
4 | 1997-01-01
I am guessing this is not possible but I wanted to ask and make sure. My current solution in mind involves putting all the dates in a temporary table and then using that to get the MIN()
thanks
Edit: The problem with using LEAST() as stated is that the new behaviour is to return NULL if any of the columns is NULL. In a series of dates like the dataset in question, any date might be NULL. I would like to obtain the earliest actual date from the set of dates.
SOLUTION: Used a combination of LEAST() and IF() in order to filter out NULL dates.
SELECT LEAST( IF(date1=0,NOW(),date1), IF(date2=0,NOW(),date2), [...] );
Lessons learnt: a) COALESCE does not treat '0000-00-00' as a NULL date; b) LEAST will return '0000-00-00' as the smallest value - I would guess this is due to internal integer comparison(?)
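Written out for the sample table above (assuming the date columns are named date1 through date5 and the table is called mytable):
SELECT id,
       LEAST(IF(date1 = 0, NOW(), date1),
             IF(date2 = 0, NOW(), date2),
             IF(date3 = 0, NOW(), date3),
             IF(date4 = 0, NOW(), date4),
             IF(date5 = 0, NOW(), date5)) AS earliest_date
FROM mytable;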
A: select id, least(date_col_a, date_col_b, date_col_c) from table
upd
select id, least (
case when date_col_a = '0000-00-00' then now() + interval 100 year else date_col_a end,
case when date_col_b = '0000-00-00' then now() + interval 100 year else date_col_b end) from table
A: Actually you can do it like below, or using a large case structure... or with least(date1, date2, dateN), but with that NULL could be the minimum value...
select rowid, min(date)
from
( select rowid, date1 from table
union all
select rowid, date2 from table
union all
select rowid, date3 from table
/* and so on */
)
group by rowid;
HTH
A: select
id,
least(coalesce(date1, '9999-12-31'), ....)
from
table
| |
doc_217
|
I am trying to connect to MongoLab with MongoEngine for my Django project. The authentication works, the model is defined and I do not get any error or warning. However, Mymodel.objects returns an empty QuerySet in the view. Is there any known issue about that? I have tried many settings, however the result is the same.
I would be grateful if anybody could point me to a solution or where to look.
Ali
| |
doc_218
|
| Value | Class | Type |
|---|---|---|
| "foo" | String | string |
| new String("foo") | String | object |
| 1.2 | Number | number |
| new Number(1.2) | Number | object |
| true | Boolean | boolean |
| new Boolean(true) | Boolean | object |
| new Date() | Date | object |
| new Error() | Error | object |
| [1,2,3] | Array | object |
| new Array(1, 2, 3) | Array | object |
| new Function("") | Function | function |
| /abc/g | RegExp | object (function in Nitro/V8) |
| new RegExp("meow") | RegExp | object (function in Nitro/V8) |
| {} | Object | object |
| new Object() | Object | object |
One thing to note here is that typeof correctly returns the primitive data types in JavaScript.
However, it returns "object" for an array, which inherits from Array.prototype, yet returns "function" for a function, which inherits from Function.prototype.
Given everything is an object in Javascript (including arrays, functions & primitive data type objects), I find this behaviour of the typeof operator very inconsistent.
Can someone throw some light on how the typeof operator works in reality?
A: This is slightly odd, idiosyncratic Javascript behaviour. It's inherited from the earliest days of Javascript and probably would not be written in such a way today.
Nonetheless, we are where we are with Javascript, so we have to deal with it!
The thing is that values in Javascript are either objects or they are primitives. This is a design decision. They cannot be anything else. The types of primitives are:
*
*strings
*numbers
*booleans
*symbols (from ES2015)
*the special value undefined
*the special value null (for which typeof also returns object)
Anything and everything else is an object. This is by design. Arrays are objects, so typeof returns object, as does every other object that is not callable (i.e. a function). See the spec for typeof.
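A few quick checks illustrating the points above:
console.log(typeof [1, 2, 3]);      // "object"   (arrays are objects)
console.log(typeof function () {}); // "function" (callable objects)
console.log(typeof null);           // "object"   (the special case noted above)
console.log(typeof "foo");          // "string"   (primitive)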
The better question is to ask why you want to test if something is an array. You probably don't need to, especially as array-like objects such as NodeLists are not arrays but are like them in many ways.
The best solution in most cases is to call Array.from on the value supplied: then you know that it is an array.
A: Use the typeof operator in JavaScript.
typeof is an operator which returns a string description of the type of a variable.
Example
console.log(typeof 42);
output: "number"
console.log(typeof true);
output: "boolean"
| |
doc_219
|
The original codepen (see: Codepen) shows a few ways to open a modal window using animation. I'm using the 'UNFOLDING' animation.
The Codepen uses the following jQuery:
$('.button').click(function(){
var buttonId = $(this).attr('id');
$('#modal-container').removeAttr('class').addClass(buttonId);
$('body').addClass('modal-active');
})
$('#modal-container').click(function(){
$(this).addClass('out');
$('body').removeClass('modal-active');
});
I'm trying to rewrite this as JavaScript.
I've got the modal to open with the following JavaScript:
let button = document.getElementById('start');
let body = document.body;
button.addEventListener('click', () => {
document.getElementById('modal-container').classList.add('one');
body.classList.add("modal-active");
});
BUT, I can't get it to close!
I tried the following but it doesn't work properly (compared to original Codepen):
let button2 = document.getElementById('modal-container');
button2.addEventListener('click', () => {
document.getElementById('modal-container').classList.add('out');
document.getElementById('modal-container').classList.remove('one');
body.classList.remove("modal-active");
});
Hoping someone can show me where I've gone wrong.
Thanks.
A: Maybe try not removing the "one" class when you click button2:
let button2 = document.getElementById('modal-container');
button2.addEventListener('click', () => {
document.getElementById('modal-container').classList.add('out');
body.classList.remove("modal-active");
});
Because when I inspect the code in the Codepen, the div with the "modal-container" id still has the "one" class on it when the modal is closed.
A: Thanks to Indana Rishi, I arrived at the following code, which, while probably not all that elegant, certainly works:
let button = document.getElementById('start');
let body = document.body;
button.addEventListener('click', () => {
document.getElementById('modal-container').classList.remove('one');
document.getElementById('modal-container').classList.remove('out');
document.getElementById('modal-container').classList.add('one');
body.classList.add("modal-active");
});
let modcon = document.getElementById('modal-container');
modcon.addEventListener('click', () => {
document.getElementById('modal-container').classList.add('out');
body.classList.remove("modal-active");
});
| |
doc_220
|
#directory where all data will be stored
dataDir="C:/Users/me/Desktop/Data/"
Files=[] #list of files
for file in os.listdir(dataDir):
Files.append(scipy.io.loadmat(dataDir+file))
But now, I'm trying to have the user select the folder so I have this:
import tkinter
from tkinter import filedialog
from tkinter import *
root=tkinter.Tk()
filename=filedialog.askdirectory(parent=root,title='Choose a file')
print (filename)
#directory where all data will be stored
dataDir=('%s',filename)
Files=[] #list of files
for file in os.listdir(dataDir):
Files.append(scipy.io.loadmat(dataDir+file))
and it is giving me this error:
"for file in os.listdir(dataDir):
TypeError: listdir: path should be string, bytes, os.PathLike or None, not tuple)
I tried making filename into a string by doing str(filename), and it still wouldn't work. Any ideas?
A: When you define dataDir = ('%s', filename) you are creating a tuple with two elements: one is '%s' and the other is the value of filename.
If I understand correctly you should use dataDir = '%s' % filename. That way dataDir will be a string with the value of filename.
A: You create a tuple with the command
dataDir=('%s',filename)
and you use it in listdir(dataDir), which expects a string.
Use filename directly in listdir:
for file in os.listdir(filename):
A: The error states that the path you give listdir should be a str and that you gave it a tuple.
With dataDir=('%s',filename), dataDir is a tuple containing two strings. However, filename is already a str. Instead of os.listdir(dataDir), try os.listdir(filename).
You will need to import os.
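Putting the suggestions together, the loading code from the question could look like this (using os.path.join rather than string concatenation):
import os
import tkinter
from tkinter import filedialog
import scipy.io

root = tkinter.Tk()
# askdirectory already returns the chosen path as a plain string
dataDir = filedialog.askdirectory(parent=root, title='Choose a file')

Files = []  # list of loaded .mat files
for file in os.listdir(dataDir):
    Files.append(scipy.io.loadmat(os.path.join(dataDir, file)))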
| |
doc_221
|
color1 = 30, 40, 50, 60
color2 = 50, 60, 70, 80
If they were to be printed what values would the resulting color have?
color_new = min(cyan1 + cyan2, 100), min(magenta1 + magenta2, 100), min(yellow1 + yellow2, 100), min(black1 + black2, 100)?
Suppose there is a color defined in CMYK:
color = 40, 30, 30, 100
It is possible to print a color at partial intensity, i.e. as a tint. What values would have a 50% tint of that color?
color_new = cyan / 2, magenta / 2, yellow / 2, black / 2?
I'm asking this to better understand the "tintTransform" function in PDF Reference 1.7, 4.5.5 Special Color Spaces, DeviceN Color Spaces
Update:
To better clarify: I'm not entirely concerned with human perception or how the CMYK dyies react to the paper. If someone specifies 90% tint which, when printed, looks like full intensity colorant, that's ok.
In other words, if I asking how to compute 50% of cmyk(40, 30, 30, 100) I'm asking how to compute the new values, regardless of whether the result looks half-dark or not.
Update 2:
I'm confused now. I checked this in InDesign and Acrobat. For example Pantone 3005 has CMYK 100, 34, 0, 2, and its 25% tint has CMYK 25, 8.5, 0, 0.5.
Does it mean I can "monkey around in a linear way"?
A: No, in general you can't monkey around in a linear way with arbitrary color spaces and hope to see a result that corresponds to human perception. The general strategy is to convert from your color space into CIE Lab or Luv color space, do the transformation, then go back to your color space (which is lossy).
Excellent FAQ: http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html
A: Supposing C,M,Y,K are the % of ink to print. Adding two colors would be:
C = min(100,C1+C2)
M = min(100,M1+M2)
Y = min(100,Y1+Y2)
K = min(100,K1+K2)
And then they must be normalized, because usually no more than three inks are printed, and an equal amount of C,M,Y is replaced by the same amount of black.
G = min(C,M,Y) // Black amount produced by C,M,Y
C -= G
M -= G
Y -= G
K = min(100,K+G)
At this point, you may want to limit C+M+Y+K to some value like 240. You can also try G=min(C,M,Y,100-K).
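As a small Python sketch of the addition and normalisation steps described above, using the two colours from the question (values on a 0-100 scale):
def add_cmyk(c1, c2):
    # clamp the per-channel sums to 100%
    c, m, y, k = (min(100, a + b) for a, b in zip(c1, c2))
    # grey component replacement: move the common CMY amount into black
    g = min(c, m, y)
    c, m, y = c - g, m - g, y - g
    k = min(100, k + g)
    return (c, m, y, k)

print(add_cmyk((30, 40, 50, 60), (50, 60, 70, 80)))  # -> (0, 20, 20, 100)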
A: If you're just doing tints of colours then a straight multiplication will be fine - this ensures that the inks will all be in the same ratios.
You can see this by bringing up the Colors panel in InDesign, and holding down shift and dragging one of the colour sliders. The other sliders will move proportionally.
Adding two colours has the same effect as overprinting (where one colour is printed directly over another colour). So if 100% magenta and 100% cyan were printed, and then 100% black were printed on top, the result would be exactly the same as 100% magenta, 100% cyan and 100% black.
A: To answer your first question:
color_new = min(cyan1 + cyan2, 100),
min(magenta1 + magenta2, 100),
min(yellow1 + yellow2, 100),
min(black1 + black2, 100)
If this had been RGB values it would result in saturated colours (i.e. colours at or near white). The converse will be true for the CMY part of the CMYK colour - they will tend to black (well practically dark brown). The addition of black means that you get pure black (thanks Skilldrick). After all if you have 100% black and any combination of CMY the result will be black.
Re your second update I would expect that the results you obtained from Acrobat would apply universally.
A: Using Photoshop's multiply blend as a guide in CMYK mode I came to the following conclusion:
mix = colour1 + colour2 - colour1 * colour2
So 50% magenta blended with 50% magenta would equate to
50% + 50% - 50% * 50%
100% - 25%
75% magenta
100% black and 100% black would come to (100% + 100% - 100% * 100%) = 100%
It's seems to at least tally with the tests I did in Photoshop and multiply blends. Whether that is right for print, I can't say.
| |
doc_222
|
I have a layout file:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context="com.hust.thanhtv.nlp.MainActivity">
<FrameLayout
android:id="@+id/content"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="1">
<fragment
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/fragMain"
class="layout.ListStory">
</fragment>
</FrameLayout>
<android.support.design.widget.BottomNavigationView
android:id="@+id/navigation"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="bottom"
android:background="?android:attr/windowBackground"
app:menu="@menu/navigation" />
</LinearLayout>
and java file:
private void init(){
databaseHelper = new DatabaseHelper(this);
allStories = databaseHelper.getListStory();
FragmentManager fm = getSupportFragmentManager();
Fragment frchapter = ListChapter.newInstance(allStories);
FragmentTransaction transaction = fm.beginTransaction();
transaction.replace(R.id.fragMain, frchapter);
transaction.commit();
}
private BottomNavigationView.OnNavigationItemSelectedListener mOnNavigationItemSelectedListener
= new BottomNavigationView.OnNavigationItemSelectedListener() {
@Override
public boolean onNavigationItemSelected(@NonNull MenuItem item) {
switch (item.getItemId()) {
case R.id.navigation_home:
return true;
case R.id.navigation_dashboard:
return true;
case R.id.navigation_notifications:
return true;
}
return false;
}
};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
BottomNavigationView navigation = (BottomNavigationView) findViewById(R.id.navigation);
navigation.setOnNavigationItemSelectedListener(mOnNavigationItemSelectedListener);
init();
}
}
You can see that to send allStories to the fragment, I have to create a new instance of ListChapter with this line of code:
Fragment frchapter = ListChapter.newInstance(allStories);
(the old instance was created when I called
setContentView(R.layout.activity_main);
right?) Then I replace the old instance with this new one using this block:
FragmentTransaction transaction = fm.beginTransaction();
transaction.replace(R.id.fragMain, frchapter);
transaction.commit();
I think it is unnecessary to recreate the instance of ListChapter just to do that, but I don't know if there is another way (I mean I want to send data directly to the old instance and update some elements there). Is there a way to do that?
A: I would use the Parcelable interface to send data between fragments and activities. I also use SharedPreferences to link data between such components. You can manipulate LocalBroadcastManager to your liking to broadcast and listen to events.
A great library that effectively and efficiently does the aforementioned is EventBus.
One of the primary issues of these Parcelable and Bundle approaches is that they can create strong dependencies between each component, making it difficult to change one part of the system without impacting another area.
Publish/subscribe models try to avoid this tight integration by relying on an event bus model. In this type of model, there are publishers and subscribers. Publishers are responsible for posting events in response to some type of state change, while subscribers respond to these events.
The event acts as an intermediary for exchanging information, isolating and minimizing the dependencies between each side. In this way, message buses create a communication pipeline that is intended to help make your app more maintainable and scalable.
Android optimized event bus that simplifies communication between Activities, Fragments, Threads, Services, etc. Less code, better quality. Link.
To clear up your initial question: you want to pass data to the fragment without recreating it (replacing it in the main layout) every time. You have options. The easiest way is to create a public static method that sets the fragment's content, thus giving any class a way to provide data to and access data from the fragment. Or, like I said previously, you can create a shared preference and treat it as a central data storage pool for any class requesting access. Or, if you actually read about the event bus, you could treat the initiating class as a publisher that sends data (e.g. on starting the fragment), and your fragment as a subscriber that listens for and captures that data. This way is much cleaner because there is no strict dependency between the components. Still not convinced?
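For illustration, a minimal sketch of that publish/subscribe idea with greenrobot's EventBus (the event class and handler names are made up for this example):
// A simple event carrying the data
public class StoriesLoadedEvent {
    public final List<Story> stories;
    public StoriesLoadedEvent(List<Story> stories) { this.stories = stories; }
}

// Publisher side, e.g. in the activity's init():
EventBus.getDefault().post(new StoriesLoadedEvent(allStories));

// Subscriber side, inside the already-created fragment:
@Override
public void onStart() {
    super.onStart();
    EventBus.getDefault().register(this);
}

@Override
public void onStop() {
    EventBus.getDefault().unregister(this);
    super.onStop();
}

@Subscribe(threadMode = ThreadMode.MAIN)
public void onStoriesLoaded(StoriesLoadedEvent event) {
    // update the fragment's views from event.stories
}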
| |
doc_223
|
In the CSV file my Date field is DD-MM-YYYY HH:MI and this gives me an error:
psycopg2.errors.DatetimeFieldOverflow: date/time field value out of
range: "31-12-2020 08:09"
Is there a way I can define the Date/Time format when using COPY FROM?
DB column is type TIMESTAMP if relevant.
I only know how to do this with a line-by-line INSERT statement.
A: Just before the COPY command do:
set datestyle = euro;
show datestyle;
DateStyle
-----------
ISO, DMY
Then this works:
SELECT '31-12-2020 08:09'::timestamp;
timestamp
---------------------
2020-12-31 08:09:00
Otherwise with my default datestyle:
show datestyle;
DateStyle
-----------
ISO, MDY
SELECT '31-12-2020 08:09'::timestamp;
ERROR: date/time field value out of range: "31-12-2020 08:09"
LINE 1: SELECT '31-12-2020 08:09'::timestamp;
For more information see here Date input Table 8.15. Date Order Conventions
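Putting it together, a sketch of a load session (table, column and file names are made up; the SET must run in the same session/connection as the COPY):
SET datestyle = euro;

COPY my_table (event_time, other_col)
FROM '/path/to/data.csv'
WITH (FORMAT csv, HEADER true);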
| |
doc_224
|
We are using mod_proxy for communication between the web server and the JBoss application server.
We are planning to disable TLSv1.0 in the mod_proxy configuration. After disabling TLSv1.0, the mod_proxy SSL configuration looks like this:
SSLProxyEngine on
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
SSLCipherSuite ALL:HIGH:MEDIUM:!aNULL:!MD5
SSLCertificateFile /app/certificates/star_rpmi.cer
SSLCertificateKeyFile /app/certificates/rpmi.key
SSLProxyMachineCertificateFile /app/jboss.keystore
SSLProxyProtocol all -SSLv2 -SSLv3 -TLSv1
However, once we disable TLSv1.0, the communication between mod_proxy and the JBoss application server fails.
In Apache's debug logs, we have found that the cipher suite being negotiated is TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384.
This cipher suite has been added to the JBoss application server configuration file as well. However, the connection is still failing.
What could be the reason behind this?
| |
doc_225
|
Something like count if(something) the column A is not empty.
A: It sounds like you want to count all rows. If so, use: COUNT(*) or COUNT(1).
Otherwise, use COALESCE() or CASE. If "empty" means NULL, then:
COUNT(COALESCE(a, b))
If "empty" means something else, then something like:
COUNT(CASE WHEN a <> '' THEN a ELSE b END)
A: You need to add that condition in the WHERE clause: columnA is not null and columnA <> ''.
select count(something_column) from your_table where columnA is not null and columnA <> ''
A: The Tableau calculation is ifnull([First Field], [Second Field])
| |
doc_226
|
Suggestions? Here's the log:
***startup*******
12-Oct-2013 15:21:20.848 INFO [main] org.apache.catalina.core.AprLifecycleListener.init Loaded APR based Apache Tomcat Native library 1.1.28 using APR version 1.3.9.
12-Oct-2013 15:21:20.855 INFO [main] org.apache.catalina.core.AprLifecycleListener.init APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
12-Oct-2013 15:21:21.041 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized (OpenSSL 1.0.0-fips 29 Mar 2010)
12-Oct-2013 15:21:21.187 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-apr-8080"]
12-Oct-2013 15:21:21.201 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-apr-8009"]
12-Oct-2013 15:21:21.204 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 790 ms
12-Oct-2013 15:21:21.233 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
12-Oct-2013 15:21:21.234 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.0-RC3
12-Oct-2013 15:21:21.243 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /home/openemm-2013/webapps/openemm
12-Oct-2013 15:21:27.879 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /home/openemm-2013/webapps/openemm-ws
12-Oct-2013 15:21:33.911 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /home/openemm-2013/webapps/manual
12-Oct-2013 15:21:34.507 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
12-Oct-2013 15:21:34.514 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-apr-8080"]
12-Oct-2013 15:21:34.519 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-apr-8009"]
12-Oct-2013 15:21:34.521 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 13315 ms
A: You are trying to access the ROOT web application but you do not have a ROOT web application deployed. From your logs, you only have /openemm, /openemm-ws and /manual
Looking at the latest OpenEMM download (2013-R2) it contains a server.xml file that configures different context paths for some of those web applications. There are a couple of ways to deal with this. I recommend the following renames in your $CATALINA_BASE/webapps directory. Make sure you stop Tomcat before you do the renames.
* openemm -> ROOT
* openemm-ws -> openemm-ws2
| |
doc_227
|
['Time : tap/stap_tap2gpsb/SBMSGRSP/status/bit0: 19359560-19359561 step 1', 'Expect : tap/stap_tap2gpsb/SBMSGRSP/status/bit0: XX', 'Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit0: 00', 'Time : tap/stap_tap2gpsb/SBMSGRSP/status/bit1: 19359560-19359561 step 1', 'Expect : tap/stap_tap2gpsb/SBMSGRSP/status/bit1: XX', 'Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit1: 00', '']
and I want to grab certain word from the line which is :
Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit0: 00
Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit1: 00
I'm using the re.search function to match these lines and I'm getting this:
searchObj.group() = Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit0:0
searchObj.group(1) = 0
searchObj.group(2) = 0
status[0] == 0
searchObj.group() = Acquired : tap/stap_tap2gpsb/SBMSGRSP/status/bit1:0
searchObj.group(1) = 1
searchObj.group(2) = 0
status[1] == 0
How can I append the first match and the second match together? What I want is for status[0] and status[1] to give 1 as a passing value; otherwise these values get thrown into the failed values.
Below are my codes :
for line in lines:
    searchObj = re.search(r'^Acquired\s+:tap/stap_tap2gpsb/SBMSGRSP/status/bit(\d): (\d)', str(line))
    if searchObj:
        print "searchObj.group() = ", searchObj.group()
        print "searchObj.group(1) = ", searchObj.group(1)
        print "searchObj.group(2) = ", searchObj.group(2)
        print "status[" + searchObj.group(1) + "] == " + searchObj.group(2)
A: You can easily collect the matches into whatever data structures make sense to you. For example:
import re

match_lines = []
tap_tuples = []
for line in lines:
    # note: \s* after the colons lets the pattern match lines like "Acquired : tap/.../bit0: 00"
    searchObj = re.search(r'^Acquired\s+:\s*tap/stap_tap2gpsb/SBMSGRSP/status/bit(\d):\s*(\d)', str(line))
    if searchObj:
        match_lines.append(line)
        tap_tuples.append((searchObj.group(1), searchObj.group(2)))
print('\n'.join(match_lines))
print(';'.join('{}:{}'.format(bit, val) for bit, val in tap_tuples))
By the by, if you are fetching these lines from a text file, you might want to process them one by one at the same time:
with open('file.txt') as handle:
    for line in handle:
        ...
If this is inside a function, maybe yield a result each time you find a match if you want the calling code to process them one by one. Each subsequent iteration will then yield the next match from the still-open file handle until the input file is consumed.
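A minimal sketch of that generator idea, assuming the data comes from a file named file.txt and allowing optional spaces after the colons (as in the sample lines above):
import re

def acquired_bits(path):
    # Yield (bit, value) string pairs one at a time as matches are found.
    pattern = re.compile(r'Acquired\s+:\s*tap/stap_tap2gpsb/SBMSGRSP/status/bit(\d):\s*(\d)')
    with open(path) as handle:
        for line in handle:
            match = pattern.search(line)
            if match:
                yield match.group(1), match.group(2)

for bit, value in acquired_bits('file.txt'):
    print(bit, value)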
| |
doc_228
|
My SharedPreferences: SharedPreferences prefs = getPreferences(Context.MODE_PRIVATE);
Opening the Dialog and passing the TextView's ID:
fm = getFragmentManager();
myFragment = new Fragment_Subject_Edit();
FirstButton.setOnLongClickListener(new View.OnLongClickListener() {
@Override
public boolean onLongClick(View v) {
Bundle data = new Bundle();
data.putInt("ID", FirstText.getId());
myFragment.setArguments(data);
myFragment.show(fm, "ClassEditor");
return false;
}
});
Receive the ID in the Dialog Fragment:
if (getArguments() != null) {
Bundle data = getArguments();
int id = data.getInt("ID");
mTextView = (TextView) getActivity().findViewById(id);
}
Adding the new text to the TextView below the button:
DoneButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
String newText = mEditText.getText().toString();
mTextView.setText(newText);
getDialog().dismiss();
}
});
The question here is really just how to update the Shared Preferences
A: You simply use a SharedPreferences.Editor to save your new data to SharedPreferences.
So in this case your code will look like this:
DoneButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
String newText = mEditText.getText().toString();
mTextView.setText(newText);
SharedPreferences.Editor editor = getActivity().getPreferences(Context.MODE_PRIVATE).edit();
editor.putString("key", "your_value");
editor.commit();
getDialog().dismiss();
}
});
| |
doc_229
|
$res = $db->query($sql);
while($rs = $res->fetch_assoc())
{
echo $rs['field'];
}
I found out that I could use mysqli_data_seek for setting the internal result pointer, so I could change the loop into the following:
$res = $db->query($sql);
$records = $res->num_rows;
for ($i = 0; $i <= $records-1; $i++)
{
mysqli_data_seek($res,$i); // set result pointer
$rs = mysqli_fetch_assoc($res);
echo $rs['field'];
}
I benchmarked both ways and couldn't see any difference but I was wondering - are there any drawbacks using the second method?
Thanks
A:
PHP - is it recommended to use mysqli_data_seek within a loop
of course not.
are there any drawbacks using the second method?
Sure. It takes roughly twice as much code as the first one.
| |
doc_230
|
I have put the terminal output here.
Thanks.
A: If you were on 10.7, I'd ask "do you have the latest Xcode installed?", because I got no errors just now.
But instead I will ask: did you try/consider easy_install or pip install?
A: You're using an alpha release of Python 2.7.0, an alpha release for which setuptools, which is used by lxml's setup.py, doesn't work. Setuptools relies on an implementation detail of distutils.sysconfig (the _config_vars attribute) that was changed in the early Python 2.7 alpha releases, and reverted later in the release process (to un-break setuptools, in all likelihood.)
It's a good idea to install alpha releases to test if your own software works in the newer Python version, but you should not keep using them after newer versions are released, and certainly not after the final release has been made. In this case, two more patch releases were made later; you should really install those instead.
| |
doc_231
|
When performing a check-in, a build definition executes that deploys and runs the tests. The deploy completes correctly, but the tests throw the following error:
An error occurred while SQL Server unit testing settings were being read from the configuration file. Click the test project, open the
SQL Server Test Configuration dialog box from the SQL menu, add the
settings to the dialog box, and rebuild the project.
app.config:
<configuration>
<configSections>
<section name="SqlUnitTesting" type="Microsoft.Data.Tools.Schema.Sql.UnitTesting.Configuration.SqlUnitTestingSection, Microsoft.Data.Tools.Schema.Sql.UnitTesting, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
</configSections>
<SqlUnitTesting AllowConfigurationOverride="true">
<DatabaseDeployment DatabaseProjectFileName="[RELATIVEPATHLOCAL]"
Configuration="Release" />
<DataGeneration ClearDatabase="true" />
<ExecutionContext Provider="System.Data.SqlClient" ConnectionString="Data Source=[LOCALSERVER];Initial Catalog=[DATABASE];Integrated Security=True;Pooling=False"
CommandTimeout="30" />
<PrivilegedContext Provider="System.Data.SqlClient" ConnectionString="Data Source=[LOCALSERVER];Initial Catalog=[DATABASE];Integrated Security=True;Pooling=False"
CommandTimeout="30" />
</SqlUnitTesting>
</configuration>
[tfsbuildserver].sqlunittesting.config:
<SqlUnitTesting>
<DatabaseDeployment DatabaseProjectFileName="[RELATIVEPATHTFS]"
Configuration="Release" />
<DataGeneration ClearDatabase="true" />
<ExecutionContext Provider="System.Data.SqlClient" ConnectionString="Data Source=[SERVERTEST];Initial Catalog=[DATABASETEST];Persist Security Info=True;User ID=[USER];Password=[PASS];Pooling=False"
CommandTimeout="30" />
<PrivilegedContext Provider="System.Data.SqlClient" ConnectionString="Data Source=[SERVERTEST];Initial Catalog=[DATABASETEST];Persist Security Info=True;User ID=[USER];Password=[PASS];Pooling=False"
CommandTimeout="30" />
</SqlUnitTesting>
Tests run correctly locally. The error occurs when performing the build definition
Sorry for my English.
Thanks
A: Turns out the issue was that I had leading white space before the <SqlUnitTesting> element; once that was removed, the test ran as expected. I also removed the following declaration:
<?xml version="1.0" encoding="utf-8" ?>
| |
doc_232
|
(function (d) {
var modal = document.createElement('iframe');
modal.setAttribute('src', 'mypage.html');
modal.setAttribute('scrolling', 'no');
modal.className = 'modal';
document.body.appendChild(modal);
var c = document.createElement('link');
c.type = 'text/css';
c.rel = 'stylesheet';
c.href = '//myurl.com/testes.css';
document.body.appendChild(c);
}(document));
I tried the following to close it, but it then gives a 404 message inside the iframe:
<a href="window.parent.document.getElementById('iframe').parentNode.removeChild(window.parent.document.getElementById('iframe'))">Close</a>
I am loading jquery on the page if that helps at all, meaning, if there is a jquery solution.
A: Links are supposed to link somewhere and the href attribute takes a URI.
Use a <button type="button"> and bind a click handler to it. (You could use an onclick attribute, but that wouldn't be unobtrusive)
The unobtrusive approach would be:
<a href="someUriWithoutAnIframe" target="_parent">foo</a>
And then bind an event handler along the lines of
myLink.addEventListener('click', function (e) {
var d = window.parent.document;
var frame = d.getElementById('myFrame');
frame.parentNode.removeChild(frame);
e.preventDefault();
});
A: You could put the link into some kind of container div for your iframe, creating the latter with this structure:
<div id="iframe-container">
<a href='#' onclick='this.parentNode.parentNode.removeChild(this.parentNode)'>Close</a>
<iframe src="http://some.web/page.html" />
</div>
Works without any framework.
| |
doc_233
|
https://groups.google.com/forum/#!topic/finaglers/nAaCfOiLp1w
Can someone give me some guideline/example ?
My return value should look like:
{
ids:[id1,id2,id3...idn]
}
A: Not going into the details on why JSONP is considered insecure (I assume you know that already), the Finaglers thread you're referencing mentions a JsonpFilter that can be applied to an HTTP service returning JSON to "upgrade" it to JSONP.
Here is a small example on how to wire this filter with Finch's endpoint.
import com.twitter.finagle.Http
import com.twitter.finagle.http.filter.JsonpFilter
import io.finch._
import io.finch.circe._
val endpoint: Endpoint[Map[String, String]] = get("jsonp") {
Ok(Map("foo" -> "bar"))
}
val service = endpoint.toServiceAs[Application.Json]
Http.server.serve(":8080", JsonpFilter.andThen(service))
JsonpFilter is dead simple. It checks the returned HTTP payload and, if it's a JSON string, wraps it with a call to a function whose name is passed in the callback query-string param (and changes the content type to application/javascript correspondingly). Using httpie, this would look as follows:
$ http :8080/jsonp
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 39
Content-Type: application/json
{
"foo": "bar"
}
$ http :8080/jsonp?callback=myfunction
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 56
Content-Type: application/javascript
/**/myfunction({"foo":"bar"});
| |
doc_234
|
A: The easiest way of doing this is to customize the page template for your tooltwist extension project; this way you will be deploying it inside your tooltwist project.
To customize:
* Copy the page/ and page_template/ folders under ttWbd/widgets/toolbox into your project's widget directory (e.g. {projectName}_t/widgets/{projectName}_widgets/).
* Open conf.xml of page_template and change the configuration appropriately for the linkedWidget, label and description nodes. linkedWidget will have a value like {projectName}_widgets.page.
* If you already created pages in your project, you need to replace the linkedWidget nodes in the configuration file (conf.xml) of all your pages from toolbox.page to {projectName}_widget.page, including all version folders.
* Under the page/ folder you can customize the template header depending on the page mode you selected in your designer. For mode = page, update new_jquery_header.jsp.
Page Template:
NOTE:
A new page template option will appear on your assets list when creating new assets.
| |
doc_235
|
<xs:element name="person" type="Person"/>
<xs:element name="child" type="Child"/>
<xs:complexType name="Person">
<xs:sequence>
<xs:element name="age" type="xs:int"/>
<xs:element name="sex" type="xs:string"/>
<xs:element name="fullname" type="xs:string"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="Child">
<xs:complexContent>
<xs:extension base="Person">
<xs:sequence>
<xs:element name="grade" type="xs:string"/>
<xs:element name="school" type="xs:string"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
I'm able to marshal instances of the Child class and send them to a client browser using the Jersey REST API.
But when I try to do this for the Person class it no longer works. I get the error message:
A message body writer for Java class app.Person, and Java type class app.Person, and MIME media type application/xml was not found
By extending the Person schema type, am I somehow rendering it no longer a root-level element? When Child doesn't extend Person, it works fine.
| |
doc_236
|
What is the right way to use process.env inside the cloud functions?
A: The Custom Function server is managed by HarperDB, so it looks for a .env file located at HarperDB's root.
The solution is to set the path value when you invoke dotenv's config():
require('dotenv').config({ path: '/path/to/hdb/custom_functions/myproject/.env' })
| |
doc_237
|
The issue seems to be that they do not have CORS headers enabled, so I have to use JSONP. The GET request returns JSON, but my AJAX function is expecting JSONP, so I receive an error:
Uncaught SyntaxError: Unexpected token :
2?callback=jQuery22005740937136579305_1452887017109&_=1452887017110:1
I need this resource, but I am unsure how to get around this. I've looked around on SO, but haven't found anything that matches this issue specifically. There are a few sites that are able to obtain a specific user's inventory, so in some respect it has to be possible.
My Ajax call
$.ajax({
url: "http://steamcommunity.com/profiles/76561198064153275/inventory/json/730/2",
type: 'GET',
dataType: 'jsonp',
success: function(response) {
console.log(response);
if(response.error){
alert(response.error_text);
}else {
console.log("SUCCESS!!");
}
}
});
A: I have figured out a workaround! I am using django for my web application and so I tried to make the request server side.
I installed the requests library through pip (The library: http://docs.python-requests.org/en/latest/)
In my django app I made a view that would be called by my AJAX request
import requests
from django.http import JsonResponse

def get_steam_inv(request):
    # SteamProfile is this app's model (its import is not shown in the original snippet)
    user_steam_profile = SteamProfile.objects.get(brokerr_user_id = request.user.id)
    r = requests.get("http://steamcommunity.com/profiles/76561198064153275/inventory/json/730/2")
    return JsonResponse(r.json())
Then my ajax request for this view:
$.ajax({
url: "/ajax_get_steam_inv/",
type: 'GET',
success: function(response) {
console.log(response);
// result = JSON.parse(response);
if (response.error){
alert(response.error_text);
} else {
console.log(response);
}
}
});
And now I have the data I needed !
| |
doc_238
|
This will be done according to the current database schema.
How can it be done?
How can I create classes on runtime in python?
Should I create a json representation and save it in a database and then unserialize it into a python object?
A: You can try to read this http://code.djangoproject.com/wiki/DynamicModels
Here is example how to create python model class:
Person = type('Person', (models.Model,), {
    '__module__': 'yourapp.models',  # required when building a Django model with type(); point it at your app's models module
    'first_name': models.CharField(max_length=255),
    'last_name': models.CharField(max_length=255),
})
You can also read about python meta classes:
- What is a metaclass in Python?
- http://www.ibm.com/developerworks/linux/library/l-pymeta.html
- http://gnosis.cx/publish/programming/metaclass_1.html
A: You could base yourself on the legacy database support of django which allows you to obtain django models from the definitions found in the database :
See here : http://docs.djangoproject.com/en/dev/howto/legacy-databases/?from=olddocs
In particular,
manage.py inspectdb
allows you to create the classes in a file. You should then be able to import them on the fly.
That said, it seems to me that you are on a risky path by doing this.
A:
I have an application that needs to generate its models on runtime.
Take a look at the source code for the inspectdb management command. Inspectdb "Introspects the database tables in the database pointed-to by the NAME setting and outputs a Django model module (a models.py file) to standard output."
How can I create classes on runtime in python?
One way to do this is to use the functions provided by the new module (this module has been deprecated in favor of types since 2.6).
Should I create a json representation and save it in a database and then unserialize it into a python object?
This doesn't sound like a good idea to me.
PS: All said you ought to really rethink the premise for creating classes at runtime. It seems rather extreme for a web application. Just my 2c.
| |
doc_239
|
The Java application sometimes has to call a C program called fpcalc64. On installation it gets stored in the app folder, but it only has rw permission.
Question 1
Is there a way I can give it execute permission as part of install ?
Question 2
Even if I manually go into the app folder and use chmod 777 fpcalc64 to give full execute permission, it still doesn't run and my Java program complains it doesn't have permission. This continues to happen even if I restart the application. How can this be?
e.g
Fails
Jul 4, 2020 11:17:46 AM: WARNING: Unable to retrieve an acoustic id for song 2 file /home/ubuntu/Downloads/RedSharp_-_Walk_in_the_row.mp3 because Cannot run program "/home/ubuntu/code/jthink/jaikoz/JaikozAppImage/jaikoz/lib/app/fpcalc64": error=13, Permission denied
But if I run from linux shell it works
ubuntu@ubuntu:~/code/jthink/jaikoz/JaikozAppImage/jaikoz/lib/app$ ls -l fpcalc*
-rwxrwxrwx 1 ubuntu ubuntu 2210876 Jul 4 11:15 fpcalc
-rwxrwxrwx 1 ubuntu ubuntu 2249272 Jul 4 11:15 fpcalc64
ubuntu@ubuntu:~/code/jthink/jaikoz/JaikozAppImage/jaikoz/lib/app$ ./fpcalc64 /home/ubuntu/Downloads/Red*
FILE=/home/ubuntu/Downloads/RedSharp_-_Walk_in_the_row.mp3
DURATION=273
FINGERPRINT=AQADtIsURmsSDc9i44-Iqsemr8a1MIOXKBHCi9CJ0M7R-MH5oNNy_Dp26ag55EmUJ5DjI-zREE-P9wz-odeKCyGbHXriIj8qbiYaPRue4_zx7MHHE1eiw1yE80d1IdmDnDy-LCeub_gVVLkGP8uHmTNhVjkicUeyh0MTHs_RE8-Jurg7XItwHVl-aFmHPGMq_Al-NCGFx0d6adD6I39RnQoaTXmK48JhTqOM58R19LED8yBf9LvQo1GW4viRk4Qm6cUvpHlRi8dn9MngFqRy-EdGlNPxHbp8oR8aLnvBLkb_4I8KS9Kh57jEKsOv4nD04UeZmZDyC64efLB1jK8Oj8hP-BSuF7aQ92h-4h1eKJJ1wYErregFzTWaH_m14Q--o9nsEe47Ij_EG33GI8e14xQf1Eifo5Z03IeeK2C0ODGaD6xl5Kh9OD68HdfwhuiPH33x9kg7HdpRuQgjnbiOe8Sn0EJt-AsR6sVxPQr65vg0aM8KjeTB5Hi4RviLJqYI7QoqsTApR4PPJwi_9LiHJmWCfmiyODWYX0OTLzfCB1c-4UQf6Dj0p3gH7xH8o8-Ja4rxoz--CHpy-EWPH_2h5dODUoaWmDmeo4ko5UKbzQquNJmGypnRfIEmkUEeRcZzVNwSBv3RsBo4dUZu6HKHrgkk-aiVPUR-vNJQ-WgYx-ija7hGZCc0_ngQ6gt8432Cpk6Fp0dzIjx03AjThOGO5lfwLMWF9MIX5Qp0NFn04w5vNFUufNGhpWJcPIrR4CS-5RHK_9BSDXcIPyZ-HOdBv3hQbjKafxA8adXw49OFPEWjFrWy42FD9Irh6sgrodEuPG0CmclSXNA6wmMGbb3QTTwa8cGDPoPHHddycXijBdKNvGD-4CyRb8F3wV-c4LXBKxHywNlB9kf3C7YeNHQCcsaJ_IOWPZEQHz-e6DhFnFFwIvyhE2ePfDnyY8pz6IJ1uMoO7WGIp8fXM9gqpsITdNnRkDu0RYdjPcHHoMl_qL4F9gjDMSUOKj8YTlF8NDtRoeqGpjv4MwiTN4WG-OFhTUuMVUly9FHRcD3wHTl0ZUO46miyfcFz1Dr8HLYZtMtxS0P4Q26OfBpcFd-xySx64gadHWF2QeYxHb8SCuWSHV6iS3gG44fHoM0wLzqa8NvxV3ihmUeuXLiUXeiF5iCPlzijWEIiqk-QH490PDnqo3mOv-Dk44qMTyEajauCqyfCvNAzNOPRL7hyQs3AH8e-ZBx6K_CPib3AaMrRKMePoel79Ieu4seXD0-uQI_RrDOGcsFz_CG-Q6OPs4fxHY90KNdKuOqPThX2x2jqoFV2BXK0R9Cpw8-KnIyPH48O7ngySujbJPAyFln4Qz-meCtKXSPy4zq6oqlyfDu-6MFNo3kQXtARcXvw4Dn6rQkY5ccVBv5wSTVyXKIOPVEOLUm_oC_2aGhqtAuaS0XZQ-ml4dC_4x_6TEO-qPBuHFRSHWEfmTjyb2ge9PEwnDhznEeYZw4OHdEzo8-L97gSXPLxojn7IKShbw_KHmEaBedBeifK4_sRppK64HkLXXnwHc0JbT8eqUT84OTxobKI9chv6PAe1CqDXEEppceRE1Oh6wquD_mM__B5ZGfx5YSu1OjRaH_QC9cOmWkBVzNYhkG1FxXFB155pE7wE39wTkmPSqeGI48P7UWqLqh19EfzlAS_408RXvDRHjpx4X6CHrkTGF15PEd2PUOrPPBxirg147tQlcepoz-a57gYEXqO3oOZN1AdPNMRWsePatsxRUd05oLuEz--En-HNq4Cxy0xrUflHmKVY3eh5Tl-_EPtUAg3H3m1GB3roWEkgQ9raOFHhA_hE99w6qjFw9pmdC06Ssh9MN3xB-mk4sLZTkV5mJPg6UevNjhlXLrQIFTUqZDyIf3xo77wpWiFO0HzCz_y458CPcOPhmCeDJcfPDjaw1aOE99YOCf0Hb-KJ6oAXTLxIz6u5YJvF33RH3ZytMgP6D_aa_iF_RE2hzCjKAFO9mj2GMc7Bs-O_6FwC02g9Qf35Xge4kPeG0ye43mCWjROvEa8Cc2Xoudx5Ceeo-Ee_PgjtfAT_Pjg_sgH-Q_C_MS9FGel4fjAMqimg8n649uDnEH1QfnhaUerHFeP_hGa88iF_sfV4WQ1nMfWC0dKp9BDNF9oPMFx_dDuIb3wk_CFtyZRXA7-4NNBTpa0IMweyEc-pnCVHJ8eWDum48sPxMWe51C9GY2eDvuFh1kCOdmxK4zQR-gPP8YZWjDXeNCXzLDmHNUjHFS-4hp-4moQi3yOU0IfNLJyXNJl9Dya7PmGf0d2aMdD9UhDJTpOYdyyoLrw40yOPCJ0lkH1G34t6MyPNEsiXSi1WegrWDn6rXh49N8RrSIS6zt85uiNP_hBEz4xysJnhCp1XDo-VVJg5slw4D0q4Uoe_KiuFL6CHipd5AmLRw-aL3GKacGRJzqSO3FwJi9-4c6Ekzm2G5eOW7GQK8pHNLF0dMWVQ2wm44FGIgxfvBSeJ8N9TAmp4LngXngraUGkLJAb9EOjkfg-4nF8bHqg8wKeHFf2oPqD5j_UDz_eLMV1fLlRwyPRZyyU7_jwREaYaWjH42WTEI2-ok6KXFGIH-KY-PiO64fOo3uQxtHhSaSIp0_gE0e6EhV1QldyGfl8-ErxCx-e6LiyNLiCKecBghQiADHAADJMAGeMMFoQAYA0kgCECSNEEKIQIIAII5QwBhACHABGAKWEEo4oQgVjggIBERGAOYGQIqgxJIBEAihHkQIMIWGAAggJBIAgAhEhkCJCQMEUMQQqIpQCQhgFhNUAAAIAck4IQQABLhEBgEHCGCMUUERgDARhQCipmEBACFgQAIIITZAAABACiABACGFIYEsAIoFQABoIFEgICAECAgI4AQggyghIgQNeCAMEIYAwgARCTgiSADCMUkGEIUAQwhgBQgABhJCKWEIFssAAaZQxQjFiiGKCCAGAUIAYo4RCkCEggCJSCEiBIcAwQokQBgECCJIECCUEM1ABBAgRkAICDCLCEEOEQwIRQRhCwFkAEGQGCY8UQ0AQAS4SAAhBFDEEWAEYIYwJoygAghmKAEAKEQIAEvIoQoAkCgJhiEKKECYgI8YhIglgyBgCACJAIASEMEgoooQRSCLCACGEGAMQIIQxJKBAwBgIkFBIAEMAIQQapZwzQgmBiCKIEeG5AUgYAwwDQhAEFHBAMAWIAA4QZCARyEKAIDHEEqIMAAQAYYAUFgBLjAACGYaAEIoAxggCgghiBEBgGGAAQ4AhAgAiBjkBAUHGEIeAABRQI4gAgBmhBCLGMQSAA8QJgBQABhlnFKAAAEqYAAYI5ABSTiilBBIGCMGoMGIAYRBghiEjgGACEUOREIICgQgASBklCHBAAIEUAgQBZgQSApCDlEKQAAIUcgYwA4AhgCpkgEAKGgQVUIYAxYASwDCiHALAEQcUAMICIQhRABJAiGYIACMIQUQgRYAzBCjABAHOACAUA0YQAABwRCIBwEFcCAIAAkAQgQgxDAgigDAKSEaQYEwQRBgSBjEhDDJIEOU
EQUgBQqhBxAiACAMECEoYYYQRYYAQABkEEBBECEYEQoYQBBkSBBggiUFAAiIIIUKyIggABACAAAIDAA
Question 3
Do I have to store fpcalc64 in the bin folder rather than lib/app? If I do, how do I do that? It is a precompiled binary.
A: I found the solution (something I had hit before).
Since Java 9, when you call an external app with Java on Linux it uses lib/jspawnhelper to decide how to run the application. Because the Linux runtime I provided was built on Windows, it had lost its execute permissions when it was checked into source control. Therefore when my application complained about permissions it was actually complaining about permissions on jspawnhelper.
I also found that if you do set the permissions they will be respected when you build the package. I didn't realize this because previously I was building the Linux installer on Windows and had to add extra commands to the IzPack installer to make the files executable.
So in summary the solution was to set execute permissions on jspawnhelper and fpcalc, commit them to source control, and then jpackage would work.
| |
doc_240
|
The problem is, if a user clicks the Div twice while it loads or clicks another Div in rapid succession, cracks start to show.
I am wondering if it would be possible to only allow the query to be executed once and wait until completion rather than queuing it.
Here is my current code, if it is crap feel free to comment on how I could better it... I am not the best at Javascript/jQuery :P
function mnuClick(x){
//Reset all elements
if($('#'+x).width()!=369){
$('.menu .menu_Graphic').fadeOut(300);
$('.menu_left .menu_Graphic').fadeOut(300);
$('.menu_right .menu_Graphic').fadeOut(300);
$('.menu').animate({width: "76px"},500);
$('.menu_left').animate({width: "76px"},500);
$('.menu_right').animate({width: "76px"},500);
}
var ElementId = '#' + x;
$(ElementId).animate({
width: 369 + "px"
},500, function(){
$(ElementId + ' .menu_Graphic').fadeIn(300);
});
}
Thanks in advance,
Chris.
A: You need an "isRunning" flag. Check for it before you start. Set it when you start the sequence, clear it when it ends.
A: (function() {
var mutex = false;
function mnuClick(x){
if (!mutex) {
mutex = !mutex;
/* all code here ... */
mutex = !mutex; /* this statement should go into animation callback */
}
}
})();
Maintain state through a variable so you cannot click more than once until the code has fully executed.
A: You can unplug the onClick event handler (mnuClick) when the event starts, to effectively prevent invoking mnuClick twice, but be sure to restore it when the event ends.
A: Quick answer: use .addClass and .removeClass and test for the existence of the class at execution time. Test if it's set and return, or add it, execute the code, then remove it.
A: you can create an invisible maskin and maskout ( like the background in lightbox etc ) or disable clicks until the animation finishes.
| |
doc_241
|
import pandas as pd
df = pd.DataFrame({
'row': ['A','A','A','B','B','B'],
'col': [1,2,3,1,2,3],
'val': [0.1,0.9,0.2,0.5,0.2,0.2],
'animal': ['duck', 'squirrel', 'horse', 'cow', 'pig', 'cat']
})
df
row col val animal
0 A 1 0.1 duck
1 A 2 0.9 squirrel
2 A 3 0.2 horse
3 B 1 0.5 cow
4 B 2 0.2 pig
5 B 3 0.2 cat
I would like to make a plot like this
But the closest I can get (using imshow) is this
Note: imshow doesn't provide a border between tiles (as far as I'm aware) which is also important to me.
My attempt
import pandas as pd
import plotly.express as px
df = pd.DataFrame({
'row': ['A','A','A','B','B','B'],
'col': [1,2,3,1,2,3],
'val': [0.1,0.9,0.2,0.5,0.2,0.2],
'animal': ['duck', 'squirrel', 'horse', 'cow', 'pig', 'cat']
})
df_wide = pd.pivot_table(
data=df,
index='row',
columns='col',
values='val',
aggfunc='first'
)
fig = px.imshow(
img=df_wide,
zmin=0,
zmax=1,
origin="upper",
color_continuous_scale='gray'
)
fig.update_layout(coloraxis_showscale=False)
fig.show()
A: You can try this:
import pandas as pd
import plotly.graph_objects as go
df = pd.DataFrame({
'row': ['A','A','A','B','B','B'],
'col': [1,2,3,1,2,3],
'val': [0.1,0.9,0.2,0.5,0.2,0.2],
'animal': ['duck', 'squirrel', 'horse', 'cow', 'pig', 'cat']
})
df_wide = pd.pivot_table(
data=df,
index='row',
columns='col',
values='val',
aggfunc='first'
)
Then, I use heatmaps which have more control over the gaps between tiles:
fig = go.Figure(data=go.Heatmap(
z=df_wide.values,
x=df.col.unique(),
y=df.row.unique(),
text=df['animal'].values.reshape(df_wide.values.shape),
texttemplate="%{text}",
textfont={"size":15,"color":'red'},
colorscale='gray',
showscale=False,
ygap = 3,
xgap = 3)
)
fig.show()
I use ygap = 3, xgap = 3 to add gaps between tiles.
| |
doc_242
|
ABC1 OBJECT-TYPE
...
...
KEY { XYZ1 }
...
...
ABC2 OBJECT-TYPE
...
...
ABC3 OBJECT-TYPE
...
...
KEY { XYZ3 }
...
...
My first search word is KEY (as it occurs less often in a file) and my second search word is OBJECT-TYPE. OBJECT-TYPE can occur a few lines (maybe 5 or 10) above the line with KEY. If a KEY is found in a file, I need output that has the key value and the corresponding object-type value.
Exactly like:
ABC1 KEY1
ABC2 KEY2
A: The following may be close. It works for the given input, but it makes assumptions that I do not know are true. For example, it assumes that the curly brackets have spaces between them and the key value.
#!/usr/bin/awk -f
/OBJECT-TYPE/ {
for(i=2; i<=NF; i++) {
if( $i ~ /OBJECT-TYPE/ )
key = $(i-1);
}
}
/KEY +\{.*\}/ {
for(i=1; i < NF; i++) {
if ( $i == "KEY" ) {
print key, $(i+2);
}
}
}
A: It's hard to tell what you're really trying to do.
awk '/OBJECT-TYPE/ {a[$1]=$1} /KEY/ { print a["ABC" substr($3,length($3))], $1; split("",a) }'
A: Assuming that you want all keys for all objects, and keys are listed below objects, this is how you might do it:
awk '/OBJECT-TYPE/ { obj= $1 } /KEY { .* }/ { print obj, $3 }' file.txt
output:
ABC1 XYZ1
ABC3 XYZ3
In case you only want the keys of the first object in each tree of objects, where object trees are separated by an empty line you should try this:
awk 'BEGIN { nl=1 } /^$/ { nl=1 } /OBJECT-TYPE/ && nl { obj= $1; nl=0; } /KEY { .* }/ { print obj, $3 }' file.txt
output:
ABC1 XYZ1
ABC2 XYZ3
A: It might be easier to iterate through the file backwards, find the KEY and then all the OBJECT-TYPEs following the key.
tac filename | awk '/KEY/ {key = $3} /OBJECT-TYPE/ {print $1, key}'
Given your example above, this outputs
ABC3 XYZ3
ABC2 XYZ3
ABC1 XYZ1
If you need the output to appear in the same order as the original file, add |tac to the pipeline.
| |
doc_243
|
In VS2019 output window I can see this message every time a barcode gets scanned:
[BarcodeReader] Received barcode data:EDA50-HB-R CodeId:j AimId:]C0
Is there a way to know which event prints it and add a listener to it?
| |
doc_244
|
Dataframe 'A'
([-1.73731693e-03, -5.11060266e-02, 8.46153465e-02, 1.48671467e-03,
1.52286786e-01, 8.26033395e-02, 1.18621477e-01, -6.81430566e-02,
5.11196597e-02, 2.25723347e-02, -2.98125029e-02, -9.61589832e-02,
-1.61495353e-03, 3.72062420e-02, 1.66557311e-02, 2.39392450e-01,
-3.91891332e-02, 3.94344811e-02, 4.10956733e-02, 6.69258037e-02,
7.92391216e-02, 2.59883593e+00, 1.54048404e+00, -4.92893250e-01,
-2.91309155e-01, -8.63923310e-01, -8.51987780e-01, 4.60905145e-01,
2.76583773e-01, 1.68323381e+00, 1.82011391e+00, 3.68951641e-01,
-1.35627096e+00, -1.24374617e+00, -1.97728773e+00, 2.70233476e+00,
-5.60139584e-01, -8.50132695e-01, 1.85987594e+00, -2.89995402e+00,
2.05908855e+00, -2.36161146e-01, -6.62032149e-01, -3.46654905e-01,
1.60181172e+00, 1.65443393e+00, -3.77934113e-03, -7.94313157e-01,
5.20531845e-03, -5.24688509e-01, -1.57952723e+00, 3.14415761e-01,
-9.32905832e-01, -1.34278662e-01, -1.84121185e+00, -1.67941178e-01,
-1.21144093e+00, 3.76283451e-01, 5.61453284e-01, -6.26859439e-01,
-4.66613293e-02, 2.56535385e-01, -5.86989954e-01, -4.21848822e-01,
5.21841502e-01, 5.76096822e-01, -1.58315586e-01, -3.31595062e-02,
-5.72139189e-01, 7.27998737e-01, 1.54143678e+00, 2.58551028e+00,
1.11951220e+00, 2.08231826e+00, 8.48119597e-01, 3.91317082e-01,
1.45425737e+00, -5.08802476e-01, -9.04742166e-01, -4.39964548e-02,
-5.07664895e-01, 1.34800131e-01, 6.60639468e-01, -7.81770841e-02,
1.77803055e-01, -5.25474907e-01, 1.56286558e+00, 1.37397348e+00,
9.35845142e-01, -8.29997405e-01, -1.12959459e+00, -7.34076036e-01,
-1.34298352e+00, -1.55242566e+00, -3.48126090e-01, 9.46175316e-01,
1.04627046e+00, 2.78090673e-01, 5.24197520e-01, -7.31359265e-01,
9.81771972e-01, -7.06560821e-01, 9.87914170e-01, 4.21145043e-01,
7.99874801e-01, -3.61598953e-01, -6.91521208e-02, -3.02639311e-01,
1.22688070e-02, -1.28362301e-01, 1.55251598e+00, 1.50264374e+00,
-1.50725278e+00, -1.15365780e-01, -9.54988005e-01, -8.96627259e-01,
-2.83129466e-01, -1.30206622e+00, -8.17198805e-01, 1.10860713e+00,
-9.80216468e-01, -8.91534692e-01, -8.34263124e-01, -7.16062684e-01,
9.43266610e-01, -6.39953720e-01, 2.20295404e-01, 6.53124338e-01,
1.12831707e+00, 7.95192837e-01, 1.06274424e+00, -9.84363663e-01,
-1.86648718e+00, 2.47560957e-01, -1.54991644e-01, -1.06641038e-01,
-2.08836784e-03, 6.62447504e-01, -1.34260765e-01, 2.98202604e-01,
-2.19112992e-01, -4.66701070e-01, -4.29040735e-02, 2.77548893e-01,
-6.48395632e-02, 4.43922718e-01, -1.06670096e+00, 7.60389677e-01,
-3.50944675e-01, -2.68452398e-01, 1.65183406e-01, -3.35291595e-01,
8.29848518e-01, 5.20341409e-01, 8.95388863e-01, 2.10437855e-01,
2.35693685e+00, -1.30064957e+00, 5.94602557e-02, -2.14385684e-02,
-1.01823776e+00, 8.10292523e-01, -1.22324503e+00, 3.37151269e-01,
6.34668773e-01, -6.14841220e-01, -3.06480016e-01, -8.71147997e-01,
-2.38711565e-01, -3.71304349e-01, -5.21931353e-01, -7.25105848e-01,
9.55749034e-01, -5.03756385e-01, 1.11945956e+00, -1.13072038e+00,
1.46584643e+00, -1.03178731e+00, 1.49044585e+00, 4.29069135e-01,
5.71108660e-01, -8.24272706e-01, 3.75994251e-01, 1.18141844e+00,
-1.22185847e-01, -1.73339604e-03, -1.89326424e-01, -1.83774529e-02,
1.63951866e-01, 2.68499548e-01, 4.42841678e-01, -5.51856731e-02,
-2.09071328e-01, -1.80936048e-01, -1.32749060e-01, -1.37133946e-01,
3.04451064e-01, -2.60560303e-02, 1.64786954e-01, 1.32592907e-01,
-1.46235968e+00, -7.26806017e-01, -3.67486773e-01, 3.71101544e-01,
8.83259501e-01, -7.15065260e-02, 1.66389135e+00, -1.78108597e+00,
-1.26130490e+00, -2.24665654e-01, -8.12489764e-01, 5.74641618e-01,
-4.67201906e-01, -1.12587866e+00, 7.75153678e-01, 5.72844798e-01,
-1.26508809e+00, 8.06000266e-01, -6.82706612e-01, 1.50495168e+00,
8.52438532e-01, 9.43195172e-01, -4.40088490e-02, -2.45587111e-01,
-9.86037547e-01, -1.11312353e+00, 9.32310853e-01, -1.04108755e+00,
4.26250651e-01, 1.70686581e-01, -2.64108584e-01, 8.06651732e-02,
5.71204776e-01, 1.46614492e-01, 1.18698807e-01, 3.55246874e-04,
6.77137159e-01, 1.15635393e-01, 1.34337204e-01, 3.27307728e-01,
-2.05416923e-01, 4.18027455e-01, 9.88345937e-02, 2.18627719e-01,
-5.18426174e-02, -1.17021957e-01, 1.70474550e-01, 4.82736350e-02,
3.21336545e-01, -1.45544581e-01, -1.20319001e-01, 2.03828814e-01,
-3.08498184e-02, -1.40565005e+00, -1.43214088e-01, 4.97504769e-01,
1.56273785e-01, 2.75011645e-01, -4.60341398e-03, 1.43803337e+00,
1.39331909e-01, 2.06784989e-01, -5.12059356e-01, -1.17023126e+00,
-5.96174413e-01, -1.22451379e+00, 1.96344831e-01, 2.14817355e-01,
1.24091029e-01, 5.14485621e-01, -6.03650270e-01, -1.65868324e+00,
-8.21932382e-01, -7.13710026e-01, -8.08813887e-01, -8.04744593e-01,
-1.06858314e-01, -4.50248193e-02, -2.20419270e-01, 8.09215220e-02,
1.35851711e+00, -1.14235665e+00, -6.68174295e-02, -6.01281650e-01,
2.34869773e-01, 3.67129075e-01, -1.34835335e+00, 7.52430154e-01,
1.37352587e+00, -1.02421527e+00, -2.07610263e-02, -3.39083658e-02,
-5.75996009e-02, -2.31073554e-02, 4.61795647e-02, -4.59340619e-01,
-3.62781811e-01, 4.54813190e-02, 6.04157090e-02, -1.87268083e-01,
1.70276057e-01, -8.61843513e-02, -1.27476047e+00, 1.30585731e+00,
-6.46389245e-01, -1.40635401e-01, -1.77942738e+00, -1.41113903e-01,
1.56715807e-01, -1.67712695e-01, 1.86451110e-01, -6.01158881e-02,
4.64978376e-01, 5.13440781e-01, 6.19532336e-01, 2.54267587e-01,
-2.78759433e-01, -3.88565967e-01, 3.87152834e-02, 1.06240041e+00,
2.09454855e-01, 9.64690667e-03, 8.95837369e-02, -3.96816092e-01,
-3.41660062e-01, 6.29889334e-01, -8.67980022e-03, 7.84849030e-01,
-4.85106947e-01, -7.31377792e-01, -8.87659450e-01, 7.61389541e-01,
9.76497314e-01, -1.06744789e+00, 1.47065840e+00, 6.25211618e-01,
7.25988559e-01, 4.19787342e-01, 1.92491575e-01, 1.13681147e+00,
-1.41299616e-01, 1.88563224e+00, 1.20414116e+00, 8.84760070e-02,
-5.82623462e-01, -6.35685252e-01, 9.42374369e-01, -2.68795041e+00,
1.55265515e-01, 1.11831120e+00, 1.42496225e+00, -2.49172328e+00,
-2.96253872e+00, -1.27634582e+00, 8.64353099e-01, 1.75738299e+00,
-1.08871311e+00, -9.71165087e-01, 7.15048842e-01, -2.17295734e-01,
-9.51989200e-01, -2.18546988e-01, 9.17042794e-01, 8.62052366e-01,
-1.85594903e-01, 4.56294789e-01, -6.85416684e-01, -2.80209189e-01,
-5.46608487e-01, 1.08818926e+00, -7.21033879e-01, -6.71183475e-01,
-6.36051999e-01, -4.59980192e-01, -5.05580110e-01, -3.78244959e-01,
-7.24025921e-01, -2.08545177e-01, 4.57899036e-02, 4.40788256e-02,
-2.37824313e-01, 1.52266134e+00, 8.17944390e-03, 1.10203927e+00,
9.86476664e-01, -5.18193891e-01, -3.20302684e-01, -3.62147726e-01,
8.09107079e-02, -2.23162278e+00, 1.08676773e+00, 5.61964453e-01,
1.27519559e-01, 9.24886749e-01, -4.75508805e-01, -5.42765960e-01,
-1.00917988e+00, -1.38181867e+00, -1.32190961e+00, 1.22737946e+00,
3.60475117e-01, 4.94411259e-01, -9.84878721e-01, -1.27991181e+00,
7.05733451e-01, 6.05978064e-01, 7.24010257e-01, 7.31500866e-01,
-2.10270319e+00, -1.44749054e+00, -4.62989149e-01, 1.88742227e+00,
2.23502013e+00, 1.24196002e+00, -8.39133460e-02, -5.83997089e-01,
7.63111106e-01, 3.59541173e-01, 1.69019230e+00, 3.16779306e-01,
8.04994106e-01, -7.79848130e-01, 4.55373478e-01, -6.99628529e-01,
-8.88776585e-01, 5.58784034e-01, 1.03796435e+00, -1.39833046e+00,
-1.30889596e+00, 1.92064711e+00, -1.03993971e+00, -5.44703609e-01,
-1.25879891e+00, -2.25683759e+00, -1.61033547e-01, 1.76603501e-01,
-2.47327624e-02, 6.42444167e-02, -6.01551357e-01, -7.00803499e-01,
1.03391796e-02, -1.65584150e-01, -6.05071619e-01, -3.43937387e-01,
-2.21285625e-01, -1.86325091e-02, -9.79578217e-01, -1.73186370e-02,
-2.30215061e-02, 9.63819799e-01, 2.14069445e+00, -2.99999601e-01,
-1.06696731e+00, 1.38805597e-01, -1.36281099e+00, -1.71499344e+00,
-2.44679986e-01, 5.14666974e-02, 4.18733154e-01, 1.59951320e+00,
1.00618752e+00, -1.88645728e+00, 1.59363671e+00, -1.70729555e-01,
9.42793430e-02, -7.23224009e-02, 6.02105534e-02, 5.52374283e-01,
6.91499535e-02, 9.86658898e-02, 1.26584605e-01, -5.92396665e-02,
2.90992852e-01, -5.76585947e-01, 6.72979673e-02, 7.38910628e-01,
-8.75090268e-02, 6.94842842e-02, -2.30246430e-01, 1.94134747e-01,
-2.09682980e+00, 7.74844906e-01, 6.15444420e-01, -1.56931485e-01,
1.66940287e+00, -1.45283370e+00, 1.37121988e-02, 1.07479283e+00,
8.83275627e-01, -7.41385657e-01, 5.47602991e-01, -1.02874882e+00,
-1.51215589e+00, 1.55364306e+00, 1.71320405e-01, 2.06341676e-01,
-1.68945906e+00, 7.59196774e-01, -2.83121853e-01, -7.70003972e-01,
-4.35559207e-01, -1.29156247e+00, -7.57105374e-01, -7.85287786e-01,
1.31572406e-01, 1.20446876e+00, -1.46802375e+00, -5.35860581e-01,
5.98595824e-01, -4.62785553e-01, 6.75677761e-02, -5.66531534e-01,
1.09685209e+00, 8.24234006e-01, 1.13620680e+00, 3.96653080e-01,
1.89639322e+00, -9.96802022e-01, -1.24232069e+00, -1.25410024e+00,
-2.06379176e+00, 1.47885801e+00, -1.66257841e+00, 8.79827437e-01,
-1.04440327e+00, -1.42881405e+00, -5.69974045e-01, 1.01359651e-01,
4.86755601e-01, -3.35863751e-01, 2.64648983e-01, 1.27375046e-02,
-6.16941256e-02, 4.08408937e-01, 7.55366537e-01, -7.27771779e-01,
7.75935529e-01, 3.58925729e-01, 6.84118904e-01, 7.47932803e-01,
-5.42091983e-01, 2.08484384e-01, 1.56950556e-01, -1.14533505e+00,
-1.22366245e+00, 1.24506739e-01, -1.02935547e+00, 2.54296268e-01,
-4.03847587e-01, -1.00212453e+00, -1.48661344e+00, 9.75954860e-01,
9.38841010e-01, -1.23894642e+00, -9.78138112e-01, -1.04247682e+00,
-1.03866562e+00, 1.26731592e+00, -3.67089461e-01, -8.48251235e-02,
-1.82675815e+00, 6.06962041e-01, -2.33818172e-01, -4.57014619e-01,
1.52576283e+00, 1.54494449e+00, 6.00789311e-01, -7.17249969e-01,
-6.12826202e-01, 4.53766411e-01, 1.39275445e+00, -1.54383812e+00,
1.54210845e+00, 2.69465492e-01, -2.30273047e+00, 1.73201080e+00,
-2.46161686e+00, -8.25393337e-01, 4.33285105e-01, 7.14390347e-01,
5.46413657e-01, 3.55625054e-01, 4.55356504e-01, -4.69216962e-01,
-9.08073083e-01, -1.55192369e+00, -1.23692861e+00, -1.01703738e+00,
-1.13617318e+00, -6.06261893e-01, 1.31444701e+00, 4.20469663e-01,
1.25780763e-01, -3.17988182e-02, 8.14623566e-01, 8.66121880e-01,
-7.69000333e-01, -1.67427496e-02, -7.96633360e-01, -3.49124840e-01,
-2.07410767e-01, -1.09316367e-01, -2.86175298e-01, 4.21715381e-01,
1.22897221e-01, -2.05947043e-01, 7.31217030e-01, -8.02955705e-01,
8.88777313e-02, 2.07183542e-01, -4.79090236e-01, -6.23960583e-01,
-4.50498790e-01, -1.08117179e-01, -2.59395547e-01, -7.48280208e-01,
3.88011905e-01, 2.54908503e-01, 8.52262132e-01, 4.77972889e-01,
-8.33500747e-02, -1.41622779e+00, -2.49822422e-01, -2.28753939e-01,
-2.26889536e-01, -2.45202952e-01, 3.17116703e-01, -1.19760575e+00,
7.04262050e-02, -5.31419343e-02, -7.31634189e-01, -4.17957184e-01,
3.77288107e-01, 7.69283048e-01, 1.55929725e+00, -1.01963387e+00,
9.07556960e-01, -4.98822527e-01, 1.02488029e+00, 5.58381436e-01,
-2.14274914e+00, -6.94806179e-01, -1.11654335e+00, -1.11325319e+00,
-1.10016520e+00, 5.18861155e-01, -1.04176598e+00, -8.66814672e-01,
2.36604302e+00, -3.18431467e-01, 2.91334051e+00, -6.61828903e-02,
-1.26603821e-02, -1.45414666e-01, 4.78580610e-02, -2.09898537e-03,
-6.69714780e-02, 1.05549065e+00, -8.84106729e-02, -9.18073007e-04,
1.25938385e+00, -8.14172470e-01, -2.59554042e-01, -6.95466246e-01,
1.08730831e+00, -9.67021920e-01, 5.84575935e-02, -1.71321175e+00,
-1.26317109e-01, -2.90733362e-01, 7.47312951e-03, -1.45607222e+00,
4.60382102e-01, 1.61288034e+00, -5.28648252e-01, 1.66048408e-01,
8.34903372e-01, 4.74884503e-01, 5.04686505e-01, 4.95510854e-01,
-1.20924643e-01, 2.99423740e-01, 1.09738018e+00, 1.50838843e-01,
-2.87229078e-01, -1.24761215e+00, 7.36582234e-01, -2.77173578e+00,
-3.74992668e+00, 5.41312143e-01, -4.37583398e-01, -1.69064854e-02,
1.84765431e+00, 5.73052756e-01, -1.06164050e+00, 5.07717049e-02,
4.25819917e-02, -2.92715384e-01, -2.03200363e-01, -5.84490589e-01,
-3.57083164e-01, 9.10876306e-01, 2.52143752e-01, 2.63129337e-02,
3.83262339e-01, 7.74313729e-01, -3.60963951e-01, -7.70989956e-02,
7.56541998e-01, 1.09766125e+00, 8.20902509e-01, 2.58690757e-01,
1.25444572e+00, 5.71737922e-01, 2.55898541e-01, -8.80233282e-01,
1.78192270e-01, 2.42501217e-01, -1.30266510e+00, -2.48044014e-02,
1.07537714e-01, 1.67386472e-01, -1.11797061e-01, -6.35950485e-02,
8.00025515e-02, -1.32397319e-01, -6.58003041e-03, 3.03937065e-01,
-1.27135161e-01, 1.01363440e-01, -8.82766995e-01, 8.44379448e-01,
-5.09627327e-01, -1.03326533e-01, -3.15431942e-01, 5.37076573e-01,
3.26753114e+00, 4.15751153e-01, 2.56849348e-01, 5.14462581e-01,
-2.61730161e-02, -3.28715744e-02, 1.88278800e-01, -1.19832919e+00,
-1.19590287e+00, -1.11394334e+00, 2.17055714e+00, 7.96829829e-01,
-1.85619100e+00, -1.07888882e+00, -2.30865383e-02, -2.40273840e-01,
4.39953192e-01, -5.29613217e-01, 6.69906410e-01, 1.15145012e+00,
6.06638031e-01, 5.99079947e-01, 9.16942482e-01, -9.66304057e-03,
5.91654439e-02, 4.37388222e-01, 1.18295465e+00, -1.64263112e+00,
-1.03293336e+00, -1.18222197e+00, -6.33519878e-02, 2.27962536e-01,
1.66108232e+00, -1.23851592e+00, -1.43787196e+00, 8.87857019e-01,
-1.19151817e+00, -1.47236056e+00, 3.50282869e-01, 1.06004408e+00,
-4.26199859e-01, 4.37361363e-01, -2.50084772e-02, 8.67900174e-01,
5.37760532e-01, 8.14530962e-02, 6.62491540e-01, 1.37045014e-01,
-7.01697152e-01, -4.21657704e-01, 7.83331329e-01, 7.70034379e-01,
1.28212695e+00, 2.53511223e+00, -3.24006440e-01, -3.41291501e-01,
-2.49147123e-01, 1.70446849e-01, -1.37162583e-01, -4.81858038e-01,
-4.86338762e-01, 6.85229336e-01, -1.55517356e-01, 1.83307879e-01,
-1.49384229e-01, 1.56007957e-02, 2.40326236e-01, 1.07336933e+00,
-3.99730396e-01, -3.33898955e-01, 3.40244317e-01, -4.92340248e-01,
-4.95815316e-01, 6.22512483e-02, 5.08544685e-01, -2.83347226e-01,
-3.08918714e-01, 1.08292681e+00, -5.29213035e-01, -2.23617454e-02,
2.62202341e-01, 1.02718292e+00, -4.49869615e-01, 3.34969168e-01,
-3.43212844e-01, 6.16483430e-01, -9.47779684e-01, -4.78857633e-01,
-9.98923354e-01, 6.32191682e-01, 2.72973961e-01, -2.96008388e-01,
2.30922383e-01, 2.06884014e-01, 5.21099867e-01, 4.16729600e-01,
-8.26782099e-02, -5.95457632e-01, -2.10804413e-01, -2.93975286e-01,
2.03009273e-01, 1.43593375e+00, -5.49739765e-01, 7.03821943e-01,
-8.28059434e-01, 9.83503607e-01, -1.08534889e+00, -6.27821255e-01,
4.03117722e-01, -2.03629129e-01, -3.95124233e-02, 3.21970160e-01,
-2.71920636e-01, -5.10057329e-01, -1.04202621e-01, 3.20627596e-01,
2.47291994e-01, -1.04118706e-01, -3.16545995e-01, 3.35604518e-01,
-5.69433751e-03, -2.38370280e-01, 3.32991597e-01, -6.11308103e-02,
-2.53167433e-01, -1.08142836e-01, 6.37938271e-01, 4.74190570e-01,
-2.08524397e-01, 9.95434184e-01, 6.78813341e-01, 1.48137820e-01,
3.66997494e-02, 1.12354066e-01, 1.33086253e+00, 6.58021086e-01,
8.35274797e-01, -1.27346531e+00, -1.19618900e+00, -1.06490676e+00,
-1.15966483e+00, 2.19041187e+00, -2.40703158e-01, -1.04679828e+00,
5.26221976e-01, 9.57229098e-01, -3.17806974e-04, 5.25084392e-02,
1.03682933e-01, -1.14126721e-01, 9.97109170e-02, 1.03757185e-01,
4.10600042e-01, 5.78106727e-01, 1.01148051e+00, -4.79936067e-01,
-1.32848972e+00, -2.20624284e-01, -1.42350771e+00, -1.17722544e-01,
-4.78121525e-01, -7.67503366e-01, 1.88827881e-01, -5.96936872e-01,
1.03021358e+00, 2.60795689e-02, -3.33047585e-03, -4.92126750e-01,
1.11066769e+00, 1.01787072e+00, -1.20277626e+00, 7.53480929e-01,
-1.13091340e+00, -4.33899313e-01, -1.50633595e+00, -1.39755762e+00,
1.68206963e+00, 3.05696594e-02, -4.92375834e-01, 4.42329013e-01,
2.13249223e+00, -1.16923258e+00, -7.43727428e-01, 9.63488691e-01,
-1.40534085e+00, 1.30882281e+00, -1.22007716e+00, 7.24629619e-01,
3.95142700e-01, -2.07336912e-01, 2.55075616e-01, -8.44328303e-02,
-3.94616429e-01, -7.84743985e-02, -2.05229049e-01, -5.23357338e-01,
3.31521045e-02, -1.46889669e+00, 4.00045935e-01, 1.27852950e-02,
-2.18957838e+00, -9.22286699e-01, -1.00263590e-02, -2.15168189e-03,
-9.58758007e-01, 1.40708729e-03, 4.08836699e-02, -3.10267180e-03,
-1.97213536e-01, -1.57090203e-04, -6.56863610e-04, -3.41218036e-03,
3.65899320e-02, 1.01258475e-02, -4.00850464e-03, 1.39965489e-03,
1.87395867e+00, -2.50914219e-04, -1.36854426e-02, -5.59371636e-01,
8.60638162e-01, -5.89030315e-02, -3.06438078e-01, -6.36052431e-02,
6.98020295e-02, 1.09568657e-01, -4.95597777e-01, -1.45987919e-01,
6.23584012e-01, -5.52485913e-01, 3.43299341e-01, -4.26641584e-01,
-6.99084799e-02, -4.55572848e-01, 2.75544065e-01, -6.38720353e-01,
3.68422013e-01, 4.06005693e-01, -2.99449896e-01, 9.50228459e-01,
4.76344007e+00, 9.73504981e-02, -3.58437771e-01, 1.98629533e-02,
9.93927115e-01, 5.36396410e-01, 5.36029608e-01, 1.42388869e+00,
4.76638501e-01, 4.36781372e-01, -4.46066365e-01, -4.20019724e-01,
5.00997260e-01, 5.30703691e-01, 1.74726375e-01, 2.35885059e-01,
-3.33462461e-01, -8.84958758e-01, 1.70318874e-01, -5.73460407e-01,
-5.17774883e-01, -3.75158795e-02, 1.68564324e+00, 4.88754154e-01])
Dataframe 'B'
[10000, 10000, 10000, 1000, 1000, 1000, 5000, 5000, 5000,
1000, 5000, 5000, 10000, 5000, 1000, 1000, 5000, 1000,
10000, 5000, 5000, 1000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
5000, 5000, 5000, 5000, 5000, 1000, 5000, 5000, 5000,
5000, 10000, 10000, 1000, 10000, 1000, 10000, 10000, 10000,
1000, 5000, 7500, 7500, 1000, 1000, 1000, 1000, 5000,
5000, 500, 500, 500, 500, 7500, 7500, 5000, 5000,
10000, 10000, 5000, 1000, 10000, 5000, 10000, 10000, 1000,
5000, 5000, 5000, 1000, 5000, 10000, 5000, 10000, 10000,
1000, 1000, 5000, 5000, 10000, 1000, 10000, 1000, 1000,
10000, 10000, 1000, 5000, 10000, 5000, 10000, 1000, 1000,
1000, 5000, 1000, 1000, 1000, 5000, 5000, 1000, 1000,
5000, 5000, 1000, 5000, 10000, 1000, 1000, 5000, 10000,
5000, 10000, 10000, 5000, 5000, 10000, 10000, 1000, 1000,
5000, 10000, 10000, 10000, 1000, 1000, 1000, 300, 300,
5000, 5000, 5000, 5000, 5000, 5000, 5000, 5000, 10000,
10000, 1000, 1000, 1000, 300, 5000, 5000, 1000, 1000,
300, 300, 5000, 10000, 10000, 10000, 10000, 1000, 1000,
1000, 1000, 300, 300, 5000, 1000, 1000, 1000, 300,
300, 300, 5000, 5000, 10000, 10000, 1000, 1000, 300,
300, 300, 300, 10000, 10000, 1000, 300, 300, 5000,
5000, 5000, 10000, 10000, 10000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 1000, 300, 300, 5000, 5000, 1000,
1000, 300, 300, 5000, 10000, 1000, 1000, 1000, 300,
300, 5000, 5000, 10000, 10000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 10000, 1000, 300, 300, 5000, 5000,
1000, 1000, 1000, 300, 300, 300, 300, 300, 300,
5000, 10000, 10000, 1000, 1000, 1000, 1000, 1000, 300,
300, 300, 300, 10000, 1000, 1000, 300, 300, 5000,
10000, 10000, 1000, 1000, 5000, 5000, 5000, 10000, 1000,
1000, 300, 300, 5000, 5000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 10000, 10000, 10000, 300, 300, 300,
300, 5000, 5000, 5000, 10000, 10000, 1000, 300, 300,
300, 5000, 10000, 10000, 10000, 1000, 1000, 1000, 1000,
5000, 1000, 1000, 1000, 300, 300, 300, 300, 5000,
5000, 5000, 10000, 10000, 1000, 300, 300, 5000, 10000,
10000, 5000, 5000, 5000, 5000, 5000, 5000, 5000, 5000,
5000, 5000, 5000, 10000, 10000, 5000, 10000, 5000, 10000,
1000, 1000, 10000, 10000, 10000, 10000, 10000, 5000, 1000,
10000, 1000, 1000, 5000, 5000, 5000, 5000, 5000, 500,
500, 7500, 1000, 5000, 1000, 5000, 1000, 5000, 5000,
5000, 5000, 10000, 1000, 10000, 10000, 10000, 1000, 5000,
5000, 1000, 5000, 10000, 1000, 5000, 5000, 5000, 10000,
5000, 10000, 5000, 1000, 5000, 10000, 1000, 5000, 5000,
5000, 1000, 210, 226, 442, 3511, 3511, 3511, 2310,
1619, 2404, 1768, 837, 2241, 2382, 3774, 4432, 973,
580, 1501, 2369, 473, 4626, 4635, 439, 1620, 850,
1620, 1107, 2310, 390, 1982, 1587, 1497, 1588, 730,
1619, 6546, 1000, 1000, 10000, 10000, 1000, 10000, 1000,
1000, 10000, 10000, 1000, 5000, 5000, 5000, 10000, 1000,
10000, 10000, 1000, 5000, 10000, 10000, 5000, 5000, 10000,
5000, 10000, 10000, 5000, 5000, 5000, 10000, 1000, 5000,
5000, 1000, 10000, 10000, 10000, 5000, 5000, 5000, 1000,
5000, 5000, 1000, 10000, 10000, 1000, 1000, 10000, 1000,
1000, 1000, 5000, 10000, 10000, 1000, 5000, 1000, 5000,
5000, 10000, 10000, 10000, 5000, 5000, 1000, 1000, 10000,
1000, 1000, 5000, 10000, 5000, 1000, 5000, 1000, 10000,
5000, 10000, 1000, 5000, 5000, 10000, 10000, 1000, 5000,
10000, 1000, 10000, 10000, 5000, 5000, 10000, 10000, 1000,
5000, 10000, 1000, 1000, 5000, 5000, 5000, 5000, 10000,
10000, 10000, 1000, 5000, 1000, 5000, 10000, 1000, 10000,
10000, 5000, 1000, 5000, 1000, 1000, 10000, 1000, 5000,
1000, 10000, 10000, 10000, 1000, 5000, 10000, 10000, 5000,
5000, 10000, 1000, 5000, 1000, 1000, 10000, 10000, 1000,
10000, 10000, 1000, 1000, 1000, 5000, 1000, 5000, 10000,
1000, 5000, 10000, 296, 296, 296, 296, 296, 296,
296, 255, 588, 319, 444, 468, 432, 600, 480,
588, 352, 600, 396, 372, 420, 3650, 3645, 248,
2950, 208, 5000, 10000, 10000, 10000, 10000, 1000, 500,
500, 500, 500, 10000, 1000, 5000, 5000, 5000, 5000,
5000, 500, 10000, 10000, 10000, 10000, 10000, 600, 2739,
289, 2753, 277, 4751, 9570, 9601, 6186, 5116, 7996,
9601, 9613, 8024, 9601, 948, 1440, 600, 10000, 10000,
10000, 10000, 10000, 5000, 5000, 5000, 5000, 5000, 1534,
7980, 845, 823, 493, 721, 325, 8280, 5132, 7632,
2606, 5025, 5190, 7468, 6304, 8760, 9829, 8002, 8393,
9097, 9470, 678, 676, 658, 658, 655, 643, 2004,
516, 2288, 1651, 1093, 4111, 695, 1289, 1736, 1656,
1656, 1656, 452, 4233, 815, 6569, 4613, 2366, 2330,
1618, 2403, 1346, 1619, 396, 4634, 2847, 5432, 2368,
2368, 7127, 1527, 1533, 6167, 985, 1836, 1821, 1836,
629, 747, 5511, 5491, 1656, 2048, 2048, 2048, 2048,
2048, 2048, 1024, 1024, 1024, 1024, 1024, 2048, 1024,
1024, 1024, 1024, 4463, 9526, 1000, 9093, 3000, 3000,
3000, 3000, 3000, 3000, 617, 1548, 2602, 1512, 979,
549, 2495, 1940, 7601, 2058, 6001, 8808, 8201, 2163,
2163, 5701, 5901, 3653, 3653, 313, 313, 5101, 6501,
6601, 7201, 8701, 8901, 9301, 9701, 5401, 6101, 6401,
7001, 7201, 7401, 7401, 7901, 9901, 9901, 4914, 7842,
2726, 9201, 7712, 1096, 1100, 6801, 768, 1096, 4655,
1424, 786, 1687, 5051, 1256, 3549, 8808, 542, 542,
720, 542, 720, 720, 542, 1506, 825, 894, 825,
5301, 5701, 5701, 6001, 6601, 6701, 8101, 9101, 1020,
4560, 3845, 3922, 4491, 3886, 2042, 4106, 1900, 1045,
2229, 6712, 664, 4317, 3948, 3566, 623, 8420, 3089,
3362, 1656, 1776, 1656, 1656, 1656, 1550, 1540, 1248,
1247, 1416, 1540, 1248, 8500, 8800, 7000, 9700, 8500,
8600, 9800, 8900, 9200, 10000, 2400, 4500, 2200, 1300,
5800, 1800, 7500, 3700, 3500, 2200, 4000, 1500, 3600,
6000, 3400, 9000, 259, 1700, 8300, 3800, 4300, 4300,
1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800,
1800, 1800, 1800, 1800, 1800, 6306, 6492, 6264, 6150,
6276, 6282, 6342, 6102, 6588, 6300, 7104, 6234, 9989,
9995, 9996, 9993, 9991, 9986, 9994, 9993, 9985, 9986,
9994, 9999, 9989, 9997, 9991, 10000, 10000, 9700, 7500,
7900, 9600, 1200, 2100, 1500, 6900, 4900, 3800, 1600,
2200, 3600, 6000, 5700, 7700, 3200, 1500, 8200, 2800,
4300, 5400, 1600, 10000, 2600, 5600, 2000, 5500, 8600,
6300, 4700, 3500, 8600, 3900, 6500, 5300, 6800, 5800,
3800, 8400, 4600, 1900, 3400, 3000, 5800, 7000, 5900,
6100]
I want to compute Spearman's correlation of the dataframes and plot it. The plot can be a scatter plot or a heatmap.
But I don't have any idea how it can be done.
Any resource or reference will be helpful.
Thanks.
A: Check this code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
A = [...] # insert here your list of values for A
B = [...] # insert here your list of values for B
df = pd.DataFrame({'A': A,
'B': B})
corr = df.corr(method = 'spearman')
sns.heatmap(corr, annot = True)
plt.show()
I get this correlation matrix:
The column A is highly correlated with itself (obviously, this always happens), while the correlation between column A and B is very low.
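If you only need the single coefficient between A and B rather than the whole matrix, you can read it straight out of the corr DataFrame computed above (a minimal sketch reusing that variable):
# Spearman correlation between columns A and B
r_ab = corr.loc['A', 'B']
print(r_ab)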
Version info:
Python 3.7.0
matplotlib 3.2.1
pandas 1.0.4
seaborn 0.10.1
A: You can use the jointplot function from seaborn to plot the datapoints. This not only provides you with the scatterplot, but also with the marginal distribution.
import pandas as pd
import seaborn as sbn

df = pd.DataFrame({"A": A, "B": B})
sbn.jointplot(x="A", y="B", data=df, alpha=0.2)
To get the spearman's correlation coefficient, you can use the spearmanr function from the scipy module:
from scipy.stats import spearmanr
r, p = spearmanr(df["A"], df["B"])
# r = 0.008703025102683665
| |
doc_245
|
I've tried
grep "\$" file.txt
and it does not recognize the $ character to search for.
A: As I posted in my comment,
grep "\\\$" file.txt
The first \\ is a literal \ for the regex engine, and the third \ is used to escape the dollar symbol.
A rule of thumb with matching a literal dollar symbol is:
*
*Escape up to 3 times
*Try doubling $ (i.e. $$)
Some of it should work.
A: Please see this answer in another stack exchange forum. Basically you should try grep '\$'.
A: Actually grep \\$ file.txt works
| |
doc_246
|
{
"analyzer": {
"pattern_analyzers": {
"type": "custom",
"pattern": ",",
"tokenizer": "pattern"
}
}
}
Added the same analyzer to a string field, where I store values as comma separated.
The value for the field will be like say,
"skills":"software-engineer,hardware,android developer"
Here I am not getting the exact result; what I want is for the string to be split only when it encounters a comma. The result I am getting currently is that the string splits on whitespace and special characters.
How to modify my analyzer to make it split the string only when it encounters a comma.
EDIT:
In scenarios like this "software,Engineer (Core, Non-IT),hardware"
It should not split like "software", "Engineer (Core", " Non-IT)", "hardware"
but instead like "software", "Engineer (Core, Non-IT)", "hardware"
A: I think this is not the right way of defining a custom analyzer.
Try doing it this way.
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"pattern_analyzers": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "pattern",
"pattern": ","
}
}
}
}
}
| |
doc_247
|
The i loop is for barray and j loop is for carray.
I get an error that i is not defined. I want the loops to go from 0 to bsize - 2 in steps of 3, and 0 to csize - 2 in single steps.
How should I relate the size and array to the for loop?
bsize = 960
csize = 960
barray = bytearray(fi.read())
carray= bytearray(f1.read())
for i in range (bsize-2,i+3):
for j in range (csize-2,j+1):
A: for i in range(0, bsize - 2, 3):  # possibly bsize - 1?
       for j in range(csize - 2):  # possibly csize - 1?
           # do your thing
That will loop through the first one incrementing i by 3 every time, and j by 1.
Look at this tutorial or these docs to learn range, it's really useful!
I'm not sure if you want to go through bsize - 2 or just up to it. If you want to go through it, use bsize - 1 (and csize - 1) as the stop value so that bsize - 2 is included.
The reason you're getting an error is that you haven't defined the i you're using in the step. As you can see, python's range isn't like a lot of other languages' for constructs. Once you get used to it, though, it's really flexible and easy to use.
Some examples using simple range:
>>> for i in range(0, 14, 3):
... print i
...
0
3
6
9
12
>>> for i in range(1, 5):
... print i
...
1
2
3
4
>>> for i in range(5):
... print i
...
0
1
2
3
4
| |
doc_248
|
These are the rules
RewriteEngine on
# index.php?store=xyz (executing perfectly)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteRule /?index.php$ /%1 [END,R=301]
RewriteRule ^/?([a-zA-Z0-9]+)$ index.php?store=$1 [END]
RewriteRule ^/?([a-zA-Z0-9]+)/products$ index.php?store=$1&view=products [END]
RewriteRule ^/?([a-zA-Z0-9]+)/products/([0-9]+)$ index.php?store=$1&view=products&category=$2 [END]
RewriteRule ^/?([a-zA-Z0-9]+)/products/([0-9]+)$ index.php?store=$1&view=sales&sale=$2 [END]
RewriteRule ^/?([a-zA-Z0-9]+)/single/([0-9]+)$ index.php?store=$1&view=single&product=$2 [END]
# index.php?store=xyz&view=products(executing perfectly)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=products&?
RewriteRule /?index.php$ /%1/products [END,R=301]
# index.php?store=xyz&view=products&category=123(executing perfectly)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=products&?
RewriteCond %{QUERY_STRING} category=([^&]+)&?
RewriteRule /?index.php$ /%1/products/%3 [END,R=301]
# index.php?store=xyz&view=sales (error 404)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=sales&?
RewriteRule /?index.php$ /%1/sales [END,R=301]
# index.php?store=xyz&view=sales&sale=123 (error 404)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=sales&?
RewriteCond %{QUERY_STRING} sale=([^&]+)&?
RewriteRule /?index.php$ /%1/sales/%3 [END,R=301]
# index.php?store=xyz&view=single&product=123(executing perfectly)
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=single&?
RewriteCond %{QUERY_STRING} product=([^&]+)&?
RewriteRule /?index.php$ /%1/single/%3 [END,R=301]
Can you please tell me what I may be doing wrong?
A: You redirect the client from
index.php?store=xyz&view=single&product=123
to
/%1/single/%3
and you have a corresponding RewriteRule
RewriteRule ^/?([a-zA-Z0-9]+)/single/([0-9]+)$ index.php?store=$1&view=single&product=$2 [END]
You also redirect the client from
index.php?store=xyz&view=sales&sale=123
to
/%1/sales/%3
but there is no corresponding RewriteRule, only two
RewriteRule ^/?([a-zA-Z0-9]+)/products/([0-9]+)$ index.php?store=$1&view=products&category=$2 [END]
RewriteRule ^/?([a-zA-Z0-9]+)/products/([0-9]+)$ index.php?store=$1&view=sales&sale=$2 [END]
So maybe changing one of the "product" rules to "sales" fixes your immediate problem.
Although, you should be aware that the redirecting rules don't do what you might think.
RewriteCond %{QUERY_STRING} store=([^&]+)&?
RewriteCond %{QUERY_STRING} view=single&?
RewriteCond %{QUERY_STRING} product=([^&]+)&?
RewriteRule /?index.php$ /%1/single/%3 [END,R=301]
With three RewriteCond directives, %1 and %3 in your rewrite rule don't refer to what you might expect; only back-references into the last matched RewriteCond are valid, see RewriteRule for an explanation
In addition to plain text, the Substitution string can include
*
*...
*back-references (%N) to the last matched RewriteCond pattern
To have both %1 and %3, you must capture three parts in the last RewriteCond, e.g.
RewriteCond %{QUERY_STRING} store=([^&]+)&view=(single)&product=([^&]+)
RewriteRule ...
See another answer to RewriteCond to match query string parameters in any order for a solution to capture multiple parts.
| |
doc_249
|
Here I leave the image of the object on which I am getting the ids and colors
The dm_id are the ones I need to be able to make the color_bom_header array with the same dm_id
Here is the code that makes this happen but it doesn't work right :(
this.state.stylesArray = this.props.location.state.stylesCombo.components.map((index, current) => {
const col = this.state.stylesCombo.components && (this.state.stylesCombo.components[current].style_colors = index.color_bom_name)
if(this.state.stylesCombo.components && this.state.stylesCombo.components[current].dm_id === index.dm_id){
if(this.state.stylesCombo.components[current].dm_name === index.dm_name){
this.state.colorStArray.push(col)
this.state.cstarr = [...new Set(this.state.colorStArray)]
}
}else{
this.state.stylesCombo.components && (this.state.stylesCombo.components[current].style_colors = current.color_bom_name)
}
this.state.stylesCombo.components && (this.state.stylesCombo.components[current].style_colors = this.state.cstarr)
})
I have also tried this, but I'm stuck:
this.state.stylesArray = this.props.location.state.stylesCombo.components.map((current, index) => {
for (current.dm_id in this.props.location.state.stylesCombo && this.props.location.state.stylesCombo.components) {
if(current.dm_id === index.dm_id){
}else{
}
}
this.state.stylesCombo.components && (this.state.stylesCombo.components[index].style_colors = this.state.colorStArray)
})
A: So the task is to make an array of elements with the same ID.
We need to somehow count how many elements of each ID are presented. We can do that with an object.
const counter = {}
for (const el of arr) {
// if counter has element with given ID, increase the counter, otherwise initialize to 1
counter[el.id] = (counter[el.id] || 0) + 1;
}
Then we find ID that have the max count
const max = Object.entries(counter).reduce((acc, curr) => {
if (acc.length === 0) return curr;
const [currKey, currVal] = curr;
const [maxKey, maxVal] = acc;
return currVal > maxVal ? curr : acc
}, [])[0]
And then we filter our state
state.filter(element => element.id === max);
// or if `element.id` is of type number
// you need to cast `max` to Number first
const newMax = Number(max);
state.filter(element => element.id === newMax);
I'm not sure I didn't overcomplicate the logic, but that's the approach I came up with.
| |
doc_250
|
Currently I am doing this by checking if savedInstanceState == null on an Activity.onCreate method.
I am wondering how reliable this is? Is there a better alternative? Will savedInstanceState == null in any other scenario other than on Activity startup?
A: You can use SharedPreferences in the onCreate() method of the other activity:
SharedPreferences wmbPreference = PreferenceManager.getDefaultSharedPreferences(this);
boolean isFirstRun = wmbPreference.getBoolean("FIRSTRUN", true);
if (isFirstRun) {
//do something
}
With this approach, even if the user deletes the app data by mistake, this code will run again as if it were the first launch.
You must be prepared for the stored data being removed by the user, and this can be handled with SharedPreferences.
| |
doc_251
|
Within this layout I have tiles in rows of 16 which represent something of a palette.
I want this grid of tiles to be separated by lines, like a grid should be. I can do this easily with the tiles' paintEvents.
However, the obvious problem is that between the tiles, the lines are doubled up. When I scale this up for other applications, the difference becomes even more noticeable.
So, is there a way to create a gridline overlay for my QFrame? I have considered converting the whole thing to a view/scene solution, and using drawForeground, however this seems like a completely inappropriate use of the paradigm.
Thanks for any assistance!
A: Put the QFrame into a QGridLayout, then put a custom QWidget with transparent background and paintEvent that paints the grid on top of it (same QGridLayout position).
Or since you already have a QGridLayout, just put the custom QWidget in that, above the tiles, filling the entire grid.
A side note: are you sure you want QFrame there, or would a plain QWidget do? Just saying, because with QFrame you get that 1990's look in your UI... If you do want that, then go ahead.
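For what it's worth, here is a minimal sketch of the overlay idea, written in PyQt5 for brevity (the C++/Qt version is analogous; the class name and the 16x8 grid size are made up for illustration):
from PyQt5.QtWidgets import QWidget
from PyQt5.QtGui import QPainter, QPen
from PyQt5.QtCore import Qt

class GridOverlay(QWidget):
    """Transparent widget that only paints grid lines; the tiles below show through."""
    def __init__(self, cols, rows, parent=None):
        super().__init__(parent)
        self.cols, self.rows = cols, rows
        # Let clicks fall through to the tiles underneath.
        self.setAttribute(Qt.WA_TransparentForMouseEvents)

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.setPen(QPen(Qt.black, 1))
        w, h = self.width(), self.height()
        for c in range(self.cols + 1):
            x = round(c * w / self.cols)
            painter.drawLine(x, 0, x, h)
        for r in range(self.rows + 1):
            y = round(r * h / self.rows)
            painter.drawLine(0, y, w, y)

# Put the overlay in the same grid cell as the tile container so it sits on top:
# layout.addWidget(tile_container, 0, 0)
# layout.addWidget(GridOverlay(16, 8), 0, 0)   # call raise_() on it if it ends up below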
| |
doc_252
|
*
*create package.json
*install all packages with npm
*for eslint:
*
*create .eslintrc.{json,js,yaml}
*install plugins and create rules
*integrate plugins (airbnb, prettier, etc)
*repeat this for each new project
For npm installs I know -g can install globally, but if I place the .eslintrc.json in my home folder, then when I open a file in VIM it says it could not load the plugin (airbnb, prettier, etc.) style guides. Presumably this is because there is no node_modules folder in my project folder.
So, I decided to create one template folder with all the stuff from the sequence above. and copied that structure to a folder where I'm opening my .html, .css, .js or .json files as an autocmd from VIM.
Here is the relevant part of my .vimrc
autocmd FileType javascript,json,css,html silent exec '! '.$HOME.'/Documents/eslint-template/prepare.sh
and here is the prepare.sh:
$ cat Documents/eslint-template/prepare.sh
#!/bin/bash
echo Preparing environment...
templateFolder=$HOME/Documents/eslint-template
files=( $templateFolder/{.,}* )
for file in ${files[@]}; do
[ "$(basename $0)" == "$(basename $file)" ] && continue
destFile=$PWD/$(basename $file)
diff -q $file $destFile > /dev/null 2>&1 ||
cp -r $file $PWD/
done
rsync -azz --delete $templateFolder/node_modules/ node_modules/ > /dev/null 2>&1
echo Preparation completed!
I've been tweaking and testing and it has been working fine (I will run more tests, though). But it may take some 10 to 15 seconds to open a simple .html file as it has to copy the entire node_modules structure from the template to the new project. Even the -zz option from rsync, when run from within VIM seems to be a lot slower than running from terminal directly.
So, the question is, what are the other alternatives to do this?
A: You can use something similar to this CLI tool. You can put your project structure in the templates folder. Please avoid putting node_modules there because we can always install the dependencies listed in package.json.
| |
doc_253
|
0 0 0
0,0,0
0, 0, 0
rgb(0 0 0)
rgb(0,0,0)
rgb(0, 0, 0)
The regex I created to solve this problem.
const regex = /(^[\d\s,]+)|(^rgb\([0-9\s,]+\))/gi
but there are certain criteria:
*
*r,g,b values should between [0-255].
*000 or 255255255 should not return true.
*regex should not pass any other string other than mentioned above.
const REGEX = /(^[\d\s,]+)|(^rgb\([0-9\s,]+\))/gi
const TEST_CASES = [
'0 0 0',
'0,0,0',
'0, 0, 0',
'rgb(0 0 0)',
'rgb(0,0,0)',
'rgb(0, 0, 0)'
]
for (let i = 0; i < TEST_CASES.length; i++) {
console.log(REGEX.test(TEST_CASES[i]))
}
As you can see, the regex is returning false for some of the test cases.
A: To limit the integers to the 0-255 range, the regex will have to test specific digit sequences. As you need this three times, you could probably benefit from creating your regex dynamically, and capture each number in a separate capture group.
Here is how it could be done:
const byte = String.raw`(2(?:5[0-5]|[0-4]\d)|1\d\d|[1-9]?\d)`;
const tuple = String.raw`${byte}[ ,][ ]*${byte}[ ,][ ]*${byte}`;
const regex = RegExp(String.raw`^(?:${tuple}|rgb\(${tuple}\))$`);
const tests = [
'0 0 255',
'0,128,0',
'99, 0, 0',
'rgb(0 250 100)',
'rgb(0,8,9)',
'rgb(0, 0, 0)',
// Negative tests
'155255255',
'1,1,1)',
'rgb(2,2,2',
'3,3,3,',
'256,4,4',
'5,260,5',
'6,,',
];
for (let s of tests) {
let arr = s.match(regex);
if (arr) {
const [r, g, b] = arr.slice(1).filter(Boolean).map(Number);
console.log(r, g, b);
} else console.log("no match");
}
It is important, not to use the g flag with the test method here, as then it uses and modifies the state of the regex. This is not what you want here.
Using match we get the whole match and each capture group in an array. As there are 6 capture groups, the array has 7 entries. Entries 1,2,3 are filled when the input does not have "rgb", while entries 4,5,6 are filled when the input has "rgb". With .slice(1) we discard entry 0 (the full match); with filter(Boolean) we retain the filled three entries, and with map(Number) we convert the strings to numbers.
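The same construction carries over to other regex flavours; purely as an illustrative cross-check (not part of the answer's JavaScript), here is the byte alternation in Python:
import re

byte = r"(2(?:5[0-5]|[0-4]\d)|1\d\d|[1-9]?\d)"
triple = rf"{byte}[ ,][ ]*{byte}[ ,][ ]*{byte}"
regex = re.compile(rf"^(?:{triple}|rgb\({triple}\))$")

for s in ["0 0 255", "rgb(0, 0, 0)", "256,4,4", "000"]:
    m = regex.match(s)
    print(s, "->", [g for g in m.groups() if g is not None] if m else "no match")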
A: This is to validate numbers from 0 to 255: ^(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$
const REGEX = /^(?:(?:rgb\())?(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(?:,?\s?)){3}\)?$/i
const TEST_CASES = [
'0 0 0',
'0,0,0',
'0, 0, 0',
'rgb(0 0 0)',
'rgb(0,0,0)',
'rgb(0, 0, 0)'
]
for (let i = 0; i < TEST_CASES.length; i++) {
console.log(REGEX.test(TEST_CASES[i]))
}
Link to regex101
| |
doc_254
|
A: Maybe I don't understand the question - but why VBA?
= IF(And([2021 Date]>VarStartDate,[2021 Date]<VarEndDate),AddVariationPer,0)
| |
doc_255
|
The goal is that during rpm -i my_rpm.rpm,
according to the spec file, a ksh script will do some installation & configuration,
for example run another script and edit some files.
Thanks.
A: You can specify the shell to use in the script with -p I believe.
So it would look like:
%post -p /sbin/ldconfig
Or:
%post -p /bin/ksh
do_stuff_here
A: Scriptlets in a spec file are normally executed by bash; however, if you need to use a different shell you could of course do that. Just write something like this:
cat > /tmp/script <<EOF
# write script here
EOF
ksh /tmp/script
OK, that is not very secure but you get the idea. This would be more secure, but I'm not sure if it would work:
ksh <<EOF
# write script here
EOF
| |
doc_256
|
A: I would follow the requirement the first time the application is launched. I would also provide a simple way to switch from full screen to windowed, for instance by pressing ESC (and another way to go back to full screen). Then I would store the mode when quitting the application and restore this mode at next launch.
A: I think the annoyance-factor depends a lot on what the application tries to do.
If it is some utility that I might start while working in 5 different applications and it forces its fullscreen-ness on me, then I'd get highly annoyed.
If it is a specialized application that helps me with the entire workflow of a given task (so that I never or rarely need any other apps open at the same time), then fullscreen might actually be a valid default.
Whatever you do, just make sure that toggling the startup behaviour is very discoverable. Because no matter which way you'll go, some of your users will disagree with your decision. And for them it should be very easy to change to their prefered way.
A: Before doing the opposite of what your requirements say, I'd have the requirements changed.
However, what about giving the user the choice at install time?
A: The window at first-start-up should default to the optimal size for the largest proportion of users. For a graphics-intensive full-featured app, that may very well be full screen.
As for individual user preferences for window size, it seems to me most users won’t know if they want full screen or not until after they’ve started to use the app and see how much screen space they need and how much they use the window in conjunction with other windows. Asking them which size they want at install or first-start-up could thus be annoying and even confusing. It’s better to put such a setting under Options/Preferences.
Perhaps the best thing to do is save the window status on exit. Users who like it non-maximized thus only have to non-maximize it (and size it) once and then forget about it. The only consideration is to have some way to reset the window to the default (e.g., Window > Standard Size menu item) for novice users who accidentally resize or reposition the window to something bizarre and don’t know how to get it back. Alternatively, you could have a Window > Keep Sizes and Positions menu item for users to explicitly save the window status across sessions.
A: Go back to the requirements writers and ask them if they have considered non-traditional monitor setups, such as:
*
*30" or larger monitor. Do you really want your app hogging up all the screen real-estate?
*Multiple monitors. Which monitor will you run on? Can the user move your app from one monitor to another? Can your app span more than one monitor?
*Virtual desktops. Can the user move your app from one desktop to another? Can they switch desktops while your app is running? Can your app span more than one desktop?
Such setups are increasingly common, especially large monitors. IMO, full-screen mode (the default for many older Windows apps) is becoming less and less useful.
A: The problem with presenting the user with the option of initially selecting fullscreen vs. windowed is that they haven't used the software yet. How can they make a decision on which is better for them, without experience?
I would run the app in whichever mode provided the best user experience and then offer an option to change it both in the Preferences and though a hint while starting up the application for the 2nd time.
| |
doc_257
|
Process: com.ahmadtakkoush.source, PID: 29506
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.username.appname/com.username.appname.MainActivity}:
java.lang.IllegalStateException: FragmentManager is already executing
transactions
Here's a code sample
https://pastebin.com/ufF5LsbU
A: Remove Objects.requireNonNull(getActivity()) from addSnapshotListener listener and use like below:
query.addSnapshotListener( new EventListener<QuerySnapshot>() {
// implementation here
});
Handle the listener detachment by calling listener.remove() in onPause().
//Stop listening to changes
registration.remove();
See more details here
| |
doc_258
|
I am able to dump a single line based on some criteria [like when the line starts with // or /* ], but not in the case where a comment starts with /* and ends with */ only after 3-4 lines.
In that case I can only dump the first line, which starts with /*, not the remaining lines up to the one that ends with */.
I'm unable to handle this situation, please help.
Below is my code:-
fileopen = open("test.c")
for var in fileopen:
if var.startswith("//"):
var1 = var1 + var
continue
if var.startswith("/*"):
var1 = var1 + var
continue
else:
continue
worksheet.write(i, 5,var1,cell_format)
Note: the indentation in the code above may be off, as I don't know how to format code properly on Stack Overflow, so please ignore any indentation issues.
For example:-
/* Test that the correct data prefetch instructions are generated for i386
variants that use 3DNow! prefetchw or SSE prefetch instructions with
locality hints. */
I want to dump the entire comment at once through the Python script, but I am only able to dump the first line, which starts with /*.
Any suggestion please!!!
Thanks in Advance.
A: import re
fileopen = open("test.c")
# Convert file to a string
source_code = ""
for var in fileopen:
source_code += var
# Find all the comments from the source code
pattern = r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"'
found = re.findall(pattern, source_code, re.DOTALL | re.MULTILINE) # list of comments
var1 = ""
for var in found:
var1 = var1 + var
worksheet.write(i, 5,var1,cell_format)
| |
doc_259
|
# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size in MB.
#wrapper.java.initmemory=512
#wrapper.java.maxmemory=512
Does that mean that I should not worry about -Xms and -Xmx?
I've read elsewhere that -XX:ParallelGCThreads=4 -XX:+UseNUMA -XX:+UseConcMarkSweepGC would be good.
Should I add that on my Intel® Core™ i7-4770 Quad-Core Haswell 32 GB DDR3 RAM 2 x 240 GB 6 Gb/s SSD (Software-RAID 1) machine?
A: I would still configure it manually.
Set both to 12 GB and use the remaining 16GB for memory mapping in neo4j.properties. Try to match it to your store file sizes.
| |
doc_260
|
[{...},{...},{...},{"-name": "Test", "-lists": [123, 456, 789]},{...}]
I tried with a filter function but it doesn't work :-(
this is the query where i would like to change the result to the value/array of "-lists"
.findOne({ _id: serviceID }, function (err, result) {
if (err) {
res.json(err);
} else {
try{
res.json(result.service.modules)
console.log(result.service.modules)
}catch(error){
console.log(error)
}
}
})
Does someone have an idea for me?
Best regards & stay healthy
A: You can try the map function of the array.
const data = [
{"-name": "Test", "-lists": [123, 456, 789]},
{"-name": "Test", "-lists": [222, 333, 444]}
];
const result = data.map((x) => x['-lists']);
console.log(result);
This will return an array of the lists data which is an array in itself.
A: This is an example of one of the approaches you can use to extract the value of an array nested in an object which is inside an array.
const arr = [{ someValue: 1 }, { "-lists": [1, 2, 3] }];
const result = [];
arr.filter((val) => {
if (val["-lists"]) {
result.push(...val["-lists"]);
}
});
console.log(result);
| |
doc_261
|
When I run this line of tweepy code I run into two problems. I receive the error "Tweepy has no attribute OAuthHandler" despite that being in the documentation I found. One assumes that would be valid code.
The second problem is that when I registered my app, I did not receive a consumer_token. I'm not quite sure how to request a consumer token either.
A: First you have to get both the consumer_token and the consumer_secret. When you register the app it gives you a couple of strings you then use for authentication. The consumer_token is the Consumer Key string twitter provides you with, and then the consumer_secret is the Consumer Secret twitter provides you with.
Then when you call auth = tweepy.OAuthHandler(consumer_token, consumer_secret) you have to have set both the consumer_token and the consumer_secret to the strings twitter provided you with. Then this should work.
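Put together, a minimal sketch (the four placeholder strings stand in for the keys and tokens shown on your dev.twitter.com app page; they are not real values):
import tweepy

consumer_token = "YOUR_CONSUMER_KEY"
consumer_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_token_secret = "YOUR_ACCESS_TOKEN_SECRET"

auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Quick check that the credentials actually work.
print(api.verify_credentials().screen_name)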
A: *
*Make sure you have
import tweepy
before you attempt to call any of its classes. That sounds like the main problem you are having.
*
*You will need 2 sets of keys, the consumer keys, and the access tokens. Both are available on the https://dev.twitter.com/apps page where you have your registered app. The consumer keys are used by the OAuthHandler(), and the access tokens are used by set_access_token(). Please see this example of using OAuthHandler
| |
doc_262
| ||
doc_263
|
This all works great except on this one particular server where I have yet to get it to ever popup the application request to access QBs. An error is always returned saying, "A QuickBooks company data file is already open and it is different from the one requested." with an error code of -2147220470.
I am using these instructions to access the file: http://support.quickbooks.intuit.com/support/pages/inproducthelp/Core/QB2K12/ContentPackage/Verticals/Retail/rr_sdkapp_access_preferences.html
Also I am in single-user mode while doing this: http://support.quickbooks.intuit.com/support/articles/SLN41168
On this server there are a few QB files but none of them should be being used right now, but is there a way to find out if there are any QB files being accessed on the server that is keeping the popup from appearing?
Thanks a ton!
A: There is not a direct way that I know of to see what company file is currently open (if any) without calling BeginSession and checking for errors. If you supply a company file name and a different company is open, you will get the "A QuickBooks company data file is already open and it is different from the one requested." error.
If you omit the company name when you call BeginSession, QuickBooks will use whatever company file is open and present the prompt(assuming the rights have not already been granted). However, if there is not a company file open, then you get an error "Could not start QuickBooks." (if QuickBooks isn't running at all), or "If the QuickBooks company data file is not open, a call to the "BeginSession" method must include the name of the data file." error if QuickBooks is open, but has no company file open.
Most programs will save the company file that they have been linked to, so they will pass the file name in their BeginSession call, and then check for the "A QuickBooks company data file is already open and it is different from the one requested." error and present the information in a clean way to the customer. For example, the QuickBooks POS software will prompt the customer if they want to continue using the old file that was setup previously, or if they want to link to the file that is currently open.
A: I resolved this by installing the application connecting to the QB file on the same server. Then (this is the important part of the recipe) I changed the path from the defaulted UNC path to a local path and it magically worked. (I hate magic! If someone could explain why this happens then that would be great.)
Now that I have the integrated application authorization for this app added into the QB file I am now able to access it from other work stations using the same app and using a UNC path to point to the file.
| |
doc_264
|
I want to know that, except interceptor, is there some way that I can set these objects into ModelMap before any function?
And in interceptor, I can only set into request, but actually, some data are already in servlet context.
Thanks.
A: Try an annotation-style approach with @PreHandle: you can annotate your method or function with it,
which means the handler invokes that function/method right before the Dispatcher hands the request to the appropriate controller.
An exact explanation can be found here: http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/web/servlet/HandlerInterceptor.html
| |
doc_265
|
For a structure like below how should create a wrapper to access the class member from C Code.
myHeader.h for c++
-------------------
class childA:public parentA {private: void logger() override}
class childB:public parentB
{
private: /*some members*/
protected: /*some members*/
public:
explicit childB(childA* a);
}
class parentB
{
protected:
MyType object;
public:
boolean Init(MyType obj); /*the implmentation is object=obj*/
}
Now, in C code, I want to access the object.
How should I write the wrapper for this?
Object type is a function pointer => typedef S32(*IoFunc)(Msg&);
where S32 is unsigned int, Msg is a struct.
Thanks for your help.
A: Unobjectifying the code is quite simple to do:
#ifdef __cplusplus
extern "C"
{
#endif
void* construct_me(/*arguments*/);
void* get_object(void* obj);
void delete_me(void* obj);
#ifdef __cplusplus
}
#endif
And then define them:
extern "C"
{
void* construct_me(/*arguments*/)
{
return static_cast<void*>(new parentB(/*arguments*/));
}
void* get_object(void* obj)
{
return static_cast<void*>(&(static_cast<parentB*>(obj)->object));
}
void delete_me(void* obj)
{
delete static_cast<parentB*>(obj);
}
}
If the type can be used in C, then you can just do:
Type get_object(void* obj)
{
return static_cast<parentB*>(obj)->object;
}
instead of casting it to void*.
Inheritance doesn't change a thing. It's the same mechanism, except that if you have virtual functions, you should still wrap all of them for the inherited class (it's UB to transform a A* to void* to B* even if A inherits from B).
P.S.: I don't think this is any different than the answers in the link that was provided.
A: What you want is to "un-objectify" the functions.
Every public function that is inside a class has to be created outside with the first parameter void* and the rest of the parameters the same as in the member function.
This means you also have to create a "constructor" and a "destructor" function for the object.
There are 2 ways these functions can work, depending or where the data is to be stored: memory provided by the caller or memory allocated by the library (on the "free store").
The first thing that the new functions need is to declare linkage https://en.cppreference.com/w/cpp/language/language_linkage
Only two language linkages are guaranteed to be supported:
*
*"C++", the default language linkage.
*"C", which makes it possible
to link with functions written in the C programming language, and to define, in a C++ program, functions that can be called from the modules written in C.
So in the new header when it is used in C++ the linkage has to be declared, but when the header is used in C the linkage has to be removed:
So this is needed at the beginning:
#ifdef __cplusplus
extern "C"
{
#endif
and at the end:
#ifdef __cplusplus
}
#endif
Memory allocated on the "free store".
The declaration of the functions in the header within the above code should be:
void* childA_construct(); // childA doesn't have any constructor parameters
void childA_destruct(void* ptr_childA);
void* childB_construct(void* ptr_childA);
void childB_destruct(void* ptr_childB);
void* parentB_construct(); // parentB doesn't have any constructor parameters
void parentB_destruct(void* ptr_parentB);
bool parentB_Init(void* ptr_parentB, struct MyType m);
Next in the implementation file:
extern "C"
{
void* childA_construct()
{
return static_cast< void* >(new childA());
}
void childA_destruct(void* ptr_childA)
{
delete static_cast< childA* >(ptr_childA);
}
void* childB_construct(void* ptr_childA)
{
childA* a_ptr = static_cast< childA* >(ptr_childA);
return static_cast< void* >(new childB(a_ptr));
}
void childB_destruct(void* ptr_childB)
{
delete static_cast< childB* >(ptr_childB);
}
void* parentB_construct()
{
return static_cast< void* >(new parentB());
}
void parentB_destruct(void* ptr_parentB)
{
delete static_cast< parentB* >(ptr_parentB);
}
bool parentB_Init(void* ptr_parentB, struct MyType mt)
{
parentB* ptr_pb = static_cast< parentB* >(ptr_parentB);
return ptr_pb->Init(mt);
}
}
Memory allocated by caller
If the interface requires that the caller allocates memory, then the caller needs to know how much memory to allocate, so one way is to make a function return the required size.
Then in the construct method "placement new" has to be used to call the constructor.
While in the destruct function, the destructor has to be called manually.
extern "C"
{
int sizeof_childA() { return sizeof(childA); }
void childA_construct2(void* ptr_buffer) { new (ptr_buffer)childA(/*constructor params*/); }
void childA_destruct2(void* ptr_buffer) { static_cast< childA* >(ptr_buffer)->~childA(); }
}
If you want to store and use function pointer for C, then to declare a function type:
extern "C" typedef unsigned MyFuncType(struct Msg*);
then the variable can stored as:
MyFuncType func;
| |
doc_266
|
0 1 0 0
0 0 1 0
0 0 0 1
1 0 0 0
The grid is "collapsed" to form a smaller grid of 1 fewer row and 1 fewer column, so the above example would be "collapsed" to form a grid of 3 rows and 3 columns.
The new values are determined by the following rule -
new_grid[i][j] is dependent on
i) old_grid[i][j],
ii) old_grid[i][j+1],
iii) old_grid[i+1][j]
iv) old_grid[i+1][j+1]
If exactly one of the above values are 1, then new_grid[i][j] will be 1, else 0.
So for the example grid, out of [0][0], [0][1], [1][0] and [1][1], only [0][1] is 1, so [0][0] in the new grid will be 1. Similarly, out of [0][1], [0][2], [1][1] and [1][2], both [0][1] and [1][2] are 1, so [0][1] in new_grid will be 0.
The input is given in the form of new_grid values. I have to find out the number of possible configurations of old_grid, such that new_grid is possible through the collapsing rules provided.
My approach
The backtracking solution that I have currently thought of goes like this -
*
*Identify imaginary 2X2 boxes for each 1-valued cell in the old grid which would correspond to the appropriate cell in the new grid.
*All of these boxes will contain exactly one cell with value 1, so put 1 in a random cell in each box.
*Recursively check if putting 1s in the random cells ensures that each box still retains exactly one cell with value 1.
*If a grid configuration is finally obtained where every box contains exactly one cell with value 1, check if the configuration can be "collapsed" to get the new grid.
*If not, then repeat the process with a different cell with the value 1.
If there are some cells in the old grid which don't come under any "box", then they are what I have termed as "doesn't-matter" cells.
For example -
1 1
0 0
For the above new_grid, the old_grid can be -
1 0 1
0 0 0
0 0 0
or
1 0 1
0 0 0
1 1 1
The last row's cells are "doesn't-matter" cells since they don't come under any 2X2 box and they can all be 1s or 0s to be valid configurations (I am thinking that is the extent to which we can flexibly manipulate them, although I am not sure).
My question is this - This algorithm is quite possibly exponential in growth and it will take a lot of time for a grid of say, 50X10.
Is there any other way to solve this question? Or is there any clever algorithm to not go through every possible configuration in order to count them?
A: Hmm, so I have thought of a 2x3 newGrid like so:
newGrid: 0 1 0
0 0 0
Which would need to be produced by either one of these 3x4 oldGrids:
Each _ can be 1 or 0
oldGrid 1: _ 0 1 _
_ 0 0 _
_ _ _ _
oldGrid 2: _ 1 0 _
_ 0 0 _
_ _ _ _
oldGrid 3: _ 0 0 _
_ 1 0 _
_ _ _ _
oldGrid 4: _ 0 0 _
_ 0 1 _
_ _ _ _
And all the remaining 8 spots can be populated in 2^8 ways.
So the answer would be 4 * 2^8
However, imagine if newGrid had more than one 1:
newGrid: 1 1 0
0 0 0
Which will have these 8 oldGrids:
oldGrid 1: 1 0 _ _
0 0 _ _
_ _ _ _
oldGrid 2: 0 1 _ _
0 0 _ _
_ _ _ _
oldGrid 3: 0 0 _ _
1 0 _ _
_ _ _ _
oldGrid 4: 0 0 _ _
0 1 _ _
_ _ _ _
oldGrid 5: _ 1 0 _
_ 0 0 _
_ _ _ _
oldGrid 6: _ 0 1 _
_ 0 0 _
_ _ _ _
oldGrid 7: _ 0 0 _
_ 1 0 _
_ _ _ _
oldGrid 8: _ 0 0 _
_ 0 1 _
_ _ _ _
from oldGrid 1 I would have produced 2^8 combinations. But notice how some of those will be the same solutions produced by oldGrid 6. It's those that look like this:
oldGrid 1.1: 1 0 1 _
0 0 0 _
_ _ _ _
And it has 2^6 solutions.
So oldGrid 1 has 2^8 - 2^6 solutions that don't conflict with oldGrid 6.
And oldGrid 6 has 2^6 solutions that don't conflict with oldGrid 1.
And together they have (2^8 - 2^6) + (2^8 - 2^6) + 2^6 solutions.
1 and 6, 1 and 8, 2 and 5, 3 and 6, 3 and 8, 4 and 7 have conflicting solution spaces, each one with 2^6.
Which I think means the number of solutions are 8 * 2^8 - 6 * 2^6.
And that is:
numberOfSolutions = numberOf1s * 4 * 2^(oldGridSize-4) - overlappingSolutionsCount
overlappingSolutionsCount = numberOfOverlappingPairs * 2^(oldGridSize-4-overlapAreaSize)
How to calculate the overlap
function countOverlappingSolutions(newGrid: an MxN matrix) {
result = 0
oldGridSize = (M+1) * (N+1)
for each pair of 1s in the newGrid:
let manhattanDistance = manhattan distance between the 1s in the pair
let overlapAreaSize = 0
if the 1s are in the same row or column
if manhattanDistance == 1:
overlapAreaSize = 2
else if manhattanDistance == 2
overlapAreaSize = 1
result += 2^(oldGridSize -4 -overlapAreaSize)
return result
}
The final algorithm:
let newGrid be a MxN matrix
let numberOf1s = number of 1s in newGrid
let oldGridSize = (M+1) * (N+1)
result = numberOf1s * 4 * 2^(oldGridSize - 4) - countOverlappingSolutions(newGrid)
I couldn't make the effort to write python code but I hope the solution is correct and/or shows the way
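Since no Python made it into the answer, here is a tiny brute-force counter (only feasible for very small grids, and the function names are just illustrative) that can be used to sanity-check the counting argument on examples like the 2x3 one above:
from itertools import product

def collapse(old, rows, cols):
    # A new cell is 1 iff exactly one cell of its 2x2 block in the old grid is 1.
    return [[1 if old[i][j] + old[i][j+1] + old[i+1][j] + old[i+1][j+1] == 1 else 0
             for j in range(cols)]
            for i in range(rows)]

def count_old_grids(new_grid):
    # Enumerate every possible old grid and count the ones that collapse to new_grid.
    rows, cols = len(new_grid), len(new_grid[0])
    n_cells = (rows + 1) * (cols + 1)
    count = 0
    for bits in product((0, 1), repeat=n_cells):
        old = [list(bits[r * (cols + 1):(r + 1) * (cols + 1)]) for r in range(rows + 1)]
        if collapse(old, rows, cols) == new_grid:
            count += 1
    return count

print(count_old_grids([[0, 1, 0], [0, 0, 0]]))   # the 2x3 example above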
| |
doc_267
|
The list I want to create is a list of elements that is embedded in joins which makes the operation more complicated. The query that my coworker helped me write is the following:
''' --- Query written with co-worker's help
SELECT LOWER(some_query_query) as query,
AVG(n_results::FLOAT)::FLOAT as
n_results_avg,
count(*) as data_count
from some_field
JOIN
(SELECT
request_id,
some_id,
count(*) as n_results
from s_results
WHERE type_name = 'tinder_match'
AND time <= '2019-06-20'
AND time >= '2019-06-19'
GROUP BY request_id, some_id) as n_count
ON n_count.request_id = some_field.request_id
WHERE time <= '2019-06-20'
AND time >= '2019-06-19'
AND language = 'en'
AND country = 'US'
GROUP BY LOWER(some_query_query)
ORDER BY n_results_avg DESC
--- Current Behaviour: Returns a table with query, n_results_avg, data_count as columns
--- Desired Behaviour: Returns a table with query, list_of_name_match_results, data_count as columns
--- list_of_name_match_results is a list containing all name match results (numbers)
'''
Actual results: Output table with query, name_match_results_avg, data_count as columns
Desired results: Output table with query, list_of_name_match_results, data_count as columns
| |
doc_268
|
Regex Expression:
<(\w+).+src=[\x22|'](?![^\x22']+mysite\.com[^\x22']+)([^\x22']+)[\x22|'].*>(?:</\1>)?
Using:
preg_replace($pattern, $2, $comment);
Comment :
Hi look at this!
<img src="http://www.mysite.com/blah/blah/image.jpg"></img>
<img src="http://mysite.com/blah/blah/image.jpg"></img>
<img src="http://subdomain.mysite.com/blah/blah/image.jpg"/>
<img src="http://www.mysite.fakesite.com/blah/blah/image.jpg"></img>
<img src="http://www.fakesite.com/blah/blah/image.jpg"></img>
<img src="http://fakesite.com/blah/blah/image.jpg"></img>
Which one is your favorite?
Wanted Outcome:
Hi look at this!
<img src="http://www.mysite.com/blah/blah/image.jpg"></img>
<img src="http://mysite.com/blah/blah/image.jpg"></img>
<img src="http://subdomain.mysite.com/blah/blah/image.jpg"/>
http://www.mysite.fakesite.com/blah/blah/image.jpg (notice that it's just url, because it's not from my site)
http://www.fakesite.com/blah/blah/image.jpg
http://fakesite.com/blah/blah/image.jpg
Which one is your favorite?
Anyone see anything wrong?
A:
I'm trying to use preg_replace to filter member comments. To filter script and img tags.
HTML Purifier is going to be the best tool for this purpose, though you want a whitelist of acceptable tags and attributes, not a blacklist of specific harmful tags.
A: The biggest thing wrong I can see is trying to use regex to modify HTML.
You should use DOMDOcument.
$dom = new DOMDocument('1.0', 'UTF-8');
$dom->loadHTML($content);
foreach($dom->getElementsByTagName('img') as $element) {
if ( ! $element->hasAttribute('src')) {
continue;
}
$src = $element->getAttribute('src');
$elementHost = parse_url($src, PHP_URL_HOST);
$thisHost = $_SERVER['SERVER_NAME'];
if ($elementHost != $thisHost) {
$element->parentNode->insertBefore($dom->createTextNode($src), $element);
$element->parentNode->removeChild($element);
}
}
A: You should use the i and m modifiers:
#<(\w+).+src=[\x22|'](?![^\x22']+mysite\.com[^\x22']+)([^\x22']+)[\x22|'].*>(?:</\1>)?#im
| |
doc_269
|
The way I currently have it, the program compares the user's data to each application and confirms or denies access. This seems inefficient. Any help is greatly appreciated!
Note: I am reading the applications and their numbers in from an XML, so I can store them as I wish.
A: If there are large numbers of numbers required per application, the best approach is to use set intersection. If the numbers are contiguous or at least dense, you can optimize this into a bitset. For only one or two numbers though, I'd recommend just testing each number individually, since it's likely to be faster than full set operations.
A: The solution:
*
*Define a class for each application (let's call it App). The class contains the name of the application, and a (sorted) List/array of entitlements.
*Use Map to map from String to App: Map<String, App> for all single entitlement apps (you can use HashMap or TreeMap - your choice). If there are multiple apps that only need one and the same entitlement, consider Map<String, List<App>>. Exclude the apps that need multiple entitlements from the Map, and store them in a separate List/array instead.
*When you are given a list of entitlements to retrieve apps, loop through the list and just grab everything that the Map maps the String to. For those that needs multiple entitlements, just check individually (you can speed up the checking a bit by sorting the list of entitlements given, and storing the entitlements to each App in sorted order - but it may not even matter since the size is small).
The solution reduces the time complexity for the operation. However, a few hundred apps times the number of entitlements of around 10 is quite small in my opinion, unless you call this many times. And it's better to time your original approach and this approach to compare - since the overhead might shadows any improvement in time.
A bit of further improvement (or not) is:
*
*Use Map<String, List<App>> and include even apps that need multiple entitlements (those app will be mapped to by many entitlements).
*When we search for app, we will use a Map<App, Integer> to keep track of how many entitlements that we have confirmed for multiple entitlement apps. So the flow will be like:
new mapAppInteger
foreach entitlement in inputListOfEntitlement
listOfApps = mapStringAppList.get(entitlement)
if listOfApps found
for each app in listOfApps
if app needs singleEntitlement
put app in output list
else // needs multiple
if app is in mapAppInteger
map app --> count + 1
if mapAppInteger.get(app) == app.numberOfRequiredEntitlement
put app in output list
remove app from mapAppInteger
else // not in mapAppInteger
map app --> 1
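A minimal sketch of that flow in Python (the names and data shapes are made up purely for illustration):
from collections import defaultdict

def find_accessible_apps(user_entitlements, apps):
    # apps: list of (name, required_entitlements) pairs.
    by_entitlement = defaultdict(list)          # entitlement -> apps that need it
    for name, required in apps:
        for ent in required:
            by_entitlement[ent].append((name, len(required)))

    matched = defaultdict(int)                  # app name -> entitlements seen so far
    granted = []
    for ent in set(user_entitlements):
        for name, needed in by_entitlement.get(ent, []):
            matched[name] += 1
            if matched[name] == needed:
                granted.append(name)
    return granted

apps = [("Mail", ["100"]), ("Payroll", ["200", "300"])]
print(find_accessible_apps(["100", "300", "200"], apps))   # both apps are granted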
| |
doc_270
|
When I connect the phone and browse with safari, the iphone doesn't appear in the Develop menu.
The process works fine with an ipad iOS 11 (I can access the debug console via the Develop menu).
Does anyone have an idea about this?
Thanks.
A: With Apple this usually means that you should update Safari to the latest version. Well, you better upgrade everything with Apple. Sometimes they break and fix things.
Another thing you could try is to reset the PRAM and SMC that often solve Mac hardware problems :
http://osxdaily.com/2010/03/24/when-and-how-to-reset-your-mac-system-management-controller-smc/
And last, check that you have enabled web inspector on the said iphone. Sometimes we forget the very basic thing or it might have been turned off by something.
Some people with the same problem solved it by using Safari Technology Preview: https://developer.apple.com/safari/download/
| |
doc_271
|
https://www.includehelp.com/stl/sort-an-array-in-descending-order-using-sort-function.aspx
How to sort C++ array in ASC and DESC mode?
https://www.geeksforgeeks.org/sort-c-stl/
but none address the question of doing this for a std::array and not primitive int myArr[] type.
I have a code like this:
#include <iostream>
#include <array>
#include <string>
#include <algorithm>
#include <functional>
using namespace std;
int main(){
array<int, 5> myArray = {30, 22, 100, 6, 0};
for(int item : myArray){
cout << item << endl;
}
sort(myArray.begin(), myArray.end());
cout << "NOW, SORTED: " << endl;
for (int otheritem: myArray){
cout << otheritem << endl;
}
}
Which produces:
30
22
100
6
0
NOW, SORTED:
0
6
22
30
100
However, I am trying to produce this output:
100
30
22
6
0
By sorting the array in descending order. I have tried following the tips from the SO post above:
sort(myArray, myArray.size()+n, greater<int>());
But that generates error:
no instance of overloaded function "sort" matches the argument list -- argument types are: (std::array<int, 5ULL>, unsigned long long, std::greater<int>)
How can I sort standard array of int in descending order?
A: Unlike raw arrays, std::array won't convert to a pointer implicitly (even though you can get the pointer explicitly from std::array::data). You should use begin() and end(), which are commonly used to get iterators from STL containers, e.g.
sort(myArray.begin(), myArray.end(), greater<int>());
or
sort(std::begin(myArray), std::end(myArray), greater<int>());
PS: The latter works with raw arrays too.
| |
doc_272
|
try {
long start = System.currentTimeMillis();
// 1) Load DOCX into XWPFDocument
InputStream is = new FileInputStream(new File(
"/mnt/sdcard/HelloWorld.docx"));
XWPFDocument document = new XWPFDocument(is);
// 2) Prepare Pdf options
PdfOptions options = PdfOptions.create();
// 3) Convert XWPFDocument to Pdf
OutputStream out = new FileOutputStream(new File(
"/mnt/sdcard/HelloWorld.pdf"));
PdfConverter.getInstance().convert(document, out, options);
System.err.println("Generate pdf/HelloWorld.pdf with "
+ (System.currentTimeMillis() - start) + "ms");
} catch (Throwable e) {
e.printStackTrace();
}
I searched Google a lot and got many suggestions, like Project --> Clean, editing eclipse.ini, changing ProGuard in the SDK, etc. I tried everything, but none of it helped.
Can anybody help me or suggest a solution? Thanks.
| |
doc_273
|
How do I get it ?
In my Play Store listing I simply added the app-ads link at:
=> Store settings
=> Store Listing contact details
=> Website
Please check my screenshot :
A:
For Apple App Store: Add your developer website in the marketing URL
field of your store listing.
For Google Play: Add the website URL in the contact information of
your app listing:
To update the developer website on the Apple App Store, you need to publish an update to the app. Hopefully they change this policy in the future.
Reference - https://support.google.com/admob/answer/9363762?hl=en
To troubleshoot any issues with apps-ads.txt - https://support.google.com/admob/answer/9776740
| |
doc_274
|
Matcher matcher = pattern.matcher(pageText);
int count = 0;
while (matcher.find()) {
count++;
}
count stays 0 because the space is missing in my pageText variable.
Is there a way to ignore the whitespace difference so that the pattern "How are you" can still be matched?
A: One of the simplest ways could be replacing spaces with \s* in the regex pattern, so it would look more like "How\\s*are\\s*you" and could match Howareyou, How areyou, or Howare you.
String pageText="Hello World, How areyou doing";
Pattern pattern = Pattern.compile("How are you".replaceAll("\\s+","\\\\s*"));
Matcher matcher = pattern.matcher(pageText);
int count = 0;
while (matcher.find()) {
count++;
}
System.out.println(count);
Edit:
Since you are using Pattern.quote to escape all regex special characters, adding \s* inside doesn't make much sense since it will also be escaped. A simple solution is to quote only the words, since only they can contain regex metacharacters which require escaping, so we are looking for a solution which will build something like
quote(word1)\s*quote(word2)\s*quote(word3)
Code for that can look like:
String pageText = "Hello World, How areyou doing";
String searchFor = "How are you";
String searchingRegex = Stream.of(searchFor.split("\\s+"))//stream of words
.map(word -> Pattern.quote(word))//quote each word
.collect(Collectors.joining("\\s*"));//join words with `\s*` delimiter
Pattern pattern = Pattern.compile(searchingRegex);
//...
| |
doc_275
|
The problem is that I am not able to capture the formString query on the server side. Below is the method I used to try to capture the data, unsuccessfully. The echo json_encode($name) is returning nothing to the HTML page.
I tried the query with several input field values serialized and it did not work. I also tried to submit the query string as a simple string containing only the first name 'John', but it did not work either.
processForm()
var name = document.getElementById("fullName").value;
var formString = name;
var xhr = new XMLHttpRequest();
xhr.open('POST', formfile.php, true);
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.onreadystatechange = function () {
if(xhr.readyState == 4 && xhr.status == 200) {
var result = xhr.responseText;
xhr.send(formString);
button.addEventListener("click", function(event) {
event.preventDefault();
processForm();
PHP snippet:
header('Content-Type: application/json');
function is_ajax_request(){
return isset($_SERVER['HTTP_X_REQUESTED_WITH']) &&
$_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest';
}
if(is_ajax_request()) {
$Ajax_results = array (
'Testing Text' => 'Hello World',
'Your Ajax submittal succeeded.
);
echo json_encode($Ajax_results);
} else {
$None_Ajax_results = array (
'errors' => 'None Ajax - short'
'Your Ajax submittal failed. Errors.'
);
echo "Form is Non Ajax Submitted";
echo json_encode($None_Ajax_error);
exit;
}
Define and set variables:
global $name;
$errors = null;
if (isset($_POST['name'])) { $name = $_POST['name']; }
else { $name = ''; }
echo '$name';
echo json_encode($name);
A: If I am reading your question correctly and assuming you have proper heartbeat between Ajax and the server as you state you do, taking a quick look at your code as provided you are not properly formatting your "formString". In order for your formString to properly show up in the $_POST['name'] it should be:
var formString = "name="+name
This is because the POST string being sent ("formString" in your case) should have the format:
field1=val1&field2=val2& ... fieldN=valN
where the name of each field is stated, followed by '=' and the value of the field. Multiple fields are separated by the '&' character, which in PHP will translate to
$_POST = {field1=>val1, field2=>val2, ... fieldN=>valnN}
on the server side. This is of course not literal code above but an example of the standard API. Take a closer look at how to format Post strings for HTML GET/POST
| |
doc_276
| ||
doc_277
|
from transformers import BertModel, BertTokenizer, BertConfig
config=BertConfig.from_pretrained(arch)
In these lines, I want to use a variable model name rather than the hard-coded Bert, so that I can change it (Bert <--> CTRL <--> any other model).
A: Yes you can
from transformers import BertConfig as btcfg
config=btcfg.from_pretrained(arch)
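If the goal is to pick the class from a variable at runtime rather than just rename the import, one generic Python pattern is getattr on the module. A minimal sketch (the helper name is made up, and it assumes the composed class name, e.g. BertConfig or CTRLConfig, really exists in transformers):
import transformers

def load_config(model_prefix, arch):
    # Look the class up by name on the transformers module, e.g. "Bert" -> BertConfig.
    config_cls = getattr(transformers, f"{model_prefix}Config")
    return config_cls.from_pretrained(arch)

# config = load_config("Bert", arch)   # equivalent to BertConfig.from_pretrained(arch)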
| |
doc_278
|
greys = (r+g+b)./3;
fc = cat(3, r, g, b);
combined = (greys+fc)./2; <---error occurs here
But when my code gets to the greys+fc part, it throws an error. This error:
Error using +
Matrix dimensions must agree.
Error in imgSimpleFilter (line 36)
combined = (greys+fc)./2;
when I print the number of rows and columns in both the greys and fc matrices, I get 400 for all of the values (which is exactly as I expected, as I am working with a 400x400 image).
Why isn't it letting me add these together?
I have no problems with the line
greys = (r+g+b)./3;
and that's adding three 400x400 matrices together. Any ideas?
A: You can't add them because greys is 400x400, while fc is 400x400x3.
Try typing size(greys) and size(fc) on the command line, or whos greys fc to see it.
If you want to "combine" them by averaging them, you could use bsxfun:
combined = bsxfun(@plus, greys, fc) ./ 2;
| |
doc_279
|
~ $ pwd
pwd/Users/me
You can see that it put "pwd" before the directory there, which is annoying.
My shell(zsh) doesn't do this when I run commands outside of tmux.
show-environment -g doesn't reveal any weird options being passed to zsh or anything: SHELL=/bin/zsh
I read through the manpage and Googled around but I can't find anything.
Thanks for any help!
A: Figured it out -- needed to change my ~/.tmux.conf to have a different TERM(xterm instead of screen-256color):
# act like vim
setw -g mode-keys vi
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
bind-key -r C-h select-window -t :-
bind-key -r C-l select-window -t :+
# act like GNU screen
unbind C-b
set -g prefix C-a
# look good
#set -g default-terminal "screen-256color"
set -g default-terminal "xterm"
set -g status "off"
| |
doc_280
|
import boto3
aws_access_key_id = '...............'
aws_secret_access_key = '................'
tkn = '..........'
region_name = '............'
amz = boto3.client('sagemaker-runtime',
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
aws_session_token=tkn,
region_name=region_name)
response = amz.invoke_endpoint(
EndpointName='mymodel',
Body=b'bytes'
)
However, this doesn't work. Do I have to specify something else in Body ?
A: You can use a boto3 session. I assume you have already prepared the JSON inputs and that your AWS credentials are already in ~/.aws/credentials.
import boto3, json, sagemaker
sagemaker_session = sagemaker.Session()
role = "YOUR-SAGEMAKER-EXECUTION-ROLE"
region = boto3.Session().region_name
endpointName= 'YOUR ENDPOINT NAME'
predictor = sagemaker.predictor.RealTimePredictor(
endpointName,
sagemaker_session=sagemaker_session,
content_type="application/json")
d='YOUR JSON LINES- YOU CAN OPEN WITH PYTHON BUILT IN FUNCTIONS'
response=predictor.predict(json.dumps(d))
response contains the answer body, which is formatted as JSON. You can parse it and use your results.
A: Each endpoint expects different binary data. By specifying Body=b'bytes' you're passing the bytes of the string literal bytes, while you should be passing some actual input data to infer off.
According to the docs, it is recommended to include the relevant ContentType of the input data you're sending.
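For illustration, a minimal sketch of such a call reusing the amz client from the question (the payload shape is a placeholder and has to match what your model actually expects, and the example assumes the model answers with JSON):
import json

payload = {"instances": [[1.0, 2.0, 3.0]]}   # placeholder input

response = amz.invoke_endpoint(
    EndpointName="mymodel",
    ContentType="application/json",
    Body=json.dumps(payload),
)

# The response body is a streaming object; read and decode it.
result = json.loads(response["Body"].read().decode("utf-8"))
print(result)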
You said:
However, this doesn't work.
What is the error you're getting back?
| |
doc_281
|
@Emp nvarchar(50),
@Start_Date nvarchar(50),
@End_Date nvarchar(50)
as
WITH Ordered
AS (SELECT CONVERT(VARCHAR(15), cast(Substring(unitts, 1, 8) AS DATE), 105) AS Data,
Substring(UnitTS, 9, 2) + ':' + Substring(UnitTS, 11, 2) AS EventTime,
CASE
WHEN RdrHead = 'A' THEN 'OUT'
ELSE 'IN '
END AS Reader,
[RdrName],
[CrdName],
IDENTITY (int, 1, 1) AS rn,
UnitTS
INTO #TEMP --rn = row_number() over (order by Crdname,UnitTs)
FROM TandA.dbo.History
WHERE ( UnitNr = '3'
AND RdrNr IN ( '0', '2', '3' )
OR UnitNr = '4'
AND RdrNr IN( '1', '6' ) )
AND Type = 'A'
AND Sign = '+'
AND Substring(unitts, 1, 8) >= @Start_Date
AND Substring(unitts, 1, 8) <= @End_Date
AND ( CrdName IN ( @mp )
OR @emp = 'all' )
SELECT *
FROM #TEMP
ORDER BY rn
DROP TABLE #TEMP)
SELECT o_out.CrdName,
o_out.RdrName,
o_out.Data,
CASE
WHEN o_in.EventTime IS NULL THEN 'Necunoscut'
ELSE o_in.EventTime
END In_Time,
[Out_Time] = o_out.EventTime,
CASE
WHEN cast(datediff (s, o_in.EventTime, o_out.EventTime) AS INT) IS NULL THEN '0'
ELSE cast(datediff (S, o_in.EventTime, o_out.EventTime) AS INT)
END Duration
FROM Ordered o_out
LEFT JOIN Ordered o_in
ON o_in.rn = o_out.rn - 1
AND o_in.Reader = 'in'
WHERE o_out.Reader = 'out'
A: The syntax of your query is incorrect. You cannot create and drop the #TEMP table within the CTE query.
BTW, CTE is not required on this case, given that all the info you need is on the #TEMP table. You can rewrite the query as follows:
SELECT CONVERT(VARCHAR(15), cast(Substring(unitts, 1, 8) AS DATE), 105) AS Data,
Substring(UnitTS, 9, 2) + ':' + Substring(UnitTS, 11, 2) AS EventTime,
CASE
WHEN RdrHead = 'A' THEN 'OUT'
ELSE 'IN '
END AS Reader,
[RdrName],
[CrdName],
IDENTITY (int, 1, 1) AS rn,
UnitTS
INTO #TEMP --rn = row_number() over (order by Crdname,UnitTs)
FROM TandA.dbo.History
WHERE ( UnitNr = '3'
AND RdrNr IN ( '0', '2', '3' )
OR UnitNr = '4'
AND RdrNr IN( '1', '6' ) )
AND Type = 'A'
AND Sign = '+'
AND Substring(unitts, 1, 8) >= @Start_Date
AND Substring(unitts, 1, 8) <= @End_Date
AND ( CrdName IN ( @mp )
OR @emp = 'all' )
ORDER BY rn;
SELECT o_out.CrdName,
o_out.RdrName,
o_out.Data,
CASE
WHEN o_in.EventTime IS NULL THEN 'Necunoscut'
ELSE o_in.EventTime
END In_Time,
[Out_Time] = o_out.EventTime,
CASE
WHEN cast(datediff (s, o_in.EventTime, o_out.EventTime) AS INT) IS NULL THEN '0'
ELSE cast(datediff (S, o_in.EventTime, o_out.EventTime) AS INT)
END Duration
FROM #TEMP o_out
LEFT JOIN #TEMP o_in
ON o_in.rn = o_out.rn - 1
AND o_in.Reader = 'in'
WHERE o_out.Reader = 'out';
DROP TABLE #TEMP;
| |
doc_282
|
I have already seen so many articles and videos about Nullable; they just keep saying we won't have to worry about NullReferenceException any more.
They also keep saying there are so many ways to use it: Disable, Enable, Warning, Annotations... bla bla bla.
And they introduce a lot of ways to use it with: ?., ??, ??=, NotNullWhenTrue, NotNullWhenFalse... etc.
But I didn't see anyone tell us how to use it when it is disabled.
We had a lot of scenarios where we used null before.
1. Property:
// What is the default value when nullable is disabled, and how should we use it?
public string Name { get; set; }
2. Linq:
Person model = PersonList.Where(x => x.id == id).FirstOrDefault();
if (null != model)
{
// Do something
}
// What should we do when nullable is disabled, what is the default value now, and how can we check whether it is a default value or not?
3.Temporary variable:
string ageDescription = null;
if (student.Age > 13)
{
ageDescription = "X";
}
if (student.Age > 15)
{
ageDescription = "XL";
}
if (student.Age > 18)
{
ageDescription = "XXL";
}
System.Diagnostics.Debug.WriteLine($"The Student size: {(ageDescription ?? "Not found")}");
// What should we do in this case, always assign "Not found" at the beginning?
Or
string description = null;
if (student.Score < 70)
{
description = "C";
}
if (student.Score > 70)
{
description = "B";
}
if (student.Score > 80)
{
description = "A";
}
if (student.Score > 90)
{
description = "AA";
}
student.description = description;
JsonConvert.Serialize(student, {with Ignore Null Option for save more space});
// How do we save JSON size and space if we disable nullable?
Or
string value = null;
try {
value = DoSomething();
if (value == "Something1")
{
Go1();
}
if (value == "Something2")
{
Go2();
}
if (value == "Something3")
{
Go3();
}
} catch (Exception ex)
{
if (null == value)
{
GoNull();
}
else
{
GoOtherButException(ex)
}
}
// How should we handle this kind of problem?
4.Entity Framework
// The tables always have null fields; how do we deal with them when nullable is disabled?
I know there are many more scenarios we might have to handle. I feel like they are just bluffing that the Nullable features are awesome, without giving us any direction or pointing out a good way to use them.
I hope someone who already uses C# 10 can point out how to change our old-fashioned code style after disabling nullable, and give us some examples showing how we should do it in the future. Thanks
--------Update1--------
I add some more variable examples.
--------Update2--------
Some folks just said that we could use whatever way we want, based on our requirements. If you want to use it, just simply add ? like:
string? name = null
But I was rather hoping they could just tell me: use String.Empty to replace null in every place. Ha ha....
But in that case I still need to check if (variable != String.Empty) in every place; we could avoid the null reference exception, but I'm also not sure how much space String.Empty takes in memory.
So why doesn't anyone tell us to do that: when they tell us to disable nullable, in what way do we need to change our code style?
Another thing I really can't get: how do we check the default value of a Linq query when using FirstOrDefault(), where before we always used if (null != model)?
Maybe what I really want to know is: what will the world look like in the future if we all disable nullable?
A: The first thing that I feel needs clearing up is that enabling nullable reference types will not affect whether your code can build or not. It has no impact on default values.
Nullable reference types are meant to enable you to express your design intent more clearly with nullable and non-nullable reference types. That is literally the title of the awesome tutorial: nullable reference types, and I highly suggest anyone trying to dive into this feature read it.
The nullable reference types feature allows you to explicitly state: I never expect this (nullable type) property to be unknown. Thus it should always have a value.
To state that you expect a property to always have a value you define it as usual
public string Text { get; set; }.
To explicitly state that a property is not certain to have a value it would be defined with a '?' after the type
public string? Text { get; set; }
Hopefully the intent of this feature is now, more or less, clear. Let's dig into your concrete questions.
1. Property:
your question:
// What is the default value when nullable is disabled, and how should we use it?
public string Name { get; set; }
A string or class property without initialization will still be null. An int without explicit initialization will still be 0, and a boolean without explicit initialization will still default to false. So as not to repeat the same text on how to use it, I will only repeat this: you should use it to express your design intent more clearly with nullable and non-nullable reference types. It is up to you to mark all classes in the solution according to whether or not each property has this characteristic. No, this is not fun to do. But you also don't need to do it in one go. You can easily just enable the feature (and have a ton of warnings) and steadily progress towards a solution that has more or less all classes covered.
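For completeness, the feature is usually switched on project-wide in the csproj file (or per file with #nullable enable):
<PropertyGroup>
  <Nullable>enable</Nullable>
</PropertyGroup>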
2. Linq:
Person model = PersonList.Where(x => x.id == id).FirstOrDefault();
// How should we do this when nullable is disabled, what is the default value now, and how can we check whether it is a default value or not?
If you have a Linq query where you are not certain you can actually find what you are looking for (as is the reality for a lot of cases) .FirstOrDefault() can still be used. But Person model should express that it might potentially be null by changing it to Person? model.
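A small sketch of what that might look like, reusing the Person and PersonList names from the question:
Person? model = PersonList.Where(x => x.id == id).FirstOrDefault();
if (model is not null)
{
    // Do something; inside this branch the compiler treats model as non-null
}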
3.Temporary variable:
You want to have a temp variable that is null at first and assign it conditionally? No problem, just add that magic '?' after it.
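For example, the age-description snippet from the question only needs its declaration changed:
string? ageDescription = null;
if (student.Age > 13)
{
    ageDescription = "X";
}
// ... the remaining conditions stay exactly as they were ...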
OR! If you definitely know you are going to give it a value why not assign an empty value to it? For example:
string description = string.Empty;
if (student.Score < 70)
description = "C";
// ... some more conditions ...
student.description = description;
4. Entity Framework
I reckon that in the data access layer it will become super clear which properties to mark as will never be null and the ones that might. Just take a look at the database model and queries / stored procedures and the null checks you do before writing to the database. Everything that is unable to be null in your database will be in your data model class and everything that might contain will be marked with the '?' after the type in the data model class.
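A rough sketch of what such a data model class could look like (the entity and property names here are made up for illustration):
public class StudentEntity
{
    public int Id { get; set; }                  // NOT NULL column
    public string Name { get; set; } = null!;    // NOT NULL column; initialised to silence the constructor warning
    public string? Description { get; set; }     // nullable column, so marked with '?'
    public DateTime? GraduatedOn { get; set; }   // nullable value types work the same way as before
}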
What is the world like if Nullable Reference Types are enabled?
Your IDE will show you warnings of potential unexpected null. This will, if the team works to reduce warnings, result in properties that explicitly state if they could be null, or not. This is done by adding a "?" after the property type. If you are certain that something is not null you can add a "!" to suppress the warning.
For DTO classes the "!" is something that is used often. This is due to the necessity for the DTO class to have an empty constructor and its properties to be settable using set; or init; properties. A warning will be shown. Here either the warning can be disabled for the file or the properties can be initialized with a default value (which can be null) with the warning suppressed using the "!". For both ways the properties should be null guarded prior to using them.
Suppress the warning by setting a default value using "!":
public List<string> Comments { get; set; } = null!;
A usual occurring phenomenon of this is that you will most likely see null guard clauses decreasing. This can be done if you are certain that the code being called is guaranteed to comply with the nullable references types feature. It is worth noting that these properties might still be null and thus you open yourself up for null reference exceptions instead of argument null exceptions. Nonetheless in my opinion this is worth the clarity and reduced size. But to each their own on how they see this. Stackoverflow question: When to null-check arguments with nullable reference types enabled
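As a hypothetical illustration of that trade-off (the method names here are invented):
public void PrintName(Person person)
{
    // No ArgumentNullException guard: the signature already says "never null".
    Console.WriteLine(person.Name);
}

public void PrintNameIfKnown(Person? person)
{
    // Guard only where null is actually part of the contract.
    if (person is null) return;
    Console.WriteLine(person.Name);
}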
The end result is that your code will clearly express which variables are expected to sometimes have no value, and thus should be handled in a null-aware manner. This might result in fewer guard clauses, with the benefit of less (boilerplate) code where possible.
References
I really do hope that I managed to help you out a bit. If you have any more thirst from knowledge beside the tutorial here are some more really good docs regarding nullable reference types:
*
*tutorials: nullable reference types
*Working with Nullable Reference Types
*Nullable reference types
| |
doc_283
|
When I try to copy data from backups to desired tables with pg_restore (pg_restore -d db -U user --table desired_table my_table.backup) I get the following errors:
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 234; 1259 16514 TABLE "my_table" user
pg_restore: error: could not execute query: ERROR: relation "table" already exists
Command was: CREATE TABLE public.table (....);
pg_restore: from TOC entry 3277; 0 16514 TABLE DATA my_table db
pg_restore: error: COPY failed for table "my_table": ERROR: duplicate key value violates unique constraint "my_table_pkey"
DETAIL: Key (id)=(3) already exists.
CONTEXT: COPY my_table, line 2
pg_restore: warning: errors ignored on restore: 2
I know that duplicate keys are the problem, but how can I avoid this and just add the data from the my_table backup to the target tables that already have data in them?
| |
doc_284
|
figure
hist(ligand,50)
h=findobj(gca,'Type','patch');
set(h,'FaceColor',[0 .5 .5],'EdgeColor','w')
hold on;
hist(potassium,50)
g=findobj(gca,'Type','patch');
set(g,'FaceColor',[0 1 1],'EdgeColor','w')
hold on;
hist(rectifier,50)
title('Alignment to AFP1')
xlabel('Score'); ylabel('Number of Sequences')
hold off;
where the first colour is [0 .5 .5], the second [0 1 1] and the third is the default colour. However even though I have specified two separate colours for the first two using two handles, h and g - both are the same colour, using the g handle.
What am I doing wrong?
edit - this is for Luis Mendos's suggestion - I am getting an "index exceeds matrix dimensions" with the following
figure
hist(ligand,50)
g=findobj(gca,'Type','patch');
set(g(1),'FaceColor',[0 .5 .5],'EdgeColor','w')
hold on;
hist(potassium,50)
set(g(2),'FaceColor',[0 1 1],'EdgeColor','w')
hist(rectifier,50)
title('Alignment to AFP1')
xlabel('Score'); ylabel('Number of Sequences')
hold off;
Thanks.
A: The problem is that g is a two-element vector, because it includes the two histograms that have already been plotted. Remove the lines with h (lines 3 and 4) and replace the line set(g,...) by
set(g(1),'FaceColor',[0 .5 .5],'EdgeColor','w')
set(g(2),'FaceColor',[0 1 1],'EdgeColor','w')
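Putting the advice together, the corrected script might look like this (variable names are taken from the question; if the two colours come out swapped, exchange the two set calls, since the order of the handles returned by findobj depends on the plotting order):
figure
hist(ligand,50)
hold on;
hist(potassium,50)
g=findobj(gca,'Type','patch');  % both patch objects now exist
set(g(1),'FaceColor',[0 .5 .5],'EdgeColor','w')
set(g(2),'FaceColor',[0 1 1],'EdgeColor','w')
hist(rectifier,50)
title('Alignment to AFP1')
xlabel('Score'); ylabel('Number of Sequences')
hold off;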
| |
doc_285
|
private void button1_Click(object sender, EventArgs e)
{
//Populate
ProjectContext context = new ProjectContext();
context.TbUser.ToList()
.ForEach(p => Console.WriteLine(string.Format("{0}
{1}",p.name,p.surname)));
}
Insert
private void button2_Click(object sender, EventArgs e)
{
//Insert
ProjectContext context = new ProjectContext();
context.TbUser.Add(new TbUser {name= "aa", surname = "bb" });
context.SaveChanges();
}
I have searched around but not found a solution for UPDATE and DELETE. Thanks in advance for any answers.
| |
doc_286
|
-bash: //anaconda/bin/python: No such file or directory
Does anyone know how to get python2 working again? I'm fine with uninstalling python3 and I'm on Mac btw. Thank you! :)
A: When you deleted your install, you didn't really change where your system is looking for Python. You could change your system's PATH variable so that it doesn't look in the anaconda folder, but I sometimes find environment variables wonky (I probably shouldn't). You could add a line like:
alias python="/path/to/your/python"
to your .bashrc file, but there are better ways!
Rather than deleting your install, I recommend running different virtual environments for the different version of Python that you need. Basically run the following commands in your terminal to create an environment for Python 2.7 (leave off the $ and anything before it; that just indicates the terminal prompt):
$ conda create -n py27 python=2.7 anaconda
$ source activate py27
[py27] $ python --version
which (for me) returns:
Python 2.7.11 :: Anaconda 4.0.0 (64-bit)
The last line verifies which version of Python you are running now. For Python 3.4 use the same, but name it py34 (or whatever you like) and replace python=2.7 with python=3.4.
When you want to use 2.7, use source activate py27, then start python or ipython or whatever. When you want to use 3.4, use source activate py34 instead. When you are done with either, type source deactivate.
If you need to point a development environment (like PyDev for Eclipse) to your binaries you can find them under your anaconda folder; something like /path/to/anaconda/envs/py27/ or .../envs/py34/.
A: Try
python2 myscript.py
instead of
python myscript.py
Since python points to python3 in your case. If you want to uninstall python3: refer this
Alternatively, you may change the python alias using:
alias python=/usr/local/bin/python<version>
Add it to ~/.bashrc (or ~/.profile for few of the OS X versions) if you want it to point to python2.x over 3 always on the shell.
| |
doc_287
|
Is there a service out there I can integrate with for handling the accounts and collecting the monthly subscription fee (using credit cards or something else)?
A: There are several services out there that can help you with this - Recurly (where I work), Chargify, CheddarGetter, Spreedly, etc. These services handle the recurring billing aspect, customer management, email communication, following up with failed payments, etc. You'll also need a payment gateway (to process the cards) and a merchant account (for the payment gateway to deposit the funds into).
Authorize.net and PayPal both have recurring billing features, but the logic for upgrading/downgrading accounts is not there or difficult to use, and you'll still need to handle customer communication for failed payments and other actions.
A: You need to use a payment gateway here, which will be responsible for handling the transaction between your site and the many different payment networks. There are a lot of operations happening in between, so you might want to check out the wikipedia article for a step by step information on that.
We personally use authorize.net in our company for many of its advantages, some of which are:
*
*It has an API that makes it easy to integrate with any language.
*It is a trusted brand already, proven by the number of merchants that use them.
*It is secure.
*It provides the service with a reasonable price.
A: Most major payment gateway providers support recurring billing or subscription plans (PayPal, Authorize.Net, etc.). Most of the time you have to log in to your account admin console, configure a plan, and send the payment plan id with the payment request to the payment gateway. Some payment gateway providers, like Braintree, support creating recurring billing plans dynamically and assigning users to a plan at run time. However, it's always better to go for a local payment gateway provider or one with low fees. If your preferred payment gateway provider does not support recurring billing, another option is to store CC details on the server and handle it yourself, but storing CC details on the server is a great risk: you would have to follow PCI standards, and that's hard.
| |
doc_288
|
`index`, `timestamp`, `refNum`, `priority`, `status`, `type`, `email`, `telephone`, `title`, `description`, `system`, `category`, `time`, `requiredBy`, `dateReason`, `oneOff`, `url`, `browserDevice`, `jsessionid`, `customersProducts`, `investigationsUndertaken`, `whatquestions`, `supportingInformation`, `open`
I am quite new to trying to nest a query and I am trying to select all records in my incidents table where the title or status match a keyword and then I only want the results which occur between a selected timestamp range. Both the individual queries here work on their own and return results but when I try and nest them I get the error #1241 - Operand should contain 1 column(s). What am I doing wrong?
SELECT
*
FROM
`incidents`
WHERE
`timestamp` BETWEEN '2014-12-01 00:00:00' AND '2015-11-23 23:59:59'
IN
(SELECT * FROM `incidents` WHERE `title` LIKE '%test%' OR `status` LIKE '%test%')
Ideally I want to return all fields, hence the use of *, where it's between my timestamp range and the status or title contain a keyword I specify. In the example query above I have used the keyword 'test'.
A: When you write a subquery, it's generally expected that you will provide an index column for the main query to return from. You cannot return SELECT * because there's no implicit column to key off of. I picked index (hopefully that's actually an index)
SELECT
*
FROM
`incidents`
WHERE
`timestamp` BETWEEN '2014-12-01 00:00:00' AND '2015-11-23 23:59:59'
AND index IN
(SELECT index FROM `incidents` WHERE `title` LIKE '%test%' OR `status` LIKE '%test%')
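Since both conditions filter the same incidents table, the subquery could arguably be dropped entirely; a sketch of the flattened version:
SELECT
*
FROM
`incidents`
WHERE
`timestamp` BETWEEN '2014-12-01 00:00:00' AND '2015-11-23 23:59:59'
AND (`title` LIKE '%test%' OR `status` LIKE '%test%')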
| |
doc_289
|
I am sending the form data as
clientId test123
clientScopes local
clientSecret AfrTest123
clientSecretOnly true
clientServices[] test
clientVersions []
module clientConfig
save Save
On the API side I am doing something like this.
TestConfiguration config = Json.fromJson(request().body().asJson(), TestConfiguration.class);
| |
doc_290
|
I also tried encoding as base64 and then using window.atob(data) to decode on the other side but got all sorts of weird characters.
@objc func validateReceipt(subscriptionType: String?, callback successCallback: RCTResponseSenderBlock) -> Void {
let receiptUrl = NSBundle.mainBundle().appStoreReceiptURL
let receipt: NSData = NSData(contentsOfURL: receiptUrl!)!
let receiptdata: NSString = receipt.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))
successCallback([receipt])
}
Any help would be much appreciated!
| |
doc_291
|
Don't you really hate it when the bot asks you to type more? It's mostly code for a reason; I don't have a clue, and I posted all my code!!!
const express = require("express");
const expressGraphQL = require("express-graphql");
const graphql = require("graphql");
const {
GraphQlSchema,
GraphQlObjectType,
GraphQLString,
GraphQLList,
GraphQLInt,
GraphQLNonNull,
} = graphql;
const app = express();
const authors = [
{ id: 1, name: "J. K. Rowling" },
{ id: 2, name: "J. R. R. Tolkien" },
{ id: 3, name: "Brent Weeks" },
];
const books = [
{ id: 1, name: "Harry Potter and the Chamber of Secrets", authorId: 1 },
{ id: 2, name: "Harry Potter and the Prisoner of Azkaban", authorId: 1 },
{ id: 3, name: "Harry Potter and the Goblet of Fire", authorId: 1 },
{ id: 4, name: "The Fellowship of the Ring", authorId: 2 },
{ id: 5, name: "The Two Towers", authorId: 2 },
{ id: 6, name: "The Return of the King", authorId: 2 },
{ id: 7, name: "The Way of Shadows", authorId: 3 },
{ id: 8, name: "Beyond the Shadows", authorId: 3 },
];
const BookType = new GraphQlObjectType({
name: "Book",
description: "A Book written by an author",
fields: () => ({
id: { type: GraphQLNonNull(GraphQLInt) },
name: { type: GraphQLNonNull(GraphQLString) },
authorId: { type: GraphQLNonNull(GraphQLInt) },
}),
});
const RouteQueryType = new GraphQlObjectType({
name: "Query",
description: "Root Query",
fields: () => ({
books: new GraphQLList(BookType),
description: "List of Books",
resolve: () => books,
}),
});
const schema = new GraphQlSchema({
query: RouteQueryType,
});
app.use(
"/graphql",
expressGraphQL({
schema: schema,
graphiql: true,
})
);
app.listen(5000, () => console.log("server running"));
A: Wrong capitalisation: GraphQlObjectType should be GraphQLObjectType.
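Concretely, the destructuring at the top would need the corrected names (and GraphQlSchema in the same block appears to have the same problem):
const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLInt,
  GraphQLNonNull,
} = graphql;
The new GraphQlObjectType(...) and new GraphQlSchema(...) call sites would then need the same spelling fix.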
| |
doc_292
|
So far I know I can do this:
def typed[A : c.WeakTypeTag]: Symbol = weakTypeOf[A].typeSymbol
object TupleSymbols {
val tuple2 = typed[(_, _)]
val tuple3 = typed[(_, _, _)]
// ... and so on
}
Is there a more sane approach than the above monstrosity?
A:
import c.universe._
import Flag._
def tuple(i: Int) = {
def list = (1 to i).toList
c.typecheck(
ExistentialTypeTree(
tq"(..${list map (i => Ident(TypeName(s"_$i")))})", //just like (_,_, ...)
list map (i =>
TypeDef(Modifiers(DEFERRED | SYNTHETIC), TypeName(s"_$i"), List(), TypeBoundsTree(EmptyTree, EmptyTree))
)
)
)
}
//test
println(tuple(2).tpe <:< typeOf[(_, _)])//true
println(tuple(3).tpe <:< typeOf[(_, _, _)])//true
edit1 :
def asTuple(tpe: Type): Boolean = {
def allTuple = 1 to 22 map { i =>
val typeNames = 1 to i map (e => TypeName(s"_$e"))
tq"(..$typeNames) forSome {..${typeNames.map(e => q"type $e")} }"
} map (t => c.typecheck(t).tpe)
allTuple.exists(_ <:< tpe)
}
//test
println(asTuple(typeOf[Int])) // false
println(asTuple(typeOf[(_, _)])) // true
println(asTuple(typeOf[(_, _,_)])) // true
A: As per the suggestion in the comments, this can be nicely handled with simple matching.
def isTuple(tpe: Type): Boolean = {
tpe.typeSymbol.fullName.startsWith("scala.Tuple")
}
| |
doc_293
|
Some of these work, and some come back with an error "PDF_VALIDATION_FAILED".
We have narrowed it down to the PDF document itself, and have watered down the original template to contain just four fields. We have watered down our excel spreadsheet to four basic fields using (for example) "a,1,a,2" for one input and "aa,1,a,2" as another, however one will consistently work and one will consistently fail.
Viewing the generated PDF's in a local PDF viewer (Adobe and PDF XChange Editor) the document appears fine, viewing the documents side by side in a hex/diff editor (WinMerge) shows minor differences in the streams being sent (as expected).
Is there any documentation on what validation is being performed on the PDF so we can emulate this locally and make sure our PDF's are valid before sending to the docusign API?
Thanks
Template
A: I am able to successfully create an envelope with the Documents you have provided.
See here for the complete CreateEnvelope request that I have used
I have used these documents that you have provided
*
*Working PDF
*Non Working PDF
| |
doc_294
|
How would I go about calculating an average for, say, every wednesday?
I would first have to see how many unique wednesdays are mentioned, then see how much the total value is for each specific wednesday (there can be multiple entries for one day, but the time of day is not relevant)
Then I would have to add up all those values, and devide them by the amount of wednesdays.
I know this is oddly specific, that's why I probably haven't found anything about it. How would I go about doing this in PHP?
A: Let's say you have a table with the following columns: id, timestamp, amount.
You have to select all the rows from the table and put them in an array.
Once that's done, go through the array:
<?php
$total = 0;
$bingo = 0;
foreach($a as $i)
{
if(date( "l" ,$i["timestamp"]) == "Wednesday")
{
$bingo++;
$total += $i["amount"];
}
}
$avg = $total/$bingo;
?>
A: You could use a query like this to get daily averages for each weekday.
SELECT DAYOFWEEK(date_only), AVG(total) FROM (
/*The subquery will return total amounts for each distinct day
regardless of how many records there are for each day. */
SELECT
DATE(your_table.timestamp_column) as date_only,
SUM(your_table.amount_column) as total
FROM your_table
GROUP BY DATE(your_table.timestamp_column)) as daily_totals
GROUP BY DAYOFWEEK(date_only);
DATE() will take only the date part from the timestamp. Grouping by this value and using the SUM() aggregate function will get you totals for each day.
DAYOFWEEK() returns a numeric value from a date representing the day of the week (1=Sunday, etc.). Grouping by this value and using the AVG() aggregate function will get you the average total for each weekday.
How to handle the results in PHP will depend on your specific needs.
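For example, if the two selected expressions are given aliases (say AS weekday and AS avg_total), fetching the results with PDO might look roughly like this; $pdo is assumed to be an existing connection:
<?php
$sql = "..."; // the query above, with AS weekday and AS avg_total added
foreach ($pdo->query($sql) as $row) {
    // DAYOFWEEK(): 1 = Sunday, 4 = Wednesday, 7 = Saturday
    echo $row['weekday'] . ': ' . $row['avg_total'] . "\n";
}
?>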
| |
doc_295
|
The attachment is a vCalendar file.
Here's the code:
StringBuilder sbCalendar = new StringBuilder();
DateTime dtStart = eventDate;
DateTime dtEnd = eventDate;
sbCalendar.AppendLine("METHOD: REQUEST");
sbCalendar.AppendLine("BEGIN:VCALENDAR");
sbCalendar.AppendLine("PRODID:-//DP//NET");
sbCalendar.AppendLine("MIMEDIR//ENVERSION:1.0");
sbCalendar.AppendLine("METHOD:REQUEST");
sbCalendar.AppendLine("BEGIN:VEVENT");
sbCalendar.AppendLine("DTSTAMP:" + dtStart.ToUniversalTime().ToString("yyyyMMdd\\THHmmss\\Z"));
sbCalendar.AppendLine("DTSTART:" + dtStart.ToUniversalTime().ToString("yyyyMMdd\\THHmmss\\Z"));
sbCalendar.AppendLine("DTEND:" + dtEnd.ToUniversalTime().ToString("yyyyMMdd\\THHmmss\\Z"));
sbCalendar.AppendLine("LOCATION:" + eventLocation);
sbCalendar.AppendLine("DESCRIPTION;ENCODING=QUOTED-PRINTABLE:" + eventBody);
sbCalendar.AppendLine("SUMMARY:" + eventSubject);
sbCalendar.AppendLine("PRIORITY:3");
sbCalendar.AppendLine("UID:" + Guid.NewGuid().ToString());
sbCalendar.AppendLine("ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION:MAILTO:[email protected]");
sbCalendar.AppendLine("ATTENDEE;ROLE=CHAIR;PARTSTAT=ACCEPTED:MAILTO:[email protected]");
sbCalendar.AppendLine("CLASS:PUBLIC");
sbCalendar.AppendLine("ORGANIZER:MAILTO:[email protected]");
sbCalendar.AppendLine("SEQUENCE:0");
sbCalendar.AppendLine("STATUS:TENTATIVE");
sbCalendar.AppendLine("END:VEVENT");
sbCalendar.AppendLine("END:VCALENDAR");
byte[] byteArray = Encoding.UTF8.GetBytes(sbCalendar.ToString());
Stream contentStream = new MemoryStream(byteArray);
SmtpClient smtp = new SmtpClient("localhost");
MailMessage memo = new MailMessage();
memo.IsBodyHtml = true;
memo.From = new MailAddress("[email protected]");
foreach (string emailAddress in emailAddresses)
{
memo.To.Add(emailAddress);
}
memo.Body = messageBody;
memo.Subject = messageSubject;
Attachment attachment = new Attachment(contentStream, "termenLitigiu.ics", "text/calendar");
attachment.TransferEncoding = System.Net.Mime.TransferEncoding.Base64;
memo.Attachments.Add(attachment);
smtp.Send(memo);
This works and does what it is supposed to do: it sends a working (recognized by Outlook) vCalendar file.
The problem is that in the body of the mail, besides the contents of the messageBody parameter, the contents of the attached file also appear, something like this:
From: sender
Sent: Tuesday, October 05, 2010 4:59 PM
To: someemail
messageBody contents here
METHOD: REQUEST
BEGIN:VCALENDAR
PRODID:-//DP//NET
MIMEDIR//ENVERSION:1.0
METHOD:REQUEST
BEGIN:VEVENT
DTSTAMP:20101006T135934Z
DTSTART:20101006T135934Z
DTEND:20101006T135934Z
LOCATION:Minstead
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:My first meeting
SUMMARY:Learning Calendaring and Scheduling
PRIORITY:3
UID:721d9e3c-9010-47f5-9ad0-83c38cb0cbb7
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION:MAILTO:someemail
ATTENDEE;ROLE=CHAIR;PARTSTAT=ACCEPTED:MAILTO:someemail
CLASS:PUBLIC
ORGANIZER:MAILTO:someemail
SEQUENCE:0
STATUS:TENTATIVE
END:VEVENT
END:VCALENDAR
I want to get rid of that text, and display only the contents of my messageBody parameter and have the vCalendar file just attached to the mail message.
How can I do this? Is this an Outlook issue or a coding issue?
Edit: I'm only interested in displaying the message in Microsoft Outlook. I've looked into the source of the message (in Outlook right click > View Source) and the text i want to get rid of is within the <body></body> html tags of the message)
A: Found the solution:
http://msdn.microsoft.com/en-us/library/system.net.mime.dispositiontypenames.attachment.aspx
In the constructor of the Attachment I replaced "text/calendar" with MediaTypeNames.Application.Octet and set the DispositionType to Attachment as opposed to Inline which was probably the default value.
Attachment attachment = new Attachment(contentStream, "termenLitigiu.ics", MediaTypeNames.Application.Octet);
attachment.ContentDisposition.DispositionType = DispositionTypeNames.Attachment;
It now gives me a clean mail message with the body of the message containing what it should and a working .ics attachment.
| |
doc_296
|
Is there a simple solution to doing this? I was thinking all I would need is an input field where the user can enter a code; it gets verified and then allows a download, but that sounds too simple. Is this even possible with something like PHP (complete beginner)?
I'm willing to learn or I would've packed it in already so any advice would be great. Thanks.
Edit:
Thanks to some great advice I was able to create this!
A: If you wanted to do it at a very simple level, it is not much more than you describe it to be. You would need to familiarize with PHP and MySQL or some other database, but it isn't too difficult to create.
You need to figure a few things out, such as how you want to limit the codes: 3 downloads in the first 24 hours to allow for failed downloads, restricting by IP, or strictly one full download. Since you have the list of the 1000 codes given, you will probably want to base your system around having codes pre-generated and inserted in the database, rather than having an algorithm that can validate the codes on the fly and then check for already used codes.
You would want to store the download(s) in a directory that is not accessible from the web, and the php script would validate the code, and if valid serve the download to the user.
You can probably look to the wordpress plugin's database structure for other ideas, but I think at the very least you would need:
download_code (the code itself, probably primary key or at least index)
download_file (optional, the name/path of the file this code allows them to download)
redeemed (0 if not redeemed, 1 if redeemed)
redemption_time (timestamp of the first, or last redemption based on your requirements)
download_count (how many times downloaded if allowing more than 1)
ip_address (ip address of the redeemer)
email_address (email address if you want to collect it, or give user the option)
download_url (the unique string for the download url. this could be one way to implement download verification beyond the code, but is not necessary)
You would then need to create an html page with the text box for entering the code, and any other optional data you wish to collect. The form would submit to your PHP script.
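A bare-bones version of that form might look like this (the action name and field names are just placeholders):
<form method="post" action="redeem.php">
    <input type="text" name="download_code" placeholder="Enter your code">
    <input type="email" name="email_address" placeholder="Email (optional)">
    <input type="submit" value="Download">
</form>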
When the PHP script receives a form post, it would validate all of the data (i.e. email address if you were collecting it). Once all data is valid, you read from the database looking for a code matching what the user entered.
If no data was found with the code, send them back to the form to try re-entering it. If a record is found, you can check the redeemed value from the database and see if the code has been used or not. If it has, this is where you can use custom logic to decide if they are still within their download window, the ip address is the same, or whatever criteria you want to use to allow re-downloads.
If it has been redeemed, show an error message. If it is still okay to download, you can serve a download by reading the file and sending it to the browser (see example #1 here).
At some point you will have to update your database to set the redeemed flag to 1 and update the other values such as timestamp and download count. You can either run this code before you serve the download, or you can run it after the download is served. In some cases if the download was cut off, the last portion of your script won't run and therefore won't update redeemed or download_count. This may or may not be what you want, so you can decide where you want to do the updating.
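A minimal sketch of that flow, assuming a PDO connection in $pdo, a table called download_codes with the columns listed above, and a file stored outside the web root (all names here are illustrative):
<?php
$code = isset($_POST['download_code']) ? trim($_POST['download_code']) : '';

$stmt = $pdo->prepare('SELECT * FROM download_codes WHERE download_code = ?');
$stmt->execute([$code]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$row) {
    exit('Invalid code, please try again.');
}
if ($row['redeemed']) {
    exit('This code has already been used.');
}

// Mark the code as redeemed before serving the file.
$update = $pdo->prepare('UPDATE download_codes SET redeemed = 1, redemption_time = NOW(), download_count = download_count + 1 WHERE download_code = ?');
$update->execute([$code]);

// Serve the file from outside the web root.
$file = '/path/outside/webroot/' . $row['download_file'];
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));
readfile($file);
?>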
Eventually you can update it to include an administration panel, but in the beginning all configuration could be done within the php script or config file. And eventually you could update it to use flash or some other technology to download the file and show progress bars etc.
Hopefully that will give you some idea on whether or not you want to try to implement it. Otherwise you could always search php on Hotscripts to see if there is an already rolled standalone version of what you want.
| |
doc_297
|
What's the difference between the two files?
| |
doc_298
|
How do I fix this error? It seems to be the only thing stopping the rest of the page from loading.
<?php
$link=mysql_connect("*************","************","***********");
if (!$link)
{
echo "Failed to connect to MySQL: " . mysql_error();
}
mysql_error();
mysql_select_db(horseshoevotes);
$sql = "INSERT INTO `submissions` (Restaurant, Restaurant Location, Horseshoe)
VALUES('$_POST[restaurant]','$_POST[location]','$_POST[horseshoe]','$_POST[email]')";
if (!mysql_query($sql))
{
die('Error: ' . mysql_error($link));
}
echo "Submitted";
mysql_query($sql);
mysql_close($link);
?>
A: You haven't actually done anything to the database - you've only put the query into a variable.
Add
mysql_query($sql) or die ("Error! ".mysql_error());
before you call mysql_close.
A: Use like this instead of $sql
$restaurant = mysql_real_escape_string($_POST[restaurant]);
$location = mysql_real_escape_string($_POST[location]);
$horseshoe = mysql_real_escape_string($_POST[horseshoe]);
$email = mysql_real_escape_string($_POST[email]);
mysql_query("INSERT INTO `submissions`
(Restaurant, Location, Horseshoe, Email)
VALUES('$restaurant','$location','$horseshoe','$email')
");
A: To learn quickly how to use MySQL, start using a wrapper class.
I recommend: https://github.com/ajillion/PHP-MySQLi-Database-Class
Have used this myself when I first switched from MySQL to MySQLi. Made the learning curve lots smaller.
Have a look at the source (it's hosted on GitHub so you don't need to download it to have a glance).
Also do not continue to use MySQL. The successor MySQLi is more secure due to prepared statements and some other improvements. PDO would be even better, but it's more advanced.
Give it a try.
A: use
`
around column names, that is this:
INSERT INTO `submissions` (Restaurant, Restaurant Location, Horseshoe)
should be this:
INSERT INTO `submissions` (`Restaurant`, `Restaurant Location`, `Horseshoe`)
I'm guessing the next thing you will see is the error about column count not matching with values. Please proof read and dry run your queries.
| |
doc_299
|
Well, let's begin. I want to add some modifiers to different Minecraft events (such as how much damage is dealt in the EntityDamageByEntityEvent) according to a level that is stored in a MySQL database.
This level system would have 100 levels (1 to 100), with 50 being the default level; for example, in the case of the EntityDamageByEntityEvent I'd like to change the damage so:
a) If the player had level 50, the damage wouldn't be modified (it would just be multiplied by 1).
b) If the player had level 1, the damage dealt would be multiplied by 1/3.
c) If the player had level 100, the damage would be multiplied by 3.
So far, so good, but how could I do it in the case of a player with level 17 or 89? What I want to know is how to convert, let's say, a 1-to-100 scale into a 1/3-to-3 scale, with 50 = 1... A little bit mind-blowing...
Thanks in advance, hope you understood me!
A: If you plot the points (1, 1/3), (50, 1), and (100, 3) on a graph, you'll see that they don't form a line. So it's not clear how your damage-modifier function should behave between those points.
You could do it in piecewise-linear fashion, with linear interpolation from 1 to 50 and (separately) from 50 to 100:
final double scalingFactor;
if (level < 50) {
// Convert (1..50) range into (0..1) range.
final double interp = (level - 1) / 49.0;
// Convert (0..1) range into (1/3..3/3) range.
scalingFactor = ((interp * 2.0) + 1.0) / 3.0;
}
else {
// Convert (50..100) range into (0..1) range.
final double interp = (level - 50) / 50.0;
// Convert (0..1) range into (1..3) range.
scalingFactor = (interp * 2.0) + 1.0;
}
return damage * scalingFactor;
Or, if you're willing to have the 1/3 damage be at level 0 instead of 1, you could use an exponential function:
// Convert (0..100) range into (-1..+1) range.
final double interp = (level - 50) / 50.0;
// 3^(-1) == 1/3
// 3^0 == 1
// 3^(+1) == 3
final double scalingFactor = Math.pow(3.0, interp);
return damage * scalingFactor;
With the piecewise-linear approach, the difference in damage from one level to the next is the same for levels between 1 and 50, and the same for levels between 50 and 100, but the difference in damage from level 50 to 51 is significantly greater than from level 49 to 50.
The exponential function gives you a smooth curve, where the difference in damage per level increases gradually over the full range.
A: You could do something like this:
@EventHandler
public void damage(EntityDamageByEntityEvent e){
double damage = e.getDamage();
if(e.getDamager() instanceof Player){
int level = //get players level here
double multiplier = (level >= 50) ? (1 + (1.0 / 25) * (level - 50)) : ((1.0 / 3) + (1.0 / 75) * (level - 1));
e.setDamage(damage * multiplier);
}
}
This will multiply the damage by 1/3 if the player's level is 1, by 1 if the level is 50, and by 3 if the level is 100. This will also work for all numbers in between.
Here's how it works:
*
*There are 50 numbers between 50 (inclusive) and 100 (exclusive), and if you want to add 2 gradually in that range, you can use the fraction 2/50, or 1/25 every time the number goes up.
*You can do the same thing, just removing 2/3 gradually within 50 values, you could remove 2/150, or 1/75 every time the number goes down.
So, for example, a level of 5 would give you 1/3 + (1/75 * (5 - 1)) = 0.386666... = 29/75 of the original damage
|