doc_0
for the list: | ?- L=[a,b,c]. L = [a,b,c] ? yes Is there a means to display: L = .(a, .(b, .(c, []))). A: Normally, write_canonical(List) or ?- write_term(List, [quoted(true), ignore_ops(true)]), as pointed out in the comments. Since SWI-Prolog decided to do things differently, this is not good enough: ?- write_canonical([a]). [a] true. ?- write_term([a], [quoted(true), ignore_ops(true)]). [a] true. ?- write_term([a], [dotlists(true)]). .(a,[]) true. See the documentation on write_term/2, pay attention to the options brace_terms(Bool) and dotlists(Bool). But beware: if you start SWI-Prolog 7 normally, the ./2 is not the list functor any more! ?- L = .(a, []). ERROR: Type error: `dict' expected, found `a' (an atom) % WHAT? ?- L = '[|]'(a, []). L = [a]. If you start it with swipl --traditional, things are back to normal, sort of: $ swipl --traditional Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.4-32-g9311e51) Copyright (c) 1990-2015 University of Amsterdam, VU Amsterdam SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions. Please visit http://www.swi-prolog.org for details. For help, use ?- help(Topic). or ?- apropos(Word). ?- L = .(a, []). L = [a]. You still cannot use write_canonical(List) or write_term(List, [quoted(true), ignore_ops(true)]). Read the linked section of the SWI-Prolog documentation for details and rationale. As a word of advice, if you decide to use SWI-Prolog, stick to SWI-Prolog 7 with the defaults and only use write_term(List, [dotlists(true)]) if you need to communicate with another Prolog implementation. The usual list notation, [a, b, ...], should be good enough in most conventional situations.
doc_1
When I debug (Visual Studio 2010), the web part that needs the query string runs first and throws an exception. So, how can I set priorities to run A first instead of B? Is there another way? A: It sounds to me like you either have the web parts on the same page or you have "web part(B)" on your startup page and "web part(A)" on a different page. Additionally, you should probably be a little more defensive by ensuring your request object has the query parameter before attempting to use it.
doc_2
1) Google Analytics. Every time I do a Turbolinks navigation to a page, another Google Analytics JS tag gets stacked into my code at the top of the page: <script async="" src="//www.google-analytics.com/analytics.js"></script> <script async="" src="//www.google-analytics.com/analytics.js"></script> <script async="" src="//www.google-analytics.com/analytics.js"></script> ... It will keep going on. Also, my Pusher connections keep stacking up and hit the limit of 20 for my free account, when there is only 1 user. Just wondering if anyone has dealt with either of these issues, and if anyone has any idea as to what could be causing it, whether it could be caused by the same thing, etc. A: The solution I came up with, at least for the Google Analytics issue, is to place it in the head tag instead of the body. This fixes the stacking. Pusher was the same issue.
doc_3
HTML: <div id="abc"> <h1>hello</h1> <div class="xyz"> <input type="button">Click Me</button> </div> <h1>Hello 1</h1> <div class="xyz"> <input type="button">Click Me</button> </div> </div> jQuery: $(':button').on('click', function(){ alert($(this).closest('.xyz').index()); }); How can I get the index for xyz without counting other siblings? I am getting 1 and 3 (I know why :) ) but I want 0 and 1. E.g.: for the first button click I want 0, and for the second it should be 1. jsFiddle: http://jsfiddle.net/1tqp6tx1/ Much appreciated for any help. A: Try $('.xyz').index($(this).closest('.xyz')); A: Add the class name inside the .index() too. Try this: $(':button').on('click', function(){ alert($(this).closest('.xyz').index('.xyz')); }); Check JSFiddle Demo
doc_4
<mj-text align="center"> © 2018 or &copy; or &#169; </mj-text> and this code prints an unnecessary Â, like this: Â©. How can I print the copyright symbol in MJML? A: You can try it like this: <mj-text align="center">Copyright &#160;&#169;&#160; </mj-text> This works fine with MJML 4.0. A: I use the MJML app and have no problems in using &#169;
doc_5
In the parent component I have a showProgress variable; when I change it to true, I want the child <progress-modal>'s v-model to switch to true. ProgressModal.vue <template> <v-dialog v-model="show" persistent max-width="400"> <v-card :dark="($theme === 'dark')"> <v-card-title class="headline" v-text="title"></v-card-title> <v-divider></v-divider> <div class="text-xs-center mt-2"> <v-progress-circular indeterminate :size="100" :color="$color"></v-progress-circular> <v-card-text v-text="text"></v-card-text> </div> </v-card> </v-dialog> </template> <script> export default { name: 'progress-modal', props: ['title', 'text'], data: () => ({ show: true }), methods: { } } </script> I already tried to use <progress-modal v-model="showProgress"> instead of v-model in v-dialog but it does not work :( A: Pass value prop as value to v-dialog component, and re-emit input from v-dialog component: //CustomDialog.vue <v-dialog :value="value" @input="$emit('input', $event)"> </v-dialog> ... props:['value'] and add v-model to your parent (custom dialog) //Parent.vue <custom-dialog v-model="showProgress"> Example A: To enable usage of v-model by the parent, you have to define a value prop in the child and use it. <template> <v-dialog v-model="value" persistent max-width="400"> ... </template> <script> export default { name: 'progress-modal', props: ['title', 'text', 'value'], // added 'value' data: () => ({ ... </script> This way, when you use: <progress-modal v-model="showProgress"> ...the value inside progress-modal will have the value of parent's showProgress. Keeping it named show To use another internal name instead of value you can declare the model option in the component. <template> <v-dialog v-model="show" persistent max-width="400"> ... </template> <script> export default { name: 'progress-modal', props: ['title', 'text', 'show'], // added 'show' model: { // added model option prop: 'show' // }, // data: () => ({ }), // in this case, remove show from data ... </script>
doc_6
When I add data-rel="popup" to my button the page becomes empty with a gray circle in the center. Do you know what's wrong? <!DOCTYPE html> <html> <head> <title></title> <meta name=viewport content="user-scalable=no,width=device-width" /> <link rel=stylesheet href="css/jquery.mobile-1.3.2.css" /> <script src="js/jquery-1.6.1.min.js"></script> <script src="js/jquery.mobile-1.3.2.js"></script> </head> <body> <div data-role=page id=win1> <div data-role=header> <h1></h1> </div> <div data-role=content> xxxxxxx </div> <div data-role="footer" class="ui-bar"> <a href="#popupBasic" data-role="button" data-rel="popup" data-icon="plus">My button</a> </div> <div data-role="popup" id="popupBasic"> <p>This is a completely basic popup, no options set.<p> </div> </body> </html>
doc_7
When running the sonar-eclipse plugin to analyze my Android project, I got an error log like this: 19:11:02.596 INFO - Execute Findbugs 3.0.0 done: 12857 ms 19:11:02.650 INFO - Sensor FindbugsSensor done: 12913 ms 19:11:02.651 INFO - Sensor CpdSensor... 19:11:02.651 INFO - SonarEngine is used for java 19:11:02.652 INFO - Cross-project analysis disabled 19:11:02.809 INFO - Sensor CpdSensor done: 158 ms 19:11:02.809 INFO - Sensor MantisSensor... Exception in thread "main" org.sonar.runner.impl.RunnerException: Unable to execute Sonar at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91) at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75) at java.security.AccessController.doPrivileged(Native Method) at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69) at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50) at org.sonar.runner.impl.BatchLauncherMain.execute(BatchLauncherMain.java:41) at org.sonar.runner.impl.BatchLauncherMain.main(BatchLauncherMain.java:59) Caused by: Access to the secured property 'sonar.mantis.login.secured' is not possible in preview mode. The SonarQube plugin which requires this property must be deactivated in preview mode. How do I fix this error? A: You have to deactivate the Mantis plugin in preview mode. Through the SonarQube web interface, log in as administrator and then go to Settings > General > General and update the "Plugins excluded for Preview and Incremental modes" property with the key of the Mantis plugin.
doc_8
@FindBys( { @FindBy(className = "class1"), @FindBy(className = "class2")} ) Please guide me, someone. A: You have multiple ways: * *Using lambda and CLASS_NAME: WebDriverWait(driver,20).until(lambda driver: driver.find_element(By.CLASS_NAME, "class1") or driver.find_element(By.CLASS_NAME, "class2")) *Using CSS_SELECTOR: WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".class1, .class2"))) *Note: You have to add the following imports: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC References You can find a couple of relevant detailed discussions in: * *Any built-in way for branching waits with OR conditions? *WebDriverWait for multiple conditions (OR logical evaluation)
doc_9
I'm trying to connect to s-cassandra (which is a stubbed Cassandra, as can be reviewed here) with a DataStax Node.js Cassandra driver. For some reason, passing "127.0.0.1:8042" as a contact point to the driver results in a DriverInternalError (though sometimes it does work randomly and I haven't yet figured out why it sometimes does and sometimes doesn't). The DriverInternalError I get: {"name": "DriverInternalError", "stack": "...", "message": "Local datacenter could not be determined", "info": "Represents a bug inside the driver or in a Cassandra host." } This is what I see in the Cassandra driver's log: log event: info -- Adding host 127.0.0.1:8042 log event: info -- Getting first connection log event: info -- Connecting to 127.0.0.1:8042 log event: verbose -- Socket connected to 127.0.0.1:8042 log event: info -- Trying to use protocol version 4 log event: verbose -- Sending stream #0 log event: verbose -- Sent stream #0 to 127.0.0.1:8042 {"name":"application-storage","hostname":"Yuris-MacBook-Pro.local","pid":1338,"level":30,"msg":"Kafka producer is initialized","time":"2016-08-05T12:53:53.124Z","v":0} log event: verbose -- Received frame #0 from 127.0.0.1:8042 log event: info -- Protocol v4 not supported, using v2 log event: verbose -- Done receiving frame #0 log event: verbose -- disconnecting log event: info -- Connection to 127.0.0.1:8042 closed log event: info -- Connecting to 127.0.0.1:8042 log event: verbose -- Socket connected to 127.0.0.1:8042 log event: info -- Trying to use protocol version 2 log event: verbose -- Sending stream #0 log event: verbose -- Sent stream #0 to 127.0.0.1:8042 log event: verbose -- Received frame #0 from 127.0.0.1:8042 log event: info -- Connection to 127.0.0.1:8042 opened successfully log event: info -- Connection pool to host 127.0.0.1:8042 created with 1 connection(s) log event: info -- Control connection using protocol version 2 log event: info -- Connection acquired to 127.0.0.1:8042, refreshing nodes list log event: info -- Refreshing local and peers info log event: verbose -- Sending stream #1 log event: verbose -- Done receiving frame #0 log event: verbose -- Sent stream #1 to 127.0.0.1:8042 log event: verbose -- Received frame #1 from 127.0.0.1:8042 log event: warning -- No local info provided log event: verbose -- Sending stream #0 log event: verbose -- Done receiving frame #1 log event: verbose -- Sent stream #0 to 127.0.0.1:8042 log event: verbose -- Received frame #0 from 127.0.0.1:8042 log event: info -- Peers info retrieved log event: error -- Tokenizer could not be determined log event: info -- Retrieving keyspaces metadata log event: verbose -- Sending stream #1 log event: verbose -- Done receiving frame #0 log event: verbose -- Sent stream #1 to 127.0.0.1:8042 log event: verbose -- Received frame #1 from 127.0.0.1:8042 log event: verbose -- Sending stream #0 log event: verbose -- Done receiving frame #1 log event: verbose -- Sent stream #0 to 127.0.0.1:8042 log event: verbose -- Received frame #0 from 127.0.0.1:8042 log event: info -- ControlConnection connected to 127.0.0.1:8042 and is up to date I've tried playing with the firewall and open applications but that didn't help, though sometimes it does work randomly and I still haven't figured out why.
I'm on Mac OS X El Capitan. A: The solution that helped me: I needed to prime the system.local table with a prime-query-single { query: 'prime-query-single', header: {'Content-Type': 'application/json'}, body: { "when": { "query": "SELECT * FROM system.local WHERE key='local'" }, "then": { "rows": [ { "cluster_name": "custom cluster name", "partitioner": "org.apache.cassandra.dht.Murmur3Partitioner", "data_center": "dc1", "rack": "rc1", "tokens": [ "1743244960790844724" ], "release_version": "2.0.1" } ], "result": "success", "column_types": { "tokens": "set<text>" } } } }
doc_10
"Runtime Error: 9, Subscript out of range" its highlighting the first variable declaration. At first I thought it was due to the wrong datatype but changing and playing around with that had no luck. I also tried both Cells & Range Public vFolderPath As String Public vCMFNewPath As String Public vKBNewPath As String Public vDPI As Integer Private Sub SetGlobal() Dim vGo As String Dim vTemplateLocation As String Dim vCMFFilename As String Dim vKBFilename As String Dim vDriver As String Dim vPKG As String vDPI = Workbooks("tools.xlsm").Sheets("SETTINGS").Range("B2").Value vFolderPath = Workbooks("tools.xlsm").Sheets("SETTINGS").Range("B3").Value & "\" Any ideas? A: Code works fine, running from a file called Tools.xslm. with a tab called Settings, an integer in cell B2 and a string value in cell B3. This works when running from a module in Personal xlsb or from within Tools.xlsm. It works even if you do not declare any of the variables. A: Try below code : Public vFolderPath As String Public vCMFNewPath As String Public vKBNewPath As String Public vDPI As Integer Private Sub SetGlobal() Dim vGo As String Dim vTemplateLocation As String Dim vCMFFilename As String Dim vKBFilename As String Dim vDriver As String Dim vPKG As String Dim wkbSetting As Workbook, shtSetting As Worksheet On Error Resume Next Set wkbSetting = Workbooks("tools.xlsm") On Error GoTo 0 On Error GoTo err_rout If Not wkbSetting Is Nothing Then On Error Resume Next Set shtSetting = wkbSetting.Sheets("SETTINGS") On Error GoTo 0 On Error GoTo err_rout If shtSetting Is Nothing Then Err.Raise Number:=32, Description:="Sheets Settings not found" End If vDPI = CInt(shtSetting.Range("B2").Value) vFolderPath = shtSetting.Range("B3").Value & "\" Else Err.Raise Number:=31, Description:="Workbook - tools.xlsm not found" End If Exit Sub err_rout: MsgBox Err.Description, vbInformation End Sub
doc_11
Note: This stored procedure works fine when run in SQL Server, but throws an exception in Entity Framework. The data reader is incompatible with the specified 'BuyMediaModel.GetSpendBreakdownofBuyer_Result'. A member of the type, 'Impressions', does not have a corresponding column in the data reader with the same name. Here is my SP code. ALTER Procedure [dbo].[GetSpendBreakdownofBuyer] -- exec GetSpendBreakdownofBuyer '134', '2020' @BuyerID varchar(100), @Year varchar(20) AS BEGIN SET FMTONLY OFF SELECT CASE WHEN s.MediaType='Radio' THEN 'Radio' WHEN s.MediaType='Newspaper' or s.MediaType='Magazine' THEN 'Print' WHEN s.MediaType='TV' THEN 'TV' WHEN s.MediaType='Podcast' THEN 'Podcast' WHEN s.MediaType='OOH' THEN 'OOH' WHEN s.MediaType='Digital' THEN 'Digital' ELSE 'Other' END AS 'MediaType', 0 as Impressions, Convert(char(3), od.AdvertisingDate, 0) as 'Month', SUM(od.Price) as 'Budget_Spent' INTO #TempBreakdown FROM tblOrder o JOIN tblOrderDetail od ON o.Order_ID = od.Order_ID JOIN tblMediaSeller s ON s.MediaSeller_ID =od.Seller_ID WHERE o.Buyer_ID in (select cast(item as int) from dbo.SplitString(@BuyerID) ) AND YEAR(od.AdvertisingDate)=@Year AND od.Status in ('Ordered','Approved') GROUP BY Convert(char(3), od.AdvertisingDate, 0), s.MediaType UNION SELECT 'Other' as Mediatype, 0 as Impressions, [Month], Sum(BudgetSpent) FROM tblBudgetSpent WHERE Buyer_ID in (select cast(item as int) from dbo.SplitString(@BuyerID) ) AND [Year]=@Year GROUP BY [Month] SELECT SUM(Budget_Spent) AS Total, --0 as Impressions, [month] INTO #TempTotal FROM #TempBreakdown GROUP BY [month] SELECT b.MediaType, 0 as Impressions, --CASE WHEN -- t.Total <> 0 -- THEN((b.Budget_Spent/t.Total)*100) -- ELSE -- 0 END AS budget_spent, b.Budget_Spent, b.Month FROM #TempBreakdown b JOIN #TempTotal t ON b.Month=t.Month UNION SELECT 'Sales', 0 as Impressions, Sales, [Month] FROM tblSalesFigure WHERE BuyerID in (select cast(item as int) from dbo.SplitString(@BuyerID) ) AND Year=@Year UNION SELECT 'Impressions', Impressions as Impressions, 0 , [Month] FROM tblMatrics WHERE Buyer_ID in (select cast(item as int) from dbo.SplitString(@BuyerID) ) AND Year=@Year END
doc_12
I have a list of MAC addresses written in a file. I need to get the list of MAC addresses on the network, compare them with the addresses from the file, and then print to stdout the addresses from the file that are not found in the list of addresses from the network. And in the end, update the file with those addresses. For now I managed to read a file that I give as an argument: import sys with open(sys.argv[1], 'r') as my_file: lines = my_file.read() my_list = lines.splitlines() I am trying to read the MAC addresses by running the arp process from Python: import subprocess addresses = subprocess.check_output(['arp', '-a']) But with this code I get this: Internet Address Physical Address Type 156.178.1.1 5h-c9-6f-78-g9-91 dynamic 156.178.1.255 ff-ff-ff-ff-ff-ff static 167.0.0.11 05-00-9b-00-00-10 static 167.0.0.123 05-00-9b-00-00-ad static ..... How can I filter here so I can get only the list of MAC addresses? Or can I check the two lists like this to see if a MAC address from the file is on the network and, if not, print it out? A: Starting with what you have: networkAdds = addresses.splitlines()[1:] networkAdds = set(add.split(None,2)[1] for add in networkAdds if add.strip()) with open(sys.argv[1]) as infile: knownAdds = set(line.strip() for line in infile) print("These addresses were in the file, but not on the network") for add in knownAdds - networkAdds: print(add) A: To fetch MAC addresses you can use this regular expression: import re addresses = """ Internet Address Physical Address Type 156.178.1.1 5f-c9-6f-78-f9-91 dynamic 156.178.1.255 ff-ff-ff-ff-ff-ff static 167.0.0.11 05-00-9b-00-00-10 static 167.0.0.123 05-00-9b-00-00-ad static .....""" print(re.findall(('(?:[0-9a-fA-F]{1,}(?:\-|\:)){5}[0-9a-fA-F]{1,}'),addresses)) Output: ['5f-c9-6f-78-f9-91', 'ff-ff-ff-ff-ff-ff', '05-00-9b-00-00-10', '05-00-9b-00-00-ad'] As far as I can see, your MAC addresses are not following the naming convention used in this regexp, so it is up to you to play with the [0-9a-fA-F] part. A: Use split() and count the amount of - characters in each item: text = """ Internet Address Physical Address Type 156.178.1.1 5h-c9-6f-78-g9-91 dynamic 156.178.1.255 ff-ff-ff-ff-ff-ff static 167.0.0.11 05-00-9b-00-00-10 static 167.0.0.123 05-00-9b-00-00-ad static .....""" for item in text.split(): if item.count('-') == 5: print item Output: 5h-c9-6f-78-g9-91 ff-ff-ff-ff-ff-ff 05-00-9b-00-00-10 05-00-9b-00-00-ad You can also do it with list comprehensions (listcomps): print [item for item in text.split() if item.count('-') == 5] Output: ['5h-c9-6f-78-g9-91', 'ff-ff-ff-ff-ff-ff', '05-00-9b-00-00-10', '05-00-9b-00-00-ad'] A: You can parse /proc/net/arp and avoid the need for a subprocess at all: with open("/proc/net/arp") as f, open(sys.argv[1], 'r') as my_file: next(f) mac_addrs = {mac for _, _, _, mac,_, _ in map(str.split, f)} for mac in map(str.rstrip, my_file): if mac not in mac_addrs: print("No entry for addr: {}".format(mac)) /proc/net/arp looks like: IP address HW type Flags HW address Mask Device xxx.xxx.xxx.xxx 0x1 0x2 xx:xx:xx:xx:xx:xx * wlan0 where the fourth column is the mac/hw address. If you were to use arp you might also find arp -an gives you less output to parse.
If you want to add macs that are listed in the network but not in the file you can open with "a+" and append any values not seen to the file at the end: with open("/proc/net/arp") as f, open(sys.argv[1], 'a+') as my_file: next(f) mac_addrs = {mac for _, _, _, mac,_, _ in map(str.split, f)} for mac in map(str.rstrip, my_file): if mac not in mac_addrs: print("No entry for addr: {}".format(mac)) else: mac_addrs.remove(mac) my_file.writelines("{}\n".format(mac)for mac in mac_addrs)
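As a footnote to the answers above, here is a minimal end-to-end sketch combining them (subprocess to run arp, a regex filter, then a set difference). It assumes dash-separated MAC addresses as in the question's sample output, and that the file path comes from sys.argv as in the question:

import re
import subprocess
import sys

# MACs currently visible on the network, via `arp -a`.
output = subprocess.check_output(['arp', '-a']).decode()
# Assumes dash-separated MAC addresses, as in the question's sample output.
network_macs = set(re.findall(r'(?:[0-9a-fA-F]{2}-){5}[0-9a-fA-F]{2}', output))

# Known MACs from the file given as the first argument.
with open(sys.argv[1]) as f:
    known_macs = set(line.strip() for line in f if line.strip())

# Print the file's addresses that were not found on the network.
for mac in known_macs - network_macs:
    print(mac)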
doc_13
function[L, U] = myLU(A,B, pos) %A = Mtrix that becomes U %B = Matrix that becomes L tmp_L = B; [x,y] = size(A); if pos > x L = B; U = A; return else pos %<-- just to see if it iterates through everything [tmp_U,tmp_L] = elimMat(A,pos); myLU(tmp_U,tmp_L, pos+1); end L = tmp_L; U = tmp_U; end where elimMat(A, pos) returns the elimination matrix for column pos, as well as another matrix, which will end up being the matrix of multipliers. What I tried doing is then finding the LU factorization of this matrix A. Since elimMat returns L and U (this works; if I do it manually it works), I had to make a function that allows me to do it automatically without using a for loop. I thought I would do it recursively. What I ended up doing is adding another variable B to the function so that I can store intermediate values of my matrix obtained in each step and put it all together later. So here is my question. Am I implementing the recursion wrong? And if I am, how can I fix it? The other thing I wanted to ask is how can I implement this so I do not need variable B as an additional input, and only use the existing variables, or variables previously defined, to find the solution? I would really like only two inputs in my function: the matrix name and the starting index. Here is elimMat if it helps: function [M,L] = elimMat(A,k) %find the size of the matrix [x,y] = size(A); tmp_mat = zeros(x,y); %M = The current matrix we are working on for Elimination -> going to %become U. %L = The L part of the matrix we are working on. Contains all the %multipliers. This is going to be our L matrix. for i = 1:x mult = A(i,k)/A(k,k); if i > k tmp_mat(i,k) = mult; P = A(k,:)*mult; A(i,:) = A(i,:)-P; elseif i == k tmp_mat(k,k) = 1; end end M = A; L = tmp_mat; end Thanks for any feedback you can provide. Here is the output, what I get vs. what it should be: U = [1 2 2; 0 -4 -6; 0 -2 -4] vs. U = [1 2 2; 0 -4 -6; 0 0 2], and L = [1 0 0; 4 0 0; 4 0 0] vs. L = [1 0 0; 4 1 0; 4 0.5 1]. As you can see, only the first column is changed. A: You forgot to catch the output of your recursive call: [tmp_L, tmp_U] = myLU(tmp_U,tmp_L, pos+1); Matlab passes variables by value, so a function cannot change its input variable itself (ok, it can, but it's tricky and unsafe). Your original version didn't return the updated matrices, so the outermost function call encountered the myLU() call, let the recursion unfold and finish, and then went on to use tmp_L and tmp_U as returned from the very first call to elimMAT(A,1). Note that you might want to standardize your functions such that they return U and L in the same order to avoid confusion.
doc_14
I checked the docs in MSDN but to no avail. Would someone know if it is at all possible? Thanks in advance. Cheers A: Refer to this tutorial. You may want to add your domain name to the list of allowed domains. Here is the code snippet from the tutorial's source code. var ListOfAllowedDomains = new Collection<Uri> { // Lists domains that can send tile updates and so forth as push notifications. // Only these authorized domains will be allowed by the shell to push new tiles to the phone new Uri(@"http://YOUR WEB SERVICE'S DOMAIN HERE") // e.g. if you published a webservice at http://foo.com/service1.svc -- put "http://foo.com" here. }; //Register this channel with the shell, pass on authorized domain in way method expects myPushChannel.BindToShellTile(ListOfAllowedDomains); I have fully integrated this into one of my mobile apps and it is working smoothly. If I understand your question correctly, you want to pull these images through a relative URI which is hosted in the service. A: You can use a local resource or a remote resource only to update the background image of a tile; it's not possible to use isolated storage. From MSDN: Background image. You can use a local resource or remote resource for the background image of a Tile. If you want to use a local resource, it must have been installed as a part of the XAP package. For example, it is not possible to download an image, put it into isolated storage, and then use it as a local resource for the background image of the Tile.
doc_15
A: Have a look at map and some STL algorithms: http://www.cplusplus.com/reference/stl/map/ lower_bound Return iterator to lower bound (public member function) upper_bound Return iterator to upper bound (public member function) distance Calculates the number of elements between first and last.
doc_16
An exception has occurred in the compiler (1.8.0_31). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. java.lang.NullPointerException at com.sun.tools.javac.code.Types.isConvertible(Types.java:290) at com.sun.tools.javac.comp.Check.assertConvertible(Check.java:922) at com.sun.tools.javac.comp.Check.checkMethod(Check.java:876) at com.sun.tools.javac.comp.Attr.checkMethod(Attr.java:3838) at com.sun.tools.javac.comp.Attr.checkIdInternal(Attr.java:3615) at com.sun.tools.javac.comp.Attr.checkMethodIdInternal(Attr.java:3522) at com.sun.tools.javac.comp.Attr.checkMethodId(Attr.java:3501) at com.sun.tools.javac.comp.Attr.checkId(Attr.java:3488) at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3370) at com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1897) at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:607) at com.sun.tools.javac.comp.Attr.visitApply(Attr.java:1843) at com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1465) at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:607) at com.sun.tools.javac.comp.Attr.attribExpr(Attr.java:649) at com.sun.tools.javac.comp.Attr.visitVarDef(Attr.java:1093) at com.sun.tools.javac.tree.JCTree$JCVariableDecl.accept(JCTree.java:852) at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:607) at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:676) at com.sun.tools.javac.comp.Attr.attribStats(Attr.java:692) at com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.canLambdaBodyCompleteNormally(DeferredAttr.java:704) at com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.visitLambda(DeferredAttr.java:652) at com.sun.tools.javac.tree.JCTree$JCLambda.accept(JCTree.java:1624) at com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.complete(DeferredAttr.java:605) at com.sun.tools.javac.comp.DeferredAttr$DeferredType.check(DeferredAttr.java:245) at com.sun.tools.javac.comp.DeferredAttr$DeferredType.access$000(DeferredAttr.java:132) at com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode.process(DeferredAttr.java:554) at com.sun.tools.javac.comp.DeferredAttr$DeferredAttrContext.complete(DeferredAttr.java:479) at com.sun.tools.javac.comp.Resolve.rawInstantiate(Resolve.java:578) at com.sun.tools.javac.comp.Resolve.selectBest(Resolve.java:1431) at com.sun.tools.javac.comp.Resolve.findMethodInScope(Resolve.java:1618) at com.sun.tools.javac.comp.Resolve.findMethod(Resolve.java:1689) at com.sun.tools.javac.comp.Resolve.findMethod(Resolve.java:1662) at com.sun.tools.javac.comp.Resolve.findConstructor(Resolve.java:2545) at com.sun.tools.javac.comp.Resolve$11.doLookup(Resolve.java:2514) at com.sun.tools.javac.comp.Resolve$BasicLookupHelper.lookup(Resolve.java:3074) at com.sun.tools.javac.comp.Resolve.lookupMethod(Resolve.java:3325) at com.sun.tools.javac.comp.Resolve.resolveConstructor(Resolve.java:2511) at com.sun.tools.javac.comp.Resolve.resolveConstructor(Resolve.java:2502) at com.sun.tools.javac.comp.Attr.visitNewClass(Attr.java:2097) at com.sun.tools.javac.tree.JCTree$JCNewClass.accept(JCTree.java:1516) at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:607) at com.sun.tools.javac.comp.Attr.attribExpr(Attr.java:649) at com.sun.tools.javac.comp.Attr.visitVarDef(Attr.java:1093) at com.sun.tools.javac.tree.JCTree$JCVariableDecl.accept(JCTree.java:852) at 
com.sun.tools.javac.comp.Attr.attribTree(Attr.java:607) at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:676) at com.sun.tools.javac.comp.Attr.attribClassBody(Attr.java:4342) at com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4252) at com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4181) at com.sun.tools.javac.comp.Attr.attrib(Attr.java:4156) at com.sun.tools.javac.main.JavaCompiler.attribute(JavaCompiler.java:1248) at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:901) at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:860) at com.sun.tools.javac.main.Main.compile(Main.java:523) at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129) at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138) at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:126) at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile(JavacCompiler.java:169) at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:785) at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:347) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:154) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:582) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214) at org.apache.maven.cli.MavenCli.main(MavenCli.java:158) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356) [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] An unknown compilation problem occurred [INFO] 1 error [INFO] ------------------------------------------------------------- [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE I've never seen something like this before. How can I work around this? A: The line which caused the problem: int count = possibleCards.stream().reduce(0, (cnt, c) -> cnt += (c.getTags().contains(tag) ? 
1 : 0), (cnt1, cnt2) -> cnt1 + cnt2); This line of code was supposed to count all card objects which had a specific tag. The bug is reproducible. More details can be found here: https://bugs.openjdk.java.net/browse/JDK-8056038?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab A: Builds of Java 8 older than u25 on some platforms and u31 on others generally have oddball issues where lambdas and unboxing might not play nice together. The workaround may be to explicitly type some of the lambda parameters where unboxing may occur, such as: (Integer count, c) -> count += c.getTags().contains(tag) ? 1 : 0 But in this specific scenario, you might be able to avoid the issue entirely by using a combination of mapToInt() and sum() so that there are clearly no unboxing shenanigans going on. The issue is supposed to be fixed in u40, which, as of the writing of this answer, is not released.
doc_17
While we can extract a copy of a section of an index by specifying row numbers like below: i1=df.index[1:3].copy() Unfortunately we can't extract a copy of a section of an index by specifying the keys (as in the case of the df.loc method). When I try the below: i2=df.index['a':'c'].copy() I get the below error: TypeError: slice indices must be integers or None or have an __index__ method Is there any alternative to call a subset of an index based on its keys? Thank you A: Simplest is loc with index: i1 = df.loc['b':'c'].index print (i1) Index(['b', 'c'], dtype='object') Or it is possible to use get_loc for positions: i1 = df.index i1 = i1[i1.get_loc('b') : i1.get_loc('c') + 1] print (i1) Index(['b', 'c'], dtype='object') i1 = i1[i1.get_loc('b') : i1.get_loc('d') + 1] print (i1) Index(['b', 'c', 'd'], dtype='object') Alternative: i1 = i1[i1.searchsorted('b') : i1.searchsorted('d') + 1] print (i1) Index(['b', 'c', 'd'], dtype='object') A: Try using .loc, see this documentation: i2 = df.loc['a':'c'].index print(i2) Output: Index(['a', 'b', 'c'], dtype='object') or df.loc['a':'c'].index.tolist() Output: ['a', 'b', 'c']
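For context, the answers above assume a frame indexed by letters, something like this minimal sketch (the column values are illustrative, not from the question):

import pandas as pd

# Hypothetical demo frame matching the answers' outputs.
df = pd.DataFrame({'col': [1, 2, 3, 4, 5]}, index=list('abcde'))
print(df.loc['a':'c'].index.tolist())  # ['a', 'b', 'c'] -- label slices are inclusive of the endpoint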
doc_18
Cursor query = getContentResolver().query(MoviesContract.MoviesEntry.CONTENT_URI, null, null, null, null); query.moveToFirst(); while (query.isAfterLast() == false){ Log.d("Test", query.getString(0)); query.moveToNext(); } I'm doing tests on this block of code. When I execute the Log.d line, this error is raised: java.lang.IllegalStateException: Couldn't read row 0, col 0 from CursorWindow. Make sure the Cursor is initialized correctly before accessing data from it. This is how I know my database has content: What am I missing? It's my first time dealing with cursors. A: Found the problem: CursorWindow: Window is full: requested allocation 1369680 bytes, free space 596540 bytes, window size 2097152 bytes I was storing images in the database. I'm going to change my architecture to store the images from the web service with the offline caching of Picasso.
doc_19
I used the function spark_apply() for the same. However, I get the following error when I run my code: Error: Unable to retrieve a spark_connection from object of class data.frame. Below is my code snippet myFunction <- function(sparkdataframe){ inputdf<-collect(sparkdataframe) inputdf<-as.matrix(inputdf) inputdf1<-t(inputdf) doc<-Corpus(VectorSource(inputdf1)) doc<-tm_map(doc,removePunctuation) data.frame(doc = sapply(doc, as.character), stringsAsFactors = FALSE) return(doc) } # Use spark_apply to run function in Spark spark_apply(sparkdataframe,function(e) (myFunction(e))) A: That's because you call collect in the closure: inputdf<-collect(sparkdataframe) The object received by your function is a plain R data.frame. Remove this line completely and replace the following one with: inputdf<-as.matrix(sparkdataframe)
doc_20
I am running my kernel (such as it is) under QEMU, but am finding that when the page fault occurs, it crashes (I'm back to the command prompt). I've been reading and digging, searching for hints that may help me figure out the exact cause. I figure that I am missing something and/or have misunderstood something. Incidentally, when I set the page fault vector up to use an interrupt gate, I can get things to work as I expect them. It is when I try to use a task gate for this that I encounter problems (and yes, I do want to persist in trying to use the task gate). The IDT portion is set up correctly to register a task gate for the page fault vector. Its selector refers to a descriptor for the TSS in the GDT. As for the descriptor for the TSS in the GDT, I am certain that too is set up properly. However, I am not 100% certain if I have the TSS properly populated and have been unable thus far to determine exactly how to set up all of its fields. Some of them such as esp, eip, cs, ds, es, fs, gs, ss and eflags have been relatively straightforward. However, others such as the LDT segment selector are less clear. Must the LDT segment selector be non-zero AND point to an LDT descriptor in the GDT? Which of these fields must be set for the scenario described above? I am having a heck of a time figuring this one out. Any help would be greatly appreciated. A: The LDT is not necessary in a correctly functioning x86 operating system (indeed, it is forbidden in an x86-64 operating system). To avoid using it, set it to zero. One thing in particular to be aware of: the TSS structure on osdev.org is back-to-front (http://wiki.osdev.org/TSS). You'll need to be careful, since getting the TSS wrong will trigger a TSS-fault exception.
doc_21
Is there a way to configure psycopg2 so it will not show the SQL statement or mask some fields? Example: "ERROR", "message": "(psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint \"pksa_account\"\nDETAIL: Key (b_loadid, customer_account_number)=(xx, xxxx) already exists.\n\n[SQL: INSERT INTO ... ... where after INSERT INTO come the actual field names and values. A: We solved this problem using SQLAlchemy to connect (which is using psycopg2 under the hood, by the way), which provides a configuration option when creating the engine: hide_parameters = True, and that's all it took to resolve it. Now we have pretty logs with parameters masked: DETAIL: Key (id)=(4) already exists. [SQL: INSERT INTO xxx.xxx(id, loaddatetime) VALUES (%(id)s, %(loaddatetime)s)] [SQL parameters hidden due to hide_parameters=True] (Background on this error at: http://sqlalche.me/e/13/gkpj)
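For reference, a minimal sketch of the engine setup being described; hide_parameters is passed straight to create_engine (the connection string below is a placeholder, not from the original system):

from sqlalchemy import create_engine

# hide_parameters=True masks bound parameter values in error messages and logs.
# The DSN below is a placeholder.
engine = create_engine(
    'postgresql+psycopg2://user:password@localhost/mydb',
    hide_parameters=True,
)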
doc_22
For example, a record of temperature has a date and a location (coordinates). All of the coordinates are already in the database; what I need to add is the time and the value of the temperature. Values and metadata are in a CSV file. Basically what I'm doing is: * *Get the time through the file's name *Insert the time into the DB, and keep the primary key *Read the file, get the value and coordinates *Select query to get the id of the coordinates *Insert the weather value with foreign keys (time and coordinates) The issue is that the "SELECT id FROM location WHERE latitude = ... AND longitude = ..." is too slow. I have got 230k files and currently one file takes more than 2 minutes to be processed... Edit: by changing the index, it now takes 25 seconds and is still too slow. Moreover, the PreparedStatement is also still slower and I cannot figure out why. private static void putFileIntoDB(String variableName, ArrayList<String[]> matrix, File file, PreparedStatement prepWeather, PreparedStatement prepLoc, PreparedStatement prepTime, Connection conn){ try { int col = matrix.size(); int row = matrix.get(0).length; String ts = getTimestamp(file); Time time = getTime(ts); // INSERT INTO takes 14ms prepTime.setInt(1, time.year); prepTime.setInt(2, time.month); prepTime.setInt(3, time.day); prepTime.setInt(4, time.hour); ResultSet rs = prepTime.executeQuery(); rs.next(); int id_time = rs.getInt(1); //for each column (longitude) for(int i = 1 ; i < col ; ++i){ // for each row (latitude) for(int j = 1 ; j < row ; ++j){ try { String lon = matrix.get(i)[0]; String lat = matrix.get(0)[j]; String var = matrix.get(i)[j]; lat = lat.substring(1, lat.length()-1); lon = lon.substring(1, lon.length()-1); double latitude = Double.parseDouble(lat); double longitude = Double.parseDouble(lon); double value = Double.parseDouble(var); // With this prepared statement, instruction needs 16ms to be executed prepLoc.setDouble(1, latitude); prepLoc.setDouble(2, longitude); ResultSet rsLoc = prepLoc.executeQuery(); rsLoc.next(); int id_loc = rsLoc.getInt(1); // Whereas this block takes 1ms Statement stm = conn.createStatement(); ResultSet rsLoc = stm.executeQuery("SELECT id from location WHERE latitude = " + latitude + " AND longitude =" + longitude + ";" ); rsLoc.next(); int id_loc = rsLoc.getInt(1); // INSERT INTO takes 1ms prepWeather.setObject(1, id_time); prepWeather.setObject(2, id_loc); prepWeather.setObject(3, value); prepWeather.execute(); } catch (SQLException ex) { Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex); } } } } catch (SQLException ex) { Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex); } } What I already did: * *Set two B-tree indexes on table location, on columns latitude and longitude *Drop foreign key constraints The PreparedStatements in the parameters are: // Prepare selection for weather_radar foreign key PreparedStatement prepLoc = conn.prepareStatement("SELECT id from location WHERE latitude = ? AND longitude = ?;"); PreparedStatement prepTime = conn.prepareStatement("INSERT INTO time(dataSetID, year, month, day, hour) " + "VALUES(" + dataSetID +", ?, ? , ?, ?)" + " RETURNING id;"); // PrepareStatement for weather_radar table PreparedStatement prepWeather = conn.prepareStatement("INSERT INTO weather_radar(dataSetID, id_1, id_2, " + variableName + ")" + "VALUES(" + dataSetID + ", ?, ?, ?)"); Any ideas on how to get things going quicker?
* *Ubuntu 16.04 LTS 64-bit *15.5 GiB *Intel® Core™ i7-6500U CPU @ 2.50GHz × 4 *PostgreSQL 9.5.11 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit *Netbeans IDE 8.2 *JDK 1.8 *postgresql-42.2.0.jar A: * *The key issue you have here is that you're missing ResultSet.close() and Statement.close() kinds of calls. *As you resolve that (add the relevant close calls), you might find that having a SINGLE con.prepareStatement call (before both for loops) would improve the performance even further (of course, you will not need to close the statement in a loop; however, you would still need to close result sets in a loop). *Then you might apply batch SQL. A: Using EXPLAIN, the point where the query becomes slow can be figured out. One situation where I have encountered a case like this: * *Compound queries, e.g. parameterized similar date ranges, from different tables, then joined on some indexed value. Even if the dates above serve as an index, the query produced in the PreparedStatement could not hit the indexes and ended up doing a scan over the joined data.
doc_23
So when my data provider does API calls, it gives me this error: The response to 'getList' must be like { data : [...] }, but the received data is not an array. The dataProvider is probably wrong for 'getList' The responses of my old API have various data fields like { 'posts': [] } or { 'users': [] }. How can I use these field names instead of { 'data': [] }? A: The 'data' in this case just refers to the type of information that should be returned, not the name of the object. Within your API, you can simply return a list in the following form: const posts = [ { "id":1, "name":"post1" }, { "id":2, "name":"post2" }, ]; return JSON.stringify(posts); Then return that 'posts' object in your response and don't forget to set the expected Content-Range headers. Not sure what language you are using, but the principle above should be easy enough to follow and apply in any language.
doc_24
The outputs are numbers from inputs from an MS Form, which I compose to floats. The condition should check that the input number is between a range of two numbers in all 4 AND statements, and if one of the statements is false, the condition should be false and then send a mail. But it always turns out true. A: Thanks to Gandalf it now works perfectly and here is the solution.
doc_25
This doesn't sound like an upgrade to me. I tried it with this code. int errNo; ClientUser ui; ClientApi client; Error err; client.DefinePort(myport, &err); client.DefineClient(myclient, &err); client.Init(&err); client.Run("tickets", &ui); errNo = client.Final(&err); myport and myclient are string values that have valid port and client values. They've been successfully tested on other commands as well. What I was expecting was a list of my current tickets to be displayed from stdout. A: p4 tickets isn't a server command that you can access via the API, it's implemented directly in the P4 CLI (bypassing the usual client.Run() call). If you actually send the tickets command to the server, the server tells you to upgrade your client because it assumes that a newer client would know that tickets isn't a real command. If you want to implement functionality similar to p4 tickets in your app, take a look at the clientTickets function in clientmain.cc: https://workshop.perforce.com/files/guest/perforce_software/p4/2018-2/client/clientmain.cc#936
doc_26
After my remote call, I store the returned string in that variable. When I try to access it from the CLI again, the variable is undefined although I stored it. This is my class: class CustomCLI extends Command { static User: string static flags = { connect: flags.string({ char: 'c' }), user: flags.boolean({ char: 'u' }) } async run() { if (flags.connect) { //do stuff and store to CustomCLI.User = "Returned String" //Access Given to Admin } if (flags.user) { //do stuff and store to console.log(CustomCLI.User) //this is undefined } } } Execution example
doc_27
import os from abc import ABCMeta, abstractmethod import pandas as pd class Reader(ABCMeta): @abstractmethod def read(cls, f): raise NotImplementedError @staticmethod def _valid(f): return os.path.exists(f) class CSVReader(Reader): def read(self, f): if not self._valid(f): return None else: return pd.read_csv(f).values class XLSReader(Reader): def read(self, f): pass class SHPReader(Reader): def read(self, f): pass Any idea what is the best way to solve it? A: Your Reader class could define a public read method that calls _valid and then _read: from abc import ABC, abstractmethod class Reader(ABC): # Or metaclass=ABCMeta def read(self, f): if not self._valid(f): raise ValueError("Invalid") else: return self._read(f) @abstractmethod def _read(self, f): raise NotImplementedError @classmethod def _valid(cls, f): # Subclasses can have more restrictive validations return os.path.exists(f)
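To round this out, a runnable sketch of that pattern with the question's CSVReader plugged in (a template method: validation happens once in the base class, and subclasses only implement parsing); the file name at the end is hypothetical:

import os
from abc import ABC, abstractmethod

import pandas as pd

class Reader(ABC):
    def read(self, f):
        # Validate once here, delegate parsing to subclasses.
        if not self._valid(f):
            raise ValueError('Invalid file: {}'.format(f))
        return self._read(f)

    @abstractmethod
    def _read(self, f):
        raise NotImplementedError

    @classmethod
    def _valid(cls, f):
        return os.path.exists(f)

class CSVReader(Reader):
    def _read(self, f):
        return pd.read_csv(f).values

# Usage (file name is illustrative): values = CSVReader().read('data.csv')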
doc_28
Exception calling "CreateFromDirectory" with "4" argument(s): "The path is not of a legal form." At line:5 char:4 + [System.IO.Compression.ZipFile]::CreateFromDirectory($sourcedir, + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : ArgumentException Microsoft article on ZipFile.CreateFromDirectory: http://msdn.microsoft.com/en-us/library/hh485721(v=vs.110).aspx The code I'm trying: function ZipFiles( $zipfilename, $sourcedir ) { [Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") $compressionLevel = [System.IO.Compression.CompressionLevel]::Optimal [System.IO.Compression.ZipFile]::CreateFromDirectory($sourcedir, $zipfilename, $compressionLevel, $false) } Get-Content public-build\index.html | ForEach-Object { $_ -replace "data-default-api=`"dev`"", "data-default-api=`"test`"" } | Set-Content public-build\index2.html cp public-build\index2.html public-build\index.html rm public-build\index2.html ZipFiles("public-build.zip", "C:\Users\Administrator\Desktop\public-build") I have tried changing "C:\Users\Administrator\Desktop\public-build" to: "C:\Users\Administrator\Desktop\public-build\" "C:\\Users\\Administrator\\Desktop\\public-build" "C:\\Users\\Administrator\\Desktop\\public-build\\" "public-build" "public-build\" ".\public-build" ".\public-build\" All throw the same error. I have also tried with a foldername of just "publicbuild", in case it was the hyphen, but still got the same error. I'm rather stumped. All I want to do is zip up the folder. Hopefully someone will point out some obvious mistake I'm making, but otherwise I also welcome any alternate approaches. I'd prefer not to install third party tools, but may have to resort to it if there is no other solution. A: I think the problem is in the way you're providing arguments when you call the function. In Powershell arguments are provided as space separated values, it doesn't use the () syntax. ZipFiles "public-build.zip" "C:\Users\Administrator\Desktop\public-build"
doc_29
SELECT `collection_series`.`chart_name`, `datadatexnumericy`.`x` as date, `datadatexnumericy`.`y` FROM (`datadatexnumericy`) JOIN `collection_series` ON `collection_series`.`series_id` = `datadatexnumericy`.`series_id` WHERE `collection_series`.`collection_id` = '265' chart_name date y Sydney 1973-09-30 2.50000 Melbourne 1973-09-30 5.70000 Brisbane 1973-09-30 6.60000 Perth 1973-09-30 7.10000 But what if I want results like below? Is there any solution? Any help would be appreciated, thanks in advance... date Sydney Melbourne Brisbane Perth 1973-09-30 2.50000 5.70000 6.60000 7.10000 Below is my table structure: datadatexnumericy (first table) series_id x y 43532 1991-12-31 -2.10000 Don't be confused about series_id: the city name comes from the collection_series table, where series_id matches and fetches the city name. collection_series (second table): in this table there are columns named collection_id and series_id. The collection_id is '265' and I am matching `collection_series`.`series_id` = `datadatexnumericy`.`series_id` A: I can't think of a way right now to query it in such a fashion, but you could restructure the normally fetched results first. Build it with that specialized format first, then present it. First get the headers (the dates and places etc.), then group the body data according to dates and push them inside another container. Rough example: <?php // temporary container $temp = array(); while($row = whatever_fetch_function_assoc($result)) { $temp[] = $row; // push the rows } // restructure $places = array_column($temp, 'chart_name'); // if this is not available (only PHP 5.5) // foreach($temp as $v) { // $places[] = $v['chart_name']; // if its not available just use foreach // } // header creation $headers = array_merge(array('Date'), $places); // for headers foreach($temp as $v) { // then extract the dates $data[$v['date']][] = $v['y']; // group according to date } ?> Then, once the structure is made, you present it (as you normally would) in a loop: <!-- presentation --> <table cellpadding="10"> <thead> <tr><?php foreach($headers as $h): // headers ?> <th><?php echo $h; ?></th> <?php endforeach; ?></tr> </thead> <tbody> <?php foreach($data as $date => $values): ?> <tr> <td><?php echo $date; // the date ?></td> <?php foreach($values as $d): ?> <td><?php echo $d; ?></td> <?php endforeach; ?> </tr> <?php endforeach; ?> </tbody> </table> Somewhat of a sample output A: If it's for a known set of chart_names, then you can use the following technique for generating the pivot table select dd.x as date, max( case when cs.chart_name = 'Sydney' then dd.y end ) as `Sydney`, max( case when cs.chart_name = 'Melbourne' then dd.y end ) as `Melbourne`, max( case when cs.chart_name = 'Brisbane' then dd.y end ) as `Brisbane`, max( case when cs.chart_name = 'Perth' then dd.y end ) as `Perth` from datadatexnumericy dd join collection_series cs on cs.series_id = dd.series_id group by dd.x You can also add the where condition before the group by as WHERE cs.collection_id = '265' Here is how you can make it dynamic: set @sql = NULL; select group_concat(distinct concat( 'max(case when cs.chart_name = ''', cs.chart_name, ''' then dd.y end) AS ', replace(cs.chart_name, ' ', '') ) ) INTO @sql from collection_series cs join datadatexnumericy dd on cs.series_id = dd.series_id ; set @sql = concat('select dd.x as date, ', @sql, ' from datadatexnumericy dd join collection_series cs on cs.series_id = dd.series_id group by dd.x'); prepare stmt from @sql; execute stmt; deallocate prepare stmt; Check the demo here
doc_30
The problem is that while hiding the status bar and rotating the screen, it leaves a white space at the status bar position until the screen is rotated completely, as in the screenshot below. I guess the steps of operations when rotating the screen are: 1. hide status bar 2. rotate screen 3. resize screen to take the place of the status bar. So until the screen is rotated completely, the user can still see the white space. It is not good, and I want to do something such as: set the color of that blank space to black, or set an animation to hide that blank space, but no luck! So, does anyone have a solution to resolve this? Please help me, thanks a lot! A: You should make the size of the view 320 * 480, as you might have hidden the status bar but the size of the view will be 320 * 460 (the default). Check out this one.
doc_31
What other cool frameworks exist that aren't on .NET and I may not know about? Can we leave out things which have a direct or pretty reasonable analogue, just the kewl shiznit! PS we aren't so bad. I'm pretty sure NDepend started out on .NET and has moved to Java PPS one answer per item please! It makes it a lot easier to discuss them! A: http://ruby.sadi.st/Heckle.html Think you write good tests? Not bloody likely... Put it to the test with heckle. It’ll put your code into submission in seconds. The premise is really really simple to understand: ★ Your tests should pass. ★ Break your code. ★ Now they should fail. You could check this by hand, but why bother? Use heckle and put it to the test: heckle -f ClassName For each failure heckle points out, you've got a test to write. Chances are, your tests suck. A: Maybe you should ask the Java folks (add some Java tag), .NET tag watchers may not know about Java frameworks .NET does not have :) A: * *Liquibase - A library for tracking, managing and applying database changes. *A decent embedded webserver, such as Jetty *A build system equivalent to Maven *An embedded AD/LDAP server for development purposes, such as ApacheDS A: http://www.terracotta.org/ A kind of distributed JVM which shares objects automatically across a farm. Or something. Reading http://willcode4beer.com/design.jsp?set=kill_your_db makes it sound pretty cool. A: At work, we use the ATG e-commerce platform, JBoss to run our local builds and Maven to build everything. We also have components from the Struts framework. Personally speaking, I prefer the Spring Framework. IOC is my new favourite pattern!
doc_32
var UpdateCommand = "UPDATE MY_SCHEMA." + this.Entities.PRODUCT.EntitySet.Name + " SET STOCK=0"; var AffectedRows = this.Entities.ExecuteStoreCommand(UpdateCommand); I want to specify the schema along with the entity name (which later, if implemented in a reusable library method, could be passed as a parameter). So, I tried... var Container = this.Entities.MetadataWorkspace.GetEntityContainer(this.Entities.DefaultContainerName, System.Data.Metadata.Edm.DataSpace.CSpace); var Set = Container.GetEntitySetByName(this.Entities.PRODUCT.EntitySet.Name, true); var SchemaName = Set.MetadataProperties["Schema"].Value; The problem is that the SchemaName returned is always null! I've seen solutions based on parsing the SQL generated by Entity Framework, but that could be fooled (text can contain anything), or be SQL-Server specific. The idea is to be as DB-agnostic as EF is. Question is... how to get an entity schema name from EF objects, without parsing generated SQL, while staying DB-agnostic? A: You can get it from SSpace. In EF5 the following works; it will probably work in EF4 also. // for code-first var Container = this.Entities.MetadataWorkspace.GetEntityContainer("CodeFirstDatabase", DataSpace.SSpace); // db-first var Container = this.Entities.MetadataWorkspace.GetEntityContainer("DbFirstModelStoreContainer", DataSpace.SSpace); var schemaName = Container.GetEntitySetByName(this.Entities.PRODUCT.EntitySet.Name, true).Schema // or var set = Container.GetEntitySetByName(this.Entities.PRODUCT.EntitySet.Name, true); var schemaName = set.MetadataProperties["Schema"].Value;
doc_33
# df is from csv and has blank cells - I've used empty strings to demo here df = pd.DataFrame({'id': ['101', '102', '103', '104'], 'method_1': ['HR', 'q-SUS', 'PEP', 'ET'], 'method_2': ['q-SUS', 'q-IEQ', 'AUC', 'EEG'], 'method_3': ['SC', '', 'HR', 'SC'], 'method_4': ['q-IEQ', '', 'ST', 'HR'], 'method_5': ['PEP', '', 'SC', '']}) print(df) id method_1 method_2 method_3 method_4 method_5 0 101 HR q-SUS SC q-IEQ PEP 1 102 q-SUS q-IEQ 2 103 PEP AUC HR ST SC 3 104 ET EEG SC HR I want to end up with a table that looks something like this: Method A Method B Number of Times Combined HR SC 3 HR q-SUS 1 HR PEP 2 q-IEQ q-SUS 2 EEG ET 1 EEG SC 1 etc. etc. etc. So far I've been trying variations of this code using itertools.combinations and collections Counter: import numpy as np import pandas as pd import itertools from collections import Counter def get_all_combinations_without_nan(row): # remove nan - this is for the blank csv cells set_without_nan = {value for value in row if isinstance(value, str)} # generate all combinations of values in row all_combinations = [] for index, row in df.iterrows(): result = list(itertools.combinations(set_without_nan, 2)) all_combinations.extend(result) return all_combinations # get all possible combinations of values in a row all_rows = df.apply(get_all_combinations_without_nan, 1).values all_rows_flatten = list(itertools.chain.from_iterable(all_rows)) count_combinations = Counter(all_rows_flatten) print(count_combinations) It's doing something, but it seems to be counting multiple times or something (it's counting more combinations than are actually there. I've had a good look on Stack, but can't seem to solve this - everything seems really close though! I hope someone can help - Thanks! A: Use DataFrame.melt for reshape with remove empty strings or missing values, then use DataFrame.merge for all combinations, remove rows with same methods and count by GroupBy.size: df1 = df.melt('id', value_name='method_') df1 = df1[(df1["method_"] != '') & (df1["method_"].notna())] df = (df1.merge(df1, on='id', suffixes=('A','B')) .query("method_A != method_B") .groupby(['method_A','method_B']) .size() .reset_index(name='Number of Times Combined')) print (df.head(20)) method_A method_B Number of Times Combined 0 AUC HR 1 1 AUC PEP 1 2 AUC SC 1 3 AUC ST 1 4 EEG ET 1 5 EEG HR 1 6 EEG SC 1 7 ET EEG 1 8 ET HR 1 9 ET SC 1 10 HR AUC 1 11 HR EEG 1 12 HR ET 1 13 HR PEP 2 14 HR SC 3 15 HR ST 1 16 HR q-IEQ 1 17 HR q-SUS 1 18 PEP AUC 1 19 PEP HR 2
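A note on why the Counter attempt overcounts: the applied function loops over df.iterrows() inside itself, so each row's combinations get extended once per row of the frame, multiplying every count by the number of rows. A minimal corrected sketch of the same Counter approach (sorting each row's values so that (a, b) and (b, a) land in the same key):

from itertools import combinations
from collections import Counter

counts = Counter()
for _, row in df.drop(columns='id').iterrows():
    # keep only non-empty string cells, sort so pair order is canonical
    methods = sorted(v for v in row if isinstance(v, str) and v != '')
    counts.update(combinations(methods, 2))

print(counts[('HR', 'SC')])  # 3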
doc_34
I debugged the application with gdb, which catches the exception, but unfortunately the operation triggering the exception seems to also trash the stack, so I cannot get any detailed information about the place in my code which causes that to happen. The only detail I could finally get was the operation triggering the exception (from the following piece of stack trace): 3 raise() 0x402720ac 2 __aeabi_uldivmod() 0x400bb0b8 1 __divsi3() 0x400b9880 The __aeabi_uldivmod() is performing an unsigned long long division and remainder, so I tried the brute-force approach and searched my code for places that might use that operation, but without much success as it proved to be a daunting task. Also, I tried to check for potential divisions by zero, but again the code base is pretty large and checking every division operation is a cumbersome and somewhat dumb approach. So there must be a smarter way to figure out what's happening. Are there any techniques to track down the causes of such exceptions when the debugger cannot do much to help? UPDATE: After crunching hex numbers, dumping memory and doing stack forensics (thanks Crashworks), I came across this gem in the ARM Compiler documentation (even though I'm not using the ARM Ltd. compiler): Integer division-by-zero errors can be trapped and identified by re-implementing the appropriate C library helper functions. The default behavior when division by zero occurs is that when the signal function is used, or __rt_raise() or __aeabi_idiv0() are re-implemented, __aeabi_idiv0() is called. Otherwise, the division function returns zero. __aeabi_idiv0() raises SIGFPE with an additional argument, DIVBYZERO. So I put a breakpoint at __aeabi_idiv0 (__aeabi_ldiv0) et voilà! I had my complete stack trace before it was completely trashed. Thanks everybody for your very informative answers! Disclaimer: the "winning" answer was chosen solely and subjectively, taking into account the weight of its suggestions in my debugging efforts, because more than one was informative and really helpful. A: (Using the basic idea from Fedor Skrynnikov, but with compiler help instead) Compile your code with -pg. This will insert calls to mcount() and mcountleave() in every function. Do not link against the GCC profiling lib, but provide your own. The only thing you want to do in your mcount() and mcountleave() is to keep a copy of the current stack, so just copy the top 128 bytes or so of the stack to a fixed buffer. Both the stack and the buffer will be in cache all the time, so it's fairly cheap. A: You can implement special guards in functions that can cause the exception. A guard is a simple class; in the constructor of this class you put the name of the file and line (__FILE__, __LINE__) into a file/array/whatever. The main condition is that this storage should be the same for all instances of this class (a kind of stack). In the destructor you remove this entry. To make it work you need to put the creation of this guard on the first line of each function and create it only on the stack. When you leave the current block the destructor will be called, so at the moment of your exception you will know from this improvised call stack which function is causing the problem. Of course, you may put the creation of this class under a debug condition; a minimal sketch of such a guard appears at the end of this page. A: Enable generation of core files, and open the core file with the debugger A: Since it uses raise() to raise the exception, I would expect that signal() should be able to catch it. Is this not the case? 
Alternatively, you can set a conditional breakpoint at __aeabi_uldivmod to break when divisor (r1) is 0. A: My first suggestion would be to open a memory window looking at the region around your stack pointer, and go digging through it to see if you can find uncorrupted stack frames nearby that might give you a clue as to where the crash was. Usually stack-trashes only burn a couple of the stack frames, so if you look upwards a few hundred bytes, you can get past the damaged area and get a general sense of where the code was. You can even look down the stack, on the assumption that the dead function might have called some other function before it died, and thus there might be an old frame still in memory pointing back at the current IP. In the comments, I linked some presentation slides that illustrate the technique on a PowerPC — look at around #73-86 for a case study in a similar botched-stack crash. Obviously your ARM's stack frames will be laid out differently, but the general principle holds.
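Returning to the guard-class suggestion above, here is a minimal C++ sketch (single-threaded, fixed-size storage; all names are illustrative). Because the records live in a global array rather than on the stack, they survive the stack corruption and can be inspected from the debugger after the SIGFPE:

struct CallRecord { const char* file; int line; };
static CallRecord g_call_stack[64]; // improvised call stack, survives stack trashing
static int g_call_depth = 0;

struct Guard {
    Guard(const char* file, int line) {
        if (g_call_depth < 64) {
            g_call_stack[g_call_depth].file = file;
            g_call_stack[g_call_depth].line = line;
        }
        ++g_call_depth;
    }
    ~Guard() { --g_call_depth; } // popped automatically when the block exits
};

#define TRACE_GUARD() Guard _guard(__FILE__, __LINE__)

int risky_division(int a, int b) {
    TRACE_GUARD();  // first statement of every function under suspicion
    return a / b;   // a SIGFPE here leaves g_call_stack intact for inspection
}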
doc_35
I'm trying to create an app in React. The workflow is that the user sees a signup modal where they add their name and email. After submitting the form they are taken to the main screen, where their name is displayed next to an avatar image in the top right corner. I'm using React Router to the effect of something like this var routes = ( <Router history={createHistory()}> <Route path="/" component={NewUserForm}/> <Route path="/user/:userName" component={App}/> </Router> ) My form component: var NewUserForm = React.createClass({ addNewUser: function(event) { event.preventDefault(); var newUser = { name: this.refs.name.value, email: this.refs.email.value }; }, render: function() { return ( <form className="new-user-form" ref="newUser" onSubmit={this.addNewUser}> <input type="text" ref="name" placeholder="Your Name"/> <input type="email" ref="email" placeholder="Your Email"/> <button type="submit">Submit</button> </form> ) } }); My app component: var App = React.createClass({ getInitialState: function() { return { user: {} } }, render: function() { return ( <div className="main-app-body"> <Header user={this.state.user}></Header> <Sidebar></Sidebar> </div> ) } }); Ultimately, I want to pass the user's name and email along to the header component so that they're displayed at the top. How do I pass the user object along? I know how to pass data from parent to child components with props, but not between siblings. Where am I going wrong? A: React's data flow is unidirectional; you can use a store from an architecture like Flux or Redux
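A hedged sketch of the simplest route-based hand-off, given the /user/:userName route above (pre-v4 React Router API, matching the createHistory/createClass style used here; the email would need a query parameter, a store, or a server fetch — the names below are illustrative):

// in NewUserForm.addNewUser, after building newUser:
this.props.history.pushState(null, '/user/' + newUser.name);

// in App, the router injects the URL segment as props.params:
var App = React.createClass({
  render: function() {
    var user = { name: this.props.params.userName };
    return (
      <div className="main-app-body">
        <Header user={user}></Header>
        <Sidebar></Sidebar>
      </div>
    );
  }
});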
doc_36
What I would like is to redirect example.com/blog/article-1 to example.com/article-1 I used the following to redirect, but get a 404 error RewriteEngine On RedirectMatch 301 ^blog(/.*)?$ https://www.example.com/$1 I have tested this with ... ^blog1(/.*)?$ ... and it worked as expected, but anything related to /blog it seems to ignore the htaccess file. There is no longer a directory there, so there shouldn't be any other htaccess file that is taking precedence. Any ideas as to why this isn't redirecting right? A: If you can't beat them, join them! I ended up resolving this by creating the /blog directory that it was trying to look in and put in a new .htaccess file to redirect back to the root directory.
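For reference, a hedged mod_rewrite version of the same redirect (note that RedirectMatch belongs to mod_alias, so the RewriteEngine On line in the original configuration has no effect on it; a 301 cached by the browser can also make a corrected rule appear to fail):

RewriteEngine On
RewriteRule ^blog(/(.*))?$ https://www.example.com/$2 [R=301,L]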
doc_37
E:\Programs\GMM\bin\GMMFailoverTool.exe -mssql="Server=SomeServer;Database=GMM01" list The problem I'm having is executing it properly with PowerShell even without trying to do this via Invoke-Command. $binary = "E:\Programs\GMM\bin\GMMFailoverTool.exe" $command = "-mssql=`"Server=SomeServer;Database=gmm01`" list" Write-Host BINARY: $binary -ForegroundColor Yellow write-Host ARGS: $command -ForegroundColor Yellow Write-Host FullCommand: $binary $command -ForegroundColor Yellow & $binary $command Output: BINARY: E:\Programs\GMM\bin\GMMFailoverTool.exe ARGS: -mssql="Server=SomeServer;Database=gmm01" list FullCommand: E:\Programs\GMM\bin\GMMFailoverTool.exe -mssql="Server=SomeServer;Database=gmm01" list And the return of the command is like it didn't get any parameters at all (or those were incorrect). The question is how to properly pass those arguments where $command is already defined as it should? If I do it by hand without having it all in variables it works… & "E:\Programs\GMM\bin\GMMFailoverTool.exe" -mssql="Server=SomeServer;Database=gmm01" list A: Pass the arguments as an array: $command = '-mssql="Server=SomeServer;Database=gmm01"', 'list' & $binary $command Also, I had some situations where the only way of correctly passing arguments to an external program was to run the command with cmd.exe: $command = '-mssql="Server=SomeServer;Database=gmm01" list' cmd /c "$binary $command" To run the command remotely you need to either define the variables inside the scriptblock: Invoke-Command -Computer 'remotehost.example.com' -ScriptBlock { $binary = ... $command = ... & $binary $command } or (perhaps better, if the value of $command is generated by some other function) pass them into the scriptblock via the parameter -ArgumentList: $binary = ... $command = ... Invoke-Command -Computer 'remotehost.example.com' -ScriptBlock { & $args[0] $args[1] } -ArgumentList $binary, $command because the content of the scriptblock doesn't know anything about the rest of your script.
doc_38
I am developing software with Java SE, using the Java-Image-Scaling library found at: https://code.google.com/p/java-image-scaling/ I'm resizing a 6400x4800 photo of 47 MB. If I run the program within NetBeans, the resizing is performed successfully. If I run the JAR from the command prompt, the resizing also succeeds. If I run the JAR from File Explorer in Windows, the image is not resized and the program hangs forever. It does not generate any exception. The problem is in this line of code (when the .JAR runs from Explorer): BufferedImage rescaled = resampleOp.filter(src, null); I think Windows blocks the resizing because the image is too large, or because the resizing takes too long. When the image being resized was smaller, the Windows error did not occur; I did this test. How can I resolve this problem in Windows? Is there any solution for this? Thanks a lot A: Seems to work just fine for me; scaling an image of 7680x4800 @ 23.3mb JPG. The only issue I did have was the fact that, when double clicked from Windows Explorer, it placed the image in C:\Windows\System32. You might consider popping up a JOptionPane with the full (CanonicalPath) of the output file... import com.mortennobel.imagescaling.ProgressListener; import com.mortennobel.imagescaling.ResampleOp; import java.awt.EventQueue; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.List; import java.util.concurrent.ExecutionException; import javax.imageio.ImageIO; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JOptionPane; import javax.swing.ProgressMonitor; import javax.swing.SwingWorker; import javax.swing.UIManager; import javax.swing.UnsupportedLookAndFeelException; import javax.swing.border.EmptyBorder; public class Test { public static void main(String[] args) { new Test(); } protected JLabel message; public Test() { EventQueue.invokeLater(new Runnable() { @Override public void run() { try { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); } catch (ClassNotFoundException | InstantiationException | IllegalAccessException | UnsupportedLookAndFeelException ex) { ex.printStackTrace(); } message = new JLabel("Resampling, wait for it..."); message.setHorizontalAlignment(JLabel.CENTER); message.setBorder(new EmptyBorder(10, 10, 10, 10)); JFrame frame = new JFrame("Testing"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.add(message); frame.pack(); frame.setLocationRelativeTo(null); frame.setVisible(true); ResampleWorker worker = new ResampleWorker(); worker.execute(); } }); } public class ResampleWorker extends SwingWorker<String, String> { @Override protected String doInBackground() throws Exception { ProgressMonitor pm = new ProgressMonitor(null, "Scaling", "Please wait...", 0, 100); File source = new File("C:\\Users\\shane\\Downloads\\1713601.jpg"); try { System.out.println("Reading..."); publish("Reading source image..."); BufferedImage img = ImageIO.read(source); int toWidth = img.getWidth() * 10; int toHeight = img.getHeight() * 10; System.out.println("From..." + (img.getWidth()) + "x" + (img.getHeight())); System.out.println("to..." + toWidth + "x" + toHeight); publish("Resample..." 
+ toWidth + "x" + toHeight); ResampleOp op = new ResampleOp(toWidth, toHeight); Thread.yield(); op.addProgressListener(new ProgressListener() { public void notifyProgress(float fraction) { int p = (int) (fraction * 100); pm.setProgress(p); // System.out.println(p); } }); BufferedImage scaled = op.filter(img, null); pm.close(); // File dest = new File("scaled.jpg"); // ImageIO.write(scaled, "jpg", dest); // JTextField field = new JTextField(dest.getCanonicalPath()); // JOptionPane.showMessageDialog(null, field); } catch (IOException ex) { JOptionPane.showMessageDialog(null, "Failed to load - " + ex.getMessage()); } publish("Done..."); return null; } @Override protected void process(List<String> chunks) { message.setText(chunks.get(chunks.size() - 1)); message.revalidate(); message.repaint(); } @Override protected void done() { try { get(); JOptionPane.showMessageDialog(null, "All done"); } catch (InterruptedException | ExecutionException ex) { ex.printStackTrace(); JOptionPane.showMessageDialog(null, "Error " + ex.getMessage()); } System.exit(0); } } } Also tested with image of 1920x1200, scaling to 7680x4800.
doc_39
Here is the HTML Code <form method="post" action="handle_post.php"> <p>First Name: <input type="text" name="first_name" size="20"></p> <p>Last Name: <input type="text" name="last_name" size="20"></p> <p>Email Address: <input type="email" name="email" size="30"></p> <p>Posting: <textarea name="posting" rows="5" cols="40"> </textarea> </p> <input type="submit" name="submit" value="Send My Fucking Email"> </form> Here is the PHP code. When I try to make it work, it says that something is wrong on the line with $posting. I am pasting the code here; let me know what is wrong with it. <?php date_default_timezone_set('Africa/Lagos'); $first_name = $_POST['first_name']; $last_name = $_POST['last_name']; $posting = $_POST['posting']; $email = $_POST['email']; $fullname = $first_name . ' ' . $last_name; print "<div>Thank you my lord - $name, for your kind posting in this thread. This is the excerpt of the post: <p>$posting</p> </div>"; $fullname = urlencode((binary) $fullname); $email = urlencode((binary) $email); print "<p>Click <a href=\"thanks.php?name=$name&email=$email\">Here</a> to continue. </p>"; ?> A: You use the variable $name, which is not declared: print "<div>Thank you my lord - $name, for your kind posting in this thread. This is the excerpt of the post: <p>$posting</p> </div>"; A: Turn your print into echo when using variables like that.
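A hedged sketch of the fix for the undefined $name noted in the answers — define it from $fullname before the first print, and use the URL-encoded value only in the link (variable names as in the original script):

$fullname = $first_name . ' ' . $last_name;
$name = $fullname; // define $name before interpolating it below
print "<div>Thank you my lord - $name, for your kind posting in this thread. This is the excerpt of the post: <p>$posting</p> </div>";
$name = urlencode($fullname);
$email = urlencode($email);
print "<p>Click <a href=\"thanks.php?name=$name&email=$email\">Here</a> to continue.</p>";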
doc_40
The project is pretty big, so I'll do my best to include the relevant pieces of code. This is the Model::Draw() function: void Model::Draw(Shader shader, Camera camera) { //Setting uniforms shader.SetUniformMat4("view", camera.GetViewMatrix()); shader.SetUniformMat4("projection", camera.GetProjectionMatrix()); shader.SetUniformMat4("model", m_Transform); m_Mesh.Draw(shader); } m_Mesh.Draw(shader) calls Mesh::Draw(Shader shader): void Mesh::Draw(Shader shader) { shader.Bind(); GlCall(glBindVertexArray(VAO)); GlCall(glDrawElements(GL_TRIANGLES, (GLsizei) Indices.size(), GL_UNSIGNED_INT, 0)); //GlCall(glBindVertexArray(0)); //glActiveTexture(GL_TEXTURE0); } This is the main rendering loop: while (!glfwWindowShouldClose(window)) { simple.SetUniform3f("objColor", 1.0f, 1.0f, 1.0f); lightModel.Draw(simple, context.ActiveCamera); //Render shaded.SetUniform3f("objectColor", glm::vec3(1.0f)); shaded.SetUniform3f("lightColor", glm::vec3(1.0f)); shaded.SetUniform3f("lightPos", glm::vec3(1.0f)); model.Draw(shaded, context.ActiveCamera); //OPENGL glfwSwapBuffers(window); glfwPollEvents(); } Using the glGetError() function I found that the error is 1281 (GL_INVALID_VALUE) and that it occurs on Shader.Bind(), which is just a single line of code: glUseProgram(m_RenderId); It also prints that the three uniforms model, view and projection in the shader do not exist (this is a check that my shader class does; it issues it when glGetUniformLocation(m_RenderId, uniformName) returns -1), even though they are present. Also, by debugging I found out that the two shaders have different m_RenderId values, so I think there's no problem there. The error occurs the second time the while loop runs, so I might be forgetting to do something at the end of the draw call, but by looking online I couldn't find anything. And what is strange is that it gives me an error only if I try to render two different meshes with two different shaders; if I render both the model mesh and the lightModel with the same shader, it works fine. I am pretty new to OpenGL and I've done my best to actually understand what the code is doing, but this error just doesn't make sense to me, so I apologize in advance if this is a silly mistake. I might also be missing some code pieces in the question; if more is necessary, I'll add it. How can I fix it?
doc_41
The images I used is this : Bitmap ImageBitmap = (Bitmap)pictureBox1.Image; var filter = new FiltersSequence(new IFilter[] { Grayscale.CommonAlgorithms.BT709, new Threshold(0x40) }); var binaryImage = filter.Apply(ImageBitmap); // for (int i = 0; i < 10000; i++) { // System.Drawing.Image image = System.Drawing.Image.FromFile(imagePath); // GrayBMP_File.CreateGrayBitmapFile(image, "c:/path/to/8bpp/image.bmp"); // Bitmap ImageBitmap = Convert.Gra ImageBitmap.Con HoughCircleTransformation circleTransform = new HoughCircleTransformation(50); // apply Hough circle transform circleTransform.ProcessImage(binaryImage); Bitmap houghCirlceImage = circleTransform.ToBitmap(); // get circles using relative intensity HoughCircle[] circles = circleTransform.GetCirclesByRelativeIntensity(0.9); int numCircles = circleTransform.CirclesCount; label1.Text = numCircles.ToString(); pictureBox1.Image = houghCirlceImage; System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(ImageBitmap); foreach (HoughCircle circle in circles) { g.DrawEllipse(Pens.Green, circle.X, circle.Y, 10,10); } pictureBox1.Image = ImageBitmap; // ImageBitmap.Dispose(); // binaryImage.Dispose(); } A: Try this python solution from here: import cv2 import numpy as np img = cv2.imread('test.jpg',0) cimg = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR) circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,20, param1=50,param2=30,minRadius=0,maxRadius=0) circles = np.uint16(np.around(circles)) d=1 for i in circles[0,:]: # draw the outer circle cv2.circle(cimg,(i[0],i[1]),i[2],(0,255,0),2) # draw the center of the circle cv2.circle(cimg,(i[0],i[1]),2,(0,0,255),3) crop_img=img[i[0]-i[2]-2:i[0]+i[2]+2,i[1]-i[2]-2:i[1]+i[2]+2] cv2.imshow('cropped circle',crop_img) cv2.imwrite('test_%d.png'%d,crop_img) cv2.waitKey(0) d+=1 cv2.imshow('detected circles',cimg) print(len(circles[0,:])) cv2.waitKey(0) cv2.destroyAllWindows() OUTPUT: 16
doc_42
Based on the role on the dashboard page, there is an ACL operation that needs to be validated. Currently I store the user details and roles in local storage. Since local storage can be modified through the console, is there a better way to keep these details? Is there a good way to keep the data in a service and access it from the components?
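A minimal sketch of the service approach (Angular 6+ syntax; names are illustrative). Note that any client-side copy of roles — service, localStorage, or otherwise — can be tampered with, so the server must re-check permissions on every protected API call; the client-side check is only for UI convenience:

import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class AuthStateService {
  private roles: string[] = [];

  setRoles(roles: string[]): void {
    this.roles = roles; // populated once after login, kept in memory
  }

  hasRole(role: string): boolean {
    return this.roles.indexOf(role) !== -1;
  }
}

// in a component: an *ngIf or a route guard can then call authState.hasRole('admin')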
doc_43
Sample Input XML <chap> <p><b>The</b> <b>Attorney</b> General <b>(Fees)</b><b>a</b><b>b</b><b>c</b> Determination 2012 No 110 commenced on 1 <b>July</b> <b>2012</b> <b>and</b> was repealed on <b>1</b> <b>July</b> <b>2013</b>. The Determination is yet to be amended by:</p> </chap> XSLT <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:guru="Self" exclude-result-prefixes="xs" version="2.0"> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates/> </xsl:copy> </xsl:template> <xsl:template match="b[preceding-sibling::node()[1][self::b]] |b[preceding-sibling::node()[1][normalize-space(.) = '']][preceding-sibling::node()[2][self::b]]"/> <xsl:template match="b"> <xsl:copy> <xsl:apply-templates/> <xsl:if test="following-sibling::node()[1][self::b]"> <xsl:copy-of select="guru:wrap(following-sibling::node()[1], '')"/> </xsl:if> <xsl:if test="following-sibling::node()[1][normalize-space(.) = '']/following-sibling::node()[1][self::b]"> <xsl:copy-of select="guru:wrap(following-sibling::node()[2], following-sibling::node()[1])"/> </xsl:if> </xsl:copy> </xsl:template> <xsl:function name="guru:wrap"> <xsl:param name="b_data"/> <xsl:param name="space"/> <xsl:value-of select="$space"/> <xsl:value-of select="$b_data"/> <xsl:if test="$b_data/following-sibling::node()[1][self::b]"> <xsl:copy-of select="guru:wrap($b_data/following-sibling::node()[1], '')"/> </xsl:if> <xsl:if test="$b_data/following-sibling::node()[1][normalize-space(.) = '']/following-sibling::node()[1][self::b]"> <xsl:copy-of select="guru:wrap($b_data/following-sibling::node()[2], $b_data/following-sibling::node()[1])"/> </xsl:if> </xsl:function> </xsl:stylesheet> Output <chap> <p><b>The Attorney</b> General <b>(Fees)abc</b> Determination 2012 No 110 commenced on 1 <b>July 2012 and</b> was repealed on <b>1 July 2013</b> . The Determination is yet to be amended by:</p> </chap> Desired Output <chap> <p><b>The Attorney</b> General <b>(Fees)abc</b> Determination 2012 No 110 commenced on 1 <b>July 2012 and</b> was repealed on <b>1 July 2013</b>. The Determination is yet to be amended by:</p> </chap> Thanks in Advance. A: I would suggest to use for-each-group group-adjacent="self::b or self::text()[. = ' ']": <xsl:template match="p"> <xsl:copy> <xsl:apply-templates select="@*"/> <xsl:for-each-group select="node()" group-adjacent="self::b or self::text()[. = ' ']"> <xsl:choose> <xsl:when test="current-grouping-key()"> <b> <xsl:apply-templates select="current-group()/node() | current-group()[self::text()]"/> </b> </xsl:when> <xsl:otherwise> <xsl:apply-templates select="current-group()"/> </xsl:otherwise> </xsl:choose> </xsl:for-each-group> </xsl:copy> </xsl:template> https://xsltfiddle.liberty-development.net/gWmuiHU/1 shows the result, it uses XSLT 3 but for XSLT 2 you simply have to remove the xsl:mode and keep the identity template <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates/> </xsl:copy> </xsl:template> you have in your code.
doc_44
Currently the built-in Text Translation cognitive skill supports up to 50,000 characters on the input. The documents that we have could contain up to 1 MB of text. According to the documentation it's possible to split the text into chunks using the built-in Split Skill, however there's no skill that could merge the translated chunks back together. Our goal is to have all the extracted text translated and stored in one index field of type Edm.String, not an array. Is there any way to translate large text blocks when indexing, other than creating a custom Cognitive Skill via Web API for that purpose? A: Yes, the Merge Skill will actually do this. Define the skill in your skillset like the below. The "text" and "offsets" inputs to this skill are optional, and you can use "itemsToInsert" to specify the text you want to merge together (specify the appropriate source for your translation output). Use insertPreTag and insertPostTag if you want to insert perhaps a space before or after each merged section. { "@odata.type": "#Microsoft.Skills.Text.MergeSkill", "description": "Merge text back together", "context": "/document", "insertPreTag": "", "insertPostTag": "", "inputs": [ { "name": "itemsToInsert", "source": "/document/translation_output/*/text" } ], "outputs": [ { "name": "mergedText", "targetName" : "merged_text_field_in_your_index" } ] } A: Below is a snippet in C#, using Microsoft.Azure.Search classes. It follows the suggestion given by Jennifer in the reply above. The skillset definition was tested to properly support translation of the text blocks bigger than 50k characters. private static IList<Skill> GetSkills() { var skills = new List<Skill>(); skills.AddRange(new Skill[] { // ...some skills in the pipeline before translation new ConditionalSkill( name: "05-1-set-language-code-for-split", description: "Set compatible language code for split skill (e.g. 
'ru' is not supported)", context: "/document", inputs: new [] { new InputFieldMappingEntry(name: "condition", source: SplitLanguageExpression), new InputFieldMappingEntry(name: "whenTrue", source: "/document/language_code"), new InputFieldMappingEntry(name: "whenFalse", source: "= 'en'") }, outputs: new [] { new OutputFieldMappingEntry(name: "output", targetName: "language_code_split") } ), new SplitSkill ( name: "05-2-split-original-content", description: "Split original merged content into chunks for translation", defaultLanguageCode: SplitSkillLanguage.En, textSplitMode: TextSplitMode.Pages, maximumPageLength: 50000, context: "/document/merged_content_original", inputs: new [] { new InputFieldMappingEntry(name: "text", source: "/document/merged_content_original"), new InputFieldMappingEntry(name: "languageCode", source: "/document/language_code_split") }, outputs: new [] { new OutputFieldMappingEntry(name: "textItems", targetName: "pages") } ), new TextTranslationSkill ( name: "05-3-translate-original-content-pages", description: "Translate original merged content chunks", defaultToLanguageCode: TextTranslationSkillLanguage.En, context: "/document/merged_content_original/pages/*", inputs: new [] { new InputFieldMappingEntry(name: "text", source: "/document/merged_content_original/pages/*"), new InputFieldMappingEntry(name: "fromLanguageCode", source: "/document/language_code") }, outputs: new [] { new OutputFieldMappingEntry(name: "translatedText", targetName: "translated_text") } ), new MergeSkill ( name: "05-4-merge-translated-content-pages", description: "Merge translated content into one text string", context: "/document", insertPreTag: " ", insertPostTag: " ", inputs: new [] { new InputFieldMappingEntry(name: "itemsToInsert", source: "/document/merged_content_original/pages/*/translated_text") }, outputs: new [] { new OutputFieldMappingEntry(name: "mergedText", targetName: "merged_content_translated") } ), // ... some skills in the pipeline after translation }); return skills; } private static string SplitLanguageExpression { get { var values = Enum.GetValues(typeof(SplitSkillLanguage)).Cast<SplitSkillLanguage>(); var parts = values.Select(v => "($(/document/language_code) == '" + v.ToString().ToLower() +"')"); return "= " + string.Join(" || ", parts); } }
doc_45
I'm wondering if returning 400 would be appropriate, because the request which contains a bad password isn't a bad request per se. The server understands the request perfectly well, and a bad password is totally anticipated, just like a good one; it's just that we need to tell the client the password isn't good to be used for creating account or changing password. Is there any HTTP status that is meant for this purpose? Or should both good and bad password be responded with 200 but the details should be in the response body, e.g. {"validPassword": true}, or {"validPassword": false, "reason": "too long"}? NOTE: This question is not about invalid data for a request. This is about calling an API that's specifically designed to check if a password is of valid format. An invalid password is still valid data for the request. Please do not suggest the other question about "invalid request". A: HTTP status codes belong to the transfer of documents over a network domain, not to the domain the documents are about. For example, if I have a document that describes whether the password "walrus" complies with some policy, I might try to retrieve that document with a request like GET /policy?password=walrus The document returned to me might be an explanation that the "walrus" is compliant with the policy, or it might be a list of policy violations. But in the transfer of documents domain, the response to the query is a current representation of the document, and therefore the appropriate status code is 200. 400 would NOT be appropriate, because that code indicates that (a) the message body of the response is a representation of an explanation of an error (rather than being a copy of the document we asked for) and (b) more precisely indicates that the request was improperly formed. More generally, a successfully retrieved document that explains that you are not on your domain's happy path is still a successfully retrieved document, and should get a 2xx status code, just the same as if you were simply downloading a web page about that password from a website. There are more possibilities when you are sending changes to the server, rather than fetching information from it. POST /e363c9c3-03a9-43fa-9e1c-fe5cad95fb04 HTTP/x.y Content-Type: application/x-www-form-urlencoded action=changePassword&password=walrus Here, it is perfectly reasonable to report a client error, on the grounds that the payload is supposed to be a changePassword message, and "walrus" is semantically nonsense as a value in the message because it violates policy. So 422 Unprocessable Entity might be an option, or 409 Conflict, or 403 Forbidden -- all valid ways of announcing to the transfer of documents over a network domain that the request is unsatisfactory. Imagine a puzzle game, like mastermind, where the player tries to solve the puzzle. Such a request might look like POST /puzzles/59bfc5a5-8e5b-4bf9-b6fd-52c7b3634b3c HTTP/x.y Content-Type: application/x-www-form-urlencoded guess=red,blue,blue,white Even when this guess is "wrong", it's still a valid request within the game domain; the game itself updates, the player gets hints, a row of the decoding board is consumed, and so on. A 2xx success code is still appropriate even though the player didn't win the game. POST /puzzles/59bfc5a5-8e5b-4bf9-b6fd-52c7b3634b3c HTTP/x.y Content-Type: application/x-www-form-urlencoded guess=red,blue,blue,walrus Here, we'd return a 422, because the request itself is broken (walrus is "semantically erroneous" in this context).
doc_46
But it doesn't load the icon in the address bar (even after I log in). When I enter the full path of my icon in the address line, I get the image in the browser, so I think I have permission to load it. The path of the icon is correct, because I have another image in the same folder and it works. So why doesn't it work? And now my code. The JSP code which defines the icon (this tag is written in the head tag): <link rel="shortcut icon" href="img/icon0.png"> and the public permissions in web.xml are: <security-constraint> <web-resource-collection> <web-resource-name>public zone</web-resource-name> <url-pattern>/img/*</url-pattern> </web-resource-collection> and the admin role has permission for all files: <security-constraint> <web-resource-collection> <web-resource-name>adminzone</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> <role-name>student</role-name> </auth-constraint> thanks A: Your config is fine. You haven't mentioned which browser you are using, but if it's Firefox it's probably a caching issue. There are a lot of articles about clearing the favicon cache of Firefox. Here is one: Clear Favicon Cache From Firefox. (I haven't tested it.) Maybe restarting the browser could also help.
doc_47
it is unclear to me if the usage of Arrays.fill() in the finalize() method is correct or if it is a bug. Some answers suggest that a reachabilityFence should be used, but does this mean that it was a bug, or does this mean that the reachabilityFence is a workaround around a bug in the VM? Anybody who can clarify/comment? Copied from https://docs.oracle.com/javase/specs/jls/se9/html/jls-12.html#jls-12.6: "Furthermore, none of the pre-finalization reads of fields of that object may see writes that occur after finalization of that object is initiated." This suggests that the code for NewlyAllocatedArrayFilledByOtherInstanceFinalizer in JDK-8191002 is correct, and that the failure is due to JVM. Or not? A: In short, this is a bug in the Java code, not a bug in the JVM. This code pattern has been abstracted in JDK-8191002 with static class ArrayHolder { private byte[] _bytes; ArrayHolder(final byte[] bytes) { _bytes = bytes.clone(); } byte[] getBytes() { return _bytes.clone(); } @Override protected void finalize() throws Throwable { if (_bytes != null) { Arrays.fill(_bytes, (byte) 'z'); _bytes = null; } super.finalize(); } } where getBytes() may indeed spuriously return z filled arrays instead of an array reflecting the original contents (in theory, it could even return partially filled arrays). The “reading of a field” is the reading of the array reference. The cloning of the array (or any processing of the array) happens afterwards, hence, doesn’t prevent the owner of the field from being garbage collected. Since there is no action that enforces inter-thread memory visibility, this “reading of a field” is not even required to actually happen, the thread may reuse a previously read value (still talking of the value of the reference here), allowing an even earlier collection of the object. This still obeys the requirement of not perceiving writes to the field made by the finalizer, if the finalizer changed the reference. As said, this doesn’t say anything about the array’s contents, as it isn’t the array that has been garbage collected. The array and the object containing a reference to the array are two entirely different objects. Placing a reachability fence on holder after the cloning of the array creates a new dependency that wasn’t there in that the array holder can’t get collected before the cloning of the array has been completed. byte[] getBytes() { byte[] result = _bytes.clone(); Reference.reachabilityFence(this); return result; } Without it, the last access to the object is before the invocation of clone(), but as said, that access might get optimized away by reusing a previously read reference. As stated by JLS §12.6.1.: Optimizing transformations of a program can be designed that reduce the number of objects that are reachable to be less than those which would naively be considered reachable.
doc_48
A: You need to write your own PDF tool then. Which is not exactly an easy task. Honestly, 3rd party tools make your job much easier, why don't you want to use one? If you change your mind, I can suggest iTextSharp. I've used it in the past with great success. Here are some example to get you going: http://www.codeproject.com/Articles/12445/Converting-PDF-to-Text-in-C ps. there are 3 tools used in there.
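If you do go the iTextSharp route, here is a hedged sketch of the text-extraction side (iTextSharp 5-era API; adjust the namespaces and member names to the version you actually install):

using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

var reader = new PdfReader("input.pdf");
var text = new System.Text.StringBuilder();
for (int page = 1; page <= reader.NumberOfPages; page++)
{
    // extract the text of each page and append it
    text.Append(PdfTextExtractor.GetTextFromPage(reader, page));
}
reader.Close();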
doc_49
see code below: excerpt from the main method System.out.println("Enter the number of the account that you would like to modify:"); number=keyboard.nextLong(); keyboard.nextLine(); firstName=null; lastName=null; try{ if(aBank.getAccount(number)!=null){ System.out.println("Account information is listed below"); System.out.println(aBank.getAccount(number).toString()); System.out.println("Modify first name y or n"); answer=keyboard.nextLine(); if(answer.equals("Y")||answer.equals("y")){ System.out.println("Enter first name:"); firstName=keyboard.nextLine(); } System.out.println("Modify last name y or n"); answer=keyboard.nextLine(); if(answer.equals("Y")|| answer.equals("y")){ System.out.println("Enter last name:"); lastName=keyboard.nextLine(); } aBank.changeName(number,firstName,lastName); } else{ System.out.println("Account not found"); } } catch(Exception e){ System.out.println("Unable to process request.\n" + e.getMessage()); } applicable bank class methods: public Account getAccount(long accountNumber ) throws Exception { boolean found=false; for(int i=0;i<accounts.size();i++){ if(accounts.get(i).getAccountNumber().compareTo(accountNumber)==0){ found=true; return accounts.get(i).clone(); } } public void changeName(Long accountNumber, String firstName, String lastName) throws Exception{ if (getAccount(accountNumber)!=null){ accounts.get(accounts.indexOf(getAccount(accountNumber))).getCustomer().modifyName(firstName, lastName); } else{ throw new Exception("Account not found"); } applicable account class methods private Account (Account a){ //copy constructor this.accountNumber=a.accountNumber; this.startBalance=a.startBalance; this.customer=a.customer; this.trans=a.trans; } public Customer getCustomer() { return this.customer.clone(); } public void modifyName(String firstName, String lastName){ if(firstName!=null){ customer.setFirstName(firstName); } if(lastName!=null){ customer.setLastName(lastName); } } applicable customer class methods private Customer(Customer c){ //copy constructor this.customerNumber=c.customerNumber; this.socialSecurityNo=c.socialSecurityNo; this.firstName=c.firstName; this.lastName=c.lastName; } A: It looks like your code shouldn't work at all, because you clone the Customer object, then modify the clone. The same goes for accounts. Some simplification might help with debugging. Much of this code can probably be simplified using the standard collections, rather than iterating and using compareTo. The logic is also a little strange - if the task is to modify the customer details, then why start with an account number and then go to the customer? if (getAccount(accountNumber)!=null){ accounts.get(accounts.indexOf(getAccount(accountNumber))).getCustomer().modifyName(firstName, lastName); } else{ throw new Exception("Account not found"); } could be simplified to something like: getAccount(accountNumber).getCustomer().setName(firstName, lastName); if getAccount and getCustomer threw an exception if the item wasn't found. The getAccount method could be reduced to something like: public Account getAccount(long accountNumber ) { return accounts.get(accountNumber ) } (barely worth having a method for!) if accounts was a Map<Long,Account> A: Since you use .clone in the getter of the account and mainly the customer you do get only a copy of the customer and change the names there. Since String is an immutable object you replace the names in the copy and only there.
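A sketch of the Map-based simplification suggested in the first answer (the field and method below would live inside the bank class; assumes java.util.Map/HashMap are imported and accounts are keyed by account number):

private final Map<Long, Account> accounts = new HashMap<>();

public Account getAccount(long accountNumber) throws Exception {
    Account account = accounts.get(accountNumber);
    if (account == null) {
        throw new Exception("Account not found");
    }
    return account; // the live object; clone only when handing data out for display
}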
doc_50
They say that I can set up a minimum 15-minute cron on their shared hosting, but I tested a cron that runs every minute (for testing purposes): */1 * * * * php -q /home1/john/mydomain.com/cron.php In that cron.php I wrote: <?php $file = "cron.html"; if (!unlink($file)) { echo ("Error deleting $file"); } else { echo ("Deleted $file"); } ?> This cron successfully deleted the cron.html file (when found) every minute. The test was a success, but I can't run a cron on the other domain where my Laravel 5.7 app is located. I tested my code locally: when I run php artisan schedule:run, it runs successfully. I want to run a migration on HostGator shared hosting. That's why I changed the console/Kernel.php file: protected function schedule(Schedule $schedule) { \Log::info('cron worked'); // $schedule->command('inspire') // ->hourly(); $schedule->command('migrate') ->everyMinute(); } And I tried a few crons on HostGator: */1 * * * * /usr/bin/php /var/www/home1/john/domain.com/artisan schedule:run /usr/local/bin/php /home1/john/domain.com/artisan schedule:run >> /dev/null 2>&1 I also tried following this article. Please provide your tested solution if you have done this on HostGator shared hosting.
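For reference, the standard Laravel scheduler entry looks like the sketch below (paths are illustrative — use the PHP binary and project path for your account; with HostGator's 15-minute minimum, everyMinute() tasks will only fire when the cron itself fires):

*/15 * * * * cd /home1/john/domain.com && /usr/local/bin/php artisan schedule:run >> /dev/null 2>&1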
doc_51
document.attachEvent("onclick", get_func(obj, "on_body_click", "event_target_here")); I am attaching a global function get_func() to the event onclick. get_func() returns reference to a function defined in a function object something like this.. function get_func(obj, funcname, params) { if(funcname == "check") { return obj.checkTarget(params); } } Now in the function checkTarget(), I need to check which DOM object was clicked upon. How can I do this? I understand that I need to send the target of onclick event to the global function somehow. Can somebody throw some light on this? Regards A: If you want to pass the object that was clicked on into the global function, you can modify your event attachment to use the click event's source element to something like this: document.attachEvent("onclick", function(e) { get_func((e || event).srcElement, "on_body_click", "event_target_here") }); Then you could pass that object down to the checkTarget function.
doc_52
<android.support.design.widget.CoordinatorLayout android:id="@+id/container" android:layout_width="match_parent" android:layout_height="match_parent"> <android.support.design.widget.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content"> <android.support.v7.widget.Toolbar android:id="@+id/home_toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" app:layout_scrollFlags="scroll|enterAlways|snap" app:titleTextColor="@color/colorWhite" /> <android.support.design.widget.TabLayout android:id="@+id/home_tab_layout" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@color/colorPrimary" app:tabIndicatorColor="@color/colorWhite" app:tabIndicatorHeight="@dimen/tabIndicatorHeight" app:tabMinWidth="@dimen/tabMinWidth" app:tabMode="scrollable" app:tabSelectedTextColor="@color/colorWhite" app:tabTextColor="@color/colorWhite80" /> </android.support.design.widget.AppBarLayout> <android.support.v4.view.ViewPager android:id="@+id/home_view_pager" android:layout_width="match_parent" android:layout_height="match_parent" app:layout_behavior="@string/appbar_scrolling_view_behavior" /> </android.support.design.widget.CoordinatorLayout> In the CoordinatorLayout there are 2 children: one is the AppBarLayout and the other is the ViewPager. When there isn't much content to show in the ViewPager, the AppBarLayout still scrolls. I want to stop that scrolling. How can I achieve this? Thanks in advance.
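For what it's worth, a hedged sketch of one way to stop it: clear the toolbar's scroll flags at runtime when the page content is short (support-library classes as in the layout above; restoring the flags re-enables scrolling for long pages):

Toolbar toolbar = (Toolbar) findViewById(R.id.home_toolbar);
AppBarLayout.LayoutParams params = (AppBarLayout.LayoutParams) toolbar.getLayoutParams();
params.setScrollFlags(0); // 0 clears scroll|enterAlways|snap, pinning the app bar
toolbar.setLayoutParams(params);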
doc_53
A: Assuming your IDs are in A1:A10, and the numeric part is always the last 4 characters: =MAX(--RIGHT(A1:A10,4)) Use CONTROL+SHIFT+ENTER when confirming the formula instead of just ENTER. You will know you have done it right when { } show up around your formula. Note that the { } cannot be added manually. UPDATE: Based on Solar Mike's comment: ="xxx"&RIGHT("0000"&MAX(--RIGHT(A1:A10,4)),4) This is with the assumption that xxx does not change... if xxx changes then it's a little more complicated.
doc_54
Please refer to the screenshots for the error: 1. and 2., reached by clicking Run -> Edit Configurations in the menu. I have searched a lot on Google and found many articles regarding this. All of them suggest setting the edit-configurations options like below. Module: app Package: Deploy default APK Activity: Launch default Activity Target Device: USB Device I did the same, but none of them work for me. After spending the whole day trying to make it work, I came here. This error is due to the launcher activity: there is no launcher activity in a Google Glass project; instead it uses a Voice Trigger intent filter. It is similar to wearable apps, which also don't have a launcher activity. A: Android Studio won't let you use a standard run configuration without specifying a launcher in the manifest. You could try using the gradle command line instead: ./gradlew installDebug A: In case you haven't resolved the issue: I came across a similar issue before, and two ways worked for me: * *Choose "Nothing" as the option. You will be able to run the app manually on the Glass, but your debugger won't work straight away. You have to manually attach the debugger to the process. You may also have to use logcat if there is a bug in the launching process. *Specify an activity (whichever is associated with the voice trigger). Also, I noticed that you created the app using Kotlin. Kotlin activities didn't work in my case, throwing exceptions associated with nullable vars when resuming the activity. Non-activity classes worked fine. Please let me know if a Kotlin activity worked in your case.
doc_55
How can I able to do that? I have used ABCPdf component to generate pdf files. Dim theDoc As Doc = New Doc() theDoc.HtmlOptions.Timeout = 300000 theDoc.HtmlOptions.ImageQuality = 101 'total w=612 h=792 theDoc.Rect.Position(50, 70) theDoc.Rect.Width = 532 theDoc.Rect.Height = 610 Dim theid As Integer = theDoc.AddImageUrl("http://google.com") While theDoc.Chainable(theid) theDoc.Page = theDoc.AddPage() theid = theDoc.AddImageToChain(theid) End While theDoc.Save(System.Web.HttpContext.Current.Server.MapPath("/1.pdf")) theDoc.Clear()
doc_56
Any ideas what is wrong with the original settings? How does 127.0.0.1 get redirected to another IP? Where should I look? My computer is a Windows 7 64-bit laptop. I don't know what 10.114.5.20 is. It is not my local IP. It is not any of my DNS servers. It is not my default gateway. The PAC script can return one of two proxies: either 10.114.5.11 or 10.114.5.14. Not sure what the 10.114.5.20 machine could be; maybe it is the default gateway of the proxies... Just a last update: I downloaded and saved the PAC script locally, and changed the http://... URI to a local file://C:/... one. And surprisingly enough, it worked. I mean http://localhost now goes to my computer, not to the strange http://10.114.5.20/. Reading the PAC script file I noticed that when opened with Notepad all the text goes onto one line, but opening it with WordPad I see several lines. Opening it in binary, the line endings are Unix-style ones (0A) instead of the Windows ones (0D0A). So I suppose the explanation comes down to this: the Automatic configuration script setting in IE doesn't understand Unix-style line endings when parsing the PAC script, so it always returned the proxy, never DIRECT. A: The problem is in the corporate proxy PAC script. It does not allow the "bypass proxy for local addresses" behaviour. Try ping -a 10.114.5.20 or tracert 10.114.5.20 to try to figure out who is at that address. FYI... if you want to use the default settings for the proxy, then don't use "localhost"; instead, use your real IP address. The proxy server should redirect the request back to your own machine. A: Sounds like corporate IT has an entry in your hosts file that's redirecting you. Open Windows Explorer and browse to C:\Windows\System32\drivers\etc. Open the hosts file with Notepad or another text editor. Do you see an entry that directs http://127.0.0.1 somewhere else? If so, commenting that out should fix the issue.
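For reference, a typical PAC function with the missing DIRECT branch for local addresses would look something like this sketch (the proxy port is an assumption; the two proxy IPs are the ones from the question):

function FindProxyForURL(url, host) {
    // bypass the proxy for localhost and plain (unqualified) host names
    if (host == "localhost" || host == "127.0.0.1" || isPlainHostName(host))
        return "DIRECT";
    return "PROXY 10.114.5.11:8080; PROXY 10.114.5.14:8080";
}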
doc_57
import {Injectable} from '@angular/core'; import {Http, Response} from '@angular/http'; import {Car} from './car'; import 'rxjs/add/operator/toPromise'; @Injectable() export class CarService { constructor(private http: Http) {} getCarsSmall() { return this.http.get('./cars-small.json') .toPromise() .then(res => <Car[]> res.json().data) .then(data => { return data; }); } } Here is the src from their site. I did have to import rxjs toPromise and modify the angular core package definition. import {Injectable} from 'angular2/core'; import {Http, Response} from 'angular2/http'; import {Car} from '../domain/car'; @Injectable() export class CarService { constructor(private http: Http) {} getCarsSmall() { return this.http.get('/showcase/resources/data/cars-small.json') .toPromise() .then(res => <Car[]> res.json().data) .then(data => { return data; }); } } Using the complete path solved the issue: return this.http.get('/src/app/cars-small.json') A: For some reason making this change worked- return this.http.get('/src/app/cars-small.json') I don't really understand why I had to go up two directories, when the file is at the same level. I tried app/cars-small.json, and that didn't work either. A: Maybe you should add the folder into webpack package as an asset dir. Then you can request as http.get('api/config.json')... "apps": [ { "root": "src", "outDir": "dist", "assets": [ "api", "assets", "favicon.ico" ],....
doc_58
Method1(); then Method2(); then Method3(); then Method4(); then Method5(); I've also 5 threads running numbered from 1 to 5 I want to implement the following scenario: I want thread one to start using method1 then move to method2 [In parallel I want thread 2 to start using method1 which is now not used] then when thread one moves to method3, and thread two proceeds to method 2, thread 3 should start using the now free method one, and so on. public void Execute(object OPCounter) { //Method 1 lock (thisLock) { FetchedInstructionQueue[PCounter] = Stager.Stage1(InstructionsMemory); } //Method 2 lock (thisLock) { DecordedInstructionQueue[PCounter] = Stager.Stage2(FetchedInstructionQueue, regMem); } //Method 3 lock (thisLock) { ALUResultQueue[PCounter] = Stager.Stage3(DecordedInstructionQueue); } lock (thisLock) { MemoryQueue[PCounter] = Stager.Stage4(DecordedInstructionQueue, memory, ALUResultQueue); } lock (thisLock) { object obj = Stager.Stage5(DecordedInstructionQueue, ALUResultQueue, regMem, memory, MemoryQueue); InternalWriter(PCounter, obj); } } ///This is the initiator of threads private void ExecuteBtn_Click(object sender, EventArgs e) { InstructionsMemory = InstructionsTextBox.Text.Split('\n'); FetchedInstructionQueue = new string[InstructionsMemory.Length]; DecordedInstructionQueue = new Instruction[InstructionsMemory.Length]; ALUResultQueue = new int[InstructionsMemory.Length]; MemoryQueue = new int[InstructionsMemory.Length]; Thread[] threads = new Thread[InstructionsMemory.Length]; for (APCounter = 0; APCounter < InstructionsMemory.Length; APCounter = 5 + APCounter) { if (APCounter + 5 < InstructionsMemory.Length) { object s1 = APCounter; object s2 = APCounter + 1; object s3 = APCounter + 2; object s4 = APCounter + 3; object s5 = APCounter + 4; threads[APCounter] = new Thread(new ParameterizedThreadStart(Execute)); threads[APCounter + 1] = new Thread(new ParameterizedThreadStart(Execute)); threads[APCounter + 2] = new Thread(new ParameterizedThreadStart(Execute)); threads[APCounter + 3] = new Thread(new ParameterizedThreadStart(Execute)); threads[APCounter + 4] = new Thread(new ParameterizedThreadStart(Execute)); threads[APCounter].Start(s1); threads[APCounter + 1].Start(s2); threads[APCounter + 2].Start(s3); threads[APCounter + 3].Start(s4); threads[APCounter + 4].Start(s5); } } A: I'm writing this as an answer since the clarification required won't fit in a comment. You seem to have a pipeline of work to be done (on a particular object, which may or may not mutate). You also have a number of threads to do this pipeline. The pipeline consists of 5 stages. In general, with pipelines, you want one thread per step in the pipeline (that is, one thread for step 1, one thread for step 2, one thread for step 3 and so on). Let's call this Option A. You seem to want to set it up so that the thread follows the object being worked on. So thread one covers object 1 through all 5 stages, then thread 2 covers object 2 and so on. It's not clear why you'd want to do this, but let's run with it anyway. Let's call this Option B. I'll show options using 3 threads and 3 stages for simplicity. Option A: Traditional Pipeline 3 stages, 1 thread per stage, object moves between stages. 
void Main() { var stage1Queue = new BlockingCollection<object>(new ConcurrentQueue<object>()); var stage2Queue = new BlockingCollection<object>(new ConcurrentQueue<object>()); var stage3Queue = new BlockingCollection<object>(new ConcurrentQueue<object>()); var threads = new Thread[] {new Thread(() => Stage1Worker(stage1Queue, stage2Queue)), new Thread(() => Stage2Worker(stage2Queue, stage3Queue)), new Thread(() => Stage3Worker(stage3Queue)) }; foreach (var thread in threads) thread.Start(); stage1Queue.Add("*"); stage1Queue.Add("*"); stage1Queue.Add("*"); Console.ReadKey(); } public void Stage1Worker(BlockingCollection<object> queue, BlockingCollection<object> next) { foreach (var task in queue.GetConsumingEnumerable()) { Console.WriteLine(task); // do work here, even mutating task if needed next.TryAdd(task.ToString() + "*"); // will always succeed for a ConcurrentQueue } } public void Stage2Worker(BlockingCollection<object> queue, BlockingCollection<object> next) { foreach (var task in queue.GetConsumingEnumerable()) { Console.WriteLine(task); // do work here, even mutating task if needed next.TryAdd(task.ToString() + "*"); // will always succeed for a ConcurrentQueue } } public void Stage3Worker(BlockingCollection<object> queue) { foreach (var task in queue.GetConsumingEnumerable()) { Console.WriteLine(task); // do work here, even mutating task if needed // no more work! } } Option B: Synchronised Method Access Pipeline This is quite a strange one, and without knowing the 'why' of this it's hard to find a suitable solution. The following ensures that a single task is executed by a single thread, and the threads wait for access to each method. However, it does not guarantee that thread 1 does task 1, thread 2 task 2 etc.. whichever thread is ready will pick up the 'next' task. object stage1Lock = new object(); object stage2Lock = new object(); object stage3Lock = new object(); void Main() { var tasks = new BlockingCollection<object>(new ConcurrentQueue<object>()); var threads = new Thread[] {new Thread(() => Worker(1, tasks)), new Thread(() => Worker(2, tasks)), new Thread(() => Worker(3, tasks)) }; foreach (var thread in threads) thread.Start(); tasks.Add("*"); tasks.Add("**"); tasks.Add("***"); tasks.Add("****"); tasks.Add("*****"); LINQPad.Util.ReadLine(); } public void Worker(int id, BlockingCollection<object> tasks) { foreach (var task in tasks.GetConsumingEnumerable()) { Console.WriteLine(id + " got task: " + task); lock (stage1Lock){ Console.WriteLine(id + " - Stage 1: " + task); } lock (stage2Lock){ Console.WriteLine(id + " - Stage 2: " + task); } lock (stage3Lock){ Console.WriteLine(id + " - Stage 3: " + task); } } }
doc_59
A: You currently cannot bind to a low port (1-1024) because the tcp proxying service runs as an unprivileged user. If you look in your logs you should see an error similar to: E1030 07:10:54.345547 05091 proxier.go:411] Failed to get a socket for playground: listen tcp 0.0.0.0:80: bind: permission denied This is why the examples all use high number ports. You can try port 8080 or 8443 for standard unprivileged http/s ports until GKE supports binding to low numbered ports.
doc_60
I guess I have to generate a table first with all products for all dates and then join my data onto that. Not sure how to construct this, though. Example of the products table: id - product 1 - test1 2 - test2 3 - test3 example of the daily data: product - date - value test1 - 2020-01-01 - 10 test2 - 2020-01-01 - 8 test3 - 2020-01-01 - 9 test1 - 2020-01-02 - 9 test3 - 2020-01-02 - 10 test2 - 2020-01-03 - 6 test3 - 2020-01-03 - 5 Result I'm looking for: product - date - value test1 - 2020-01-01 - 10 test2 - 2020-01-01 - 8 test3 - 2020-01-01 - 9 test1 - 2020-01-02 - 9 test2 - 2020-01-02 - 0 test3 - 2020-01-02 - 10 test1 - 2020-01-03 - 0 test2 - 2020-01-03 - 6 test3 - 2020-01-03 - 5 A: You could use a cross join in a subquery to generate the missing combinations, then a left join to pick up the values: select t.product, t.date, ifnull(d.value, 0) as value from ( select p.product, dd.date from products p cross join (select distinct date from daily) dd ) t left join daily d on d.product = t.product and d.date = t.date A: You can cross join to get the rows and then left join to bring in the data you want: select p.id, c.date, coalesce(d.value, 0) as value from products p cross join (select distinct date from daily) c left join daily d on d.product = p.id and d.date = c.date; If there are dates that are not in the table, you can generate the dates using generate_series(): select p.id, c.date, coalesce(d.value, 0) as value from products p cross join (select generate_series(min(d.date), max(d.date), interval '1 day') as date from daily ) c left join daily d on d.product = p.id and d.date = c.date;
doc_61
returned format: handleEmployeeResponse({ "records": [ { "fullDesc": "Records for employe", "id": "Emp_1", "name": "Jack" } ] }); In Firebug I can see the response text as handleEmployeeResponse({"records":[{"fullDesc":"Records for employe","id":"Emp_1","name":"Jack"}]}); If I parse the above response using JSONObject jObject = new JSONObject(jString); I will surely get a JSON parsing error, as the above response is not valid JSON at all, so I have to remove "handleEmployeeResponse", "(" and ");" from the response string before I can pass it to JSONObject. Can anyone tell me how to parse JSON with a callback function in Android? A: Have a look here: you should use the JSONTokener class and thus get a JSONObject corresponding to your structure. TOKENER: the example is pretty self-explanatory. A: It looks like your service is returning a response in the JSONP format (JSON with Padding). You either need to regex out the JSON message, or find a way to ask the service not to return the padding.
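As a language-neutral illustration of the "regex out the JSON" approach, here is a short Python sketch that strips the callback wrapper before parsing (the callback name in the payload is the one from the question):

import json
import re

def strip_jsonp(payload):
    # Drop the callback name, the parentheses and the trailing semicolon,
    # keeping only the JSON body inside them.
    match = re.search(r"^\s*\w+\s*\((.*)\)\s*;?\s*$", payload, re.DOTALL)
    if match is None:
        raise ValueError("payload does not look like JSONP")
    return json.loads(match.group(1))

raw = 'handleEmployeeResponse({"records":[{"id":"Emp_1","name":"Jack"}]});'
print(strip_jsonp(raw)["records"][0]["name"])  # prints: Jack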
doc_62
A: If you use http://docs.unity3d.com/Documentation/ScriptReference/Gizmos.DrawIcon.html it should be automatic. Otherwise you can try http://docs.unity3d.com/Documentation/ScriptReference/HandleUtility.GetHandleSize.html to get a correct size.
doc_63
The variable is car speed, so if it exceeds a limit (say 35 kmph) it will continue to stay above that limit for some time before the speed comes down to normal again (0). So I need to exclude such consecutive events and only count it once every time it exceeds that limit. Can someone please help? I tried using dplyr to filter and put a condition around the threshold, but I was not able to succeed. Sample Data Timestamp Speed Threshold 1 2014-04-03 09:23:57 30.07929 0 2 2014-04-03 09:23:55 35.63192 1 3 2014-04-03 09:23:59 34.92283 0 . . . . 4 2014-04-03 09:33:01 37.30859 1 5 2014-04-03 09:33:02 38.58576 1 6 2014-04-03 09:33:03 39.51970 1 7 2014-04-03 09:33:04 38.02424 1 8 2014-04-03 09:33:05 33.12697 0 9 2014-04-03 09:33:39 30.21950 0 10 2014-04-03 09:33:40 31.27000 0 11 2014-04-03 09:33:41 32.00667 1 12 2014-04-03 09:33:42 32.94374 1 13 2014-04-03 09:33:43 33.25141 1 14 2014-04-03 09:33:44 32.76980 1 15 2014-04-03 09:33:45 30.11010 0 16 2014-04-03 09:33:56 31.63525 0 17 2014-04-03 09:33:57 34.61222 0 18 2014-04-03 09:33:58 37.52020 1 19 2014-04-03 09:33:59 40.48424 1 20 2014-04-03 09:34:00 43.43828 0 ............................................................. Output should look like CAR ID Time (Sec) Count XXXX 2014-04-03 09:23:00 1 xxxx 2014-04-03 09:33:00 3 . . . . . . A: If you want to group it by every 10 minutes starting from the 3rd minute you can do it like this: library(tidyverse) library(lubridate) df %>% group_by(Timestamp = str_sub(ymd_hms(Timestamp) - minutes(3), 1, 15)) %>% summarise(Count = sum(Threshold)) %>% mutate(Timestamp = str_c(Timestamp, '3')) A: We can group_by CAR_ID and cut the Timestamp column into groups of every "10 minutes" and calculate how many times the value exceeds the Threshold separately (excluding consecutive entries) using rle. library(dplyr) df %>% group_by(CAR_ID, group = cut(Timestamp, breaks = "10 mins")) %>% summarise(Count = sum(with(rle(Threshold), values == 1))) Make sure that the Timestamp column is of POSIXct (datetime) class and not a string.
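The run-counting logic also translates directly to pandas: count rising edges (0 to 1 transitions) of the flag instead of individual rows, then aggregate per 10-minute bucket. A minimal sketch with made-up values:

import pandas as pd

df = pd.DataFrame({
    "Timestamp": pd.to_datetime([
        "2014-04-03 09:33:01", "2014-04-03 09:33:02", "2014-04-03 09:33:05",
        "2014-04-03 09:33:41", "2014-04-03 09:33:42", "2014-04-03 09:33:45",
        "2014-04-03 09:33:58",
    ]),
    "Threshold": [1, 1, 0, 1, 1, 0, 1],
})
# A new exceedance starts wherever the flag flips from 0 to 1.
rising = (df["Threshold"] == 1) & (df["Threshold"].shift(fill_value=0) == 0)
counts = rising.groupby(df["Timestamp"].dt.floor("10min")).sum()
print(counts)  # one 10-minute bucket containing 3 distinct exceedance events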
doc_64
[ {:day=>4,:name=>'Jay'}, {:day=>1,:name=>'Ben'}, {:day=>4,:name=>'Jill'} ] What is the best way to convert it to a hash with sorted day values as the keys: { :1=>[{:day=>1,:name=>'Ben'}], :4=>[{:day=>4,:name=>'Jay'},{:day=>4,:name=>'Jill'}] } I'm using Ruby 1.9.2 and Rails 3.1.1 A: Personally, I wouldn't bother "sorting" the keys (which amounts to ordering-by-entry-time in Ruby 1.9) until I actually needed to. Then you can use group_by: arr = [{:day=>4,:name=>'Jay'}, {:day=>1,:name=>'Ben'}, {:day=>4,:name=>'Jill'}] arr.group_by { |a| a[:day] } => {4=>[{:day=>4, :name=>"Jay"}, {:day=>4, :name=>"Jill"}], 1=>[{:day=>1, :name=>"Ben"}]} Instead, sort the keys when you actually need them. A: Assuming your array is called list, here's one way using the reduce method: list.reduce({}) { |hash, item| (hash[item[:day]] ||= []) << item; hash } Here's another using the each method, but you have to carry a holder variable around: hash = {} list.each { |item| (hash[item[:day]] ||= []) << item } Once you have the unsorted hash, say in variable foo, you can sort it as Hash[foo.sort] A: Simple answer: data = [ {:day=>4,:name=>'Jay'}, {:day=>1,:name=>'Ben'}, {:day=>4,:name=>'Jill'} ] #expected solution sol = { 1=>[{:day=>1,:name=>'Ben'}], 4=>[{:day=>4,:name=>'Jay'},{:day=>4,:name=>'Jill'}] } res = {} data.each{|h| res[h[:day]] ||= [] res[h[:day]] << h } p res p res == sol #check value p res.keys == sol.keys #check order Problem with this solution: the hash is not sorted as requested. (Anurag's solution has the same problem.) So you must modify the answer a bit: res = {} data.sort_by{|h| h[:day]}.each{|h| res[h[:day]] ||= [] res[h[:day]] << h } p res p res == sol #check value p res.keys == sol.keys #check order A: In Rails you can use OrderedHash: ActiveSupport::OrderedHash[arr.group_by { |a| a[:day] }.sort_by(&:first)] Update: In fact in Ruby 1.9 hash is ordered, so using ActiveSupport extension is not required: Hash[arr.group_by { |a| a[:day] }.sort_by(&:first)]
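For comparison, the same group-and-sort in Python, where (like Ruby 1.9) plain dicts preserve insertion order, so sorting the items before grouping yields the keys in sorted order:

from collections import defaultdict

arr = [{"day": 4, "name": "Jay"},
       {"day": 1, "name": "Ben"},
       {"day": 4, "name": "Jill"}]
grouped = defaultdict(list)
for item in sorted(arr, key=lambda h: h["day"]):  # sort first so keys come out ordered
    grouped[item["day"]].append(item)
print(dict(grouped))
# {1: [{'day': 1, 'name': 'Ben'}], 4: [{'day': 4, 'name': 'Jay'}, {'day': 4, 'name': 'Jill'}]}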
doc_65
'checkbox' => [ 'displayCond' =>'FIELD:uid:!IN:SELECT uid_foreign FROM tx_myext_object_object_mm', 'exclude' => 0, 'label' => 'checkbox', 'config' => [ 'type' => 'check', 'items' => [ '1' => [ '0' => 'LLL:EXT:lang/locallang_core.xlf:labels.enabled' ] ], 'default' => 0 ] ], can this syntax be corrected or is it impossible (this snippet does not work) A: Since TYPO3 7.6 userFunc is available as display Condition. In your case I recommend for your TCA configuration: 'checkbox' => [ 'displayCond' =>'USER:VendorName\\Myext\\DisplayConditionMatcher->displayIfTxMyextObjectHasNoMMRelation', 'exclude' => 1, 'label' => 'Checkbox', 'config' => [ 'type' => 'check', 'default' => 0 ] ], And a PHP class named DisplayConditionMatcher.php located in your extension EXT:myext/Classes/ with following content: <?php namespace VendorName\Myext; /** * Class DisplayConditionMatcher * * @package TYPO3 * @subpackage tx_myext * @author 2016 J.Kummer <typo3 et enobe dot de> * @copyright Copyright belongs to the respective authors * @license http://www.gnu.org/licenses/gpl.html GNU General Public License, version 3 or later */ class DisplayConditionMatcher { /** * Checks for already existing mm relation of tx_myext_object * Returns true, if no mm relation found * * @param array $configuration * @param \TYPO3\CMS\Backend\Form\FormDataProvider\EvaluateDisplayConditions $evaluateDisplayConditions * @return bool */ public function displayIfTxMyextObjectHasNoMMRelation(array $configuration, $evaluateDisplayConditions = null) { $result = true; if (isset($configuration['record']['uid'])) { $countRows = $GLOBALS['TYPO3_DB']->exec_SELECTcountRows( 'uid_foreign', 'tx_myext_object_object_mm', 'uid_foreign = ' . intval($configuration['record']['uid']) ); if ($countRows > 0) { $result = false; } } if (isset($configuration['conditionParameters'][0]) && $configuration['conditionParameters'][0] === 'negate') { $result = !$result; } return $result; } } You can pass additional parameters separated by colon for displayCondition of type userFunc, as described in TYPO3 CMS TCA Reference. For example negation, as already implemented in PHP class: 'displayCond' =>'USER:VendorName\\Myext\\DisplayConditionMatcher->displayIfTxMyextObjectHasNoMMRelation:negate', Adapt names for extension, path and vendor that matches your needs.
doc_66
* *Is there a type alias for shapes I can import from TensorFlow/Keras? *If not, what would be the correct type hint for a shape? What I currently have is this, can I do better? from collections.abc import Sequence from typing import TypeAlias Shape: TypeAlias = Sequence[int | None] input_shape: Shape = (None, 32, 32, 3) # Batch of RGB images from CIFAR100 dataset Edit: I know numpy.typing.NDArray uses typing.Any as a type alias for shape. I am hoping to do better than this.
doc_67
Required dataframe output: if the input file has a column like this, we need to expand the range between the numbers, e.g. 1:10 -> 1,2,3,4,5,6,7,8,9,10. Is it possible to do this with a regex? I don't get a clear picture of how to do it; can someone please help with that? A: UDF: def myudf2=(input:String)=>{ val regex = "('\\d+':'\\d+')".r val out = new ListBuffer[String]() input.replaceAll("'", "").split(",").map(x=>{ if(x.matches("(\\d+:\\d+)")){ val colon = x.split(":") out += (colon(0).toInt to colon(1).toInt).mkString(", ") } else { out += x } }) out.mkString(",").replaceAll("\\[|\\]", "") } val df = Seq((1,"'1':'5','6','7':'10'"),(2,"'1':'6','7','8':'12'")).toDF("id","number") scala> df.show +---+--------------------+ | id| number| +---+--------------------+ | 1|'1':'5','6','7':'10'| | 2|'1':'6','7','8':'12'| +---+--------------------+ val myCostumeudf = udf(myudf2) scala> val outDF = df.withColumn("output", myCostumeudf(df("number"))) scala> outDF.show(5,false) +---+--------------------+---------------------------------------+ |id |number |output | +---+--------------------+---------------------------------------+ |1 |'1':'5','6','7':'10'|1, 2, 3, 4, 5, 6, 7, 8, 9, 10 | |2 |'1':'6','7','8':'12'|1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | +---+--------------------+---------------------------------------+ Try something like the above.
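The token-expansion logic itself is independent of Spark; here is a plain-Python sketch of the same idea (the function name is made up):

import re

def expand(spec):
    out = []
    for token in spec.replace("'", "").split(","):
        if re.fullmatch(r"\d+:\d+", token):
            lo, hi = map(int, token.split(":"))
            out.extend(str(n) for n in range(lo, hi + 1))  # inclusive range
        else:
            out.append(token)
    return ", ".join(out)

print(expand("'1':'5','6','7':'10'"))  # 1, 2, 3, 4, 5, 6, 7, 8, 9, 10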
doc_68
I've had to upgrade the U-boot bootloader on my devices, and naturally I've had to make things fit again. And right now I'm trying to get the U-Boot environment variables accessible from Linux/Android. Long story short, I've created a file /etc/fw_env.config that points to U-Boot's env section in Flash. As documented here: https://elinux.org/U-boot_environment_variables_in_linux This has not been successful, and so I started adding print statements to the source code to debug my device. I tracked the error down to a get_config() function, which as one could imagine, opens the /etc/fw_env.config and writes the values within to the necessary variables. I've narrowed it further down to the sscanf() function, which returns 0, as in 0 variables read and/or written. So, as a sanity check, I isolated the function and made my own little program separately from my source code (variable names and structures I kept exactly the same). /* sscanf example */ #include <stdio.h> struct envdev_s { const char *devname; /* Device name */ long long devoff; /* Device offset */ unsigned long env_size; /* environment size */ unsigned long erase_size; /* device erase size */ unsigned long env_sectors; /* number of environment sectors */ unsigned int mtd_type; /* type of the MTD device */ }; static struct envdev_s envdevices[2] = {}; #define DEVNAME(i) envdevices[(i)].devname #define DEVOFFSET(i) envdevices[(i)].devoff #define ENVSIZE(i) envdevices[(i)].env_size #define DEVESIZE(i) envdevices[(i)].erase_size #define ENVSECTORS(i) envdevices[(i)].env_sectors #define DEVTYPE(i) envdevices[(i)].mtd_type int main () { char dump [] = "/dev/mtd1 0xc0000 0x2000 0x2000\n"; char *devname; int i = 0; int rc; printf("I was here in get_config : dump = %s\n", dump); printf("I was here in get_config : i = %d\n", i); rc = sscanf(dump, "%ms %lli %lx %lx %lx", &devname, &DEVOFFSET(i), &ENVSIZE(i), &DEVESIZE(i), &ENVSECTORS(i)); printf("I was here in get_config : rc = %d\n", rc); return 0; } Also recreated here: http://cpp.sh/5ckms Now, when I run this independently, it works as I expect it should, particularly outputting: I was here in get_config : rc = 4 4 being successful, as char dump [] = "/dev/mtd1 0xc0000 0x2000 0x2000\n"; But when I compile this and run it on my device, it returns: I was here in get_config : rc = 0 Computer says NO! And no other error messages to work with. I'm obviously missing some fundamental understanding here. Either some permissions, or some setup-variables somewhere, but I wouldn't know in where to start. Could someone please point me in the right direction? A: For completeness, I am answering this question based on the help I have received here on StackOverflow: As stated in the comments, %ms was not added until Android 9, and I was working on Android 6. It still did not produce any compiler errors which is particularly misleading. I ended up using %s, which worked fine.
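As a side note, if you only need to sanity-check a /etc/fw_env.config line rather than the C parser itself, the same field layout (device name followed by hex numbers) is easy to pick apart in Python:

line = "/dev/mtd1 0xc0000 0x2000 0x2000"
devname, *hex_fields = line.split()
devoff, env_size, erase_size = (int(field, 16) for field in hex_fields)
print(devname, devoff, env_size, erase_size)  # /dev/mtd1 786432 8192 8192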
doc_69
PreviewHolder = CameraPreview.getHolder(); PreviewHolder.addCallback(this); if(Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB) PreviewHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); A: For 3.0 (honeycomb) and higher, you do not need to call this method at all. As the docs state, it is ignored and set automatically. SurfaceHolder.setType is deprecated… But required?
doc_70
REGEX: .replace(/((<)(\/|)([a-zA-Z-Z0-9]+))/gi,'\n$1') What does this do? INPUT: <div id="page"><div id="header"><h1><a href="#">Burger Pointer</a></h1><ul class="left"><li><a href="#">Menu</a></li><li><a href="#">Location</a></li><li><a href="#">About Us</a></li><li><a href="#">BP Gear</a></li></ul></div></div> OUTPUT: <div id="page"> <div id="header"> <h1> <a href="#">Burger Pointer </a> </h1> <ul class="left"> <li> <a href="#">Menu </a> </li> ... QUESTION Is there a way to check if group 1, 4th capturing group is NOT a|h1|etc... using regexes so the output would be: <div id="page"> <div id="header"> <h1><a href="#">Burger Pointer</a></h1> <ul class="left"> <li> <a href="#">Menu</a> </li> ... PROGRESS Not currently working, see example here .replace(/(<|<\/)([a-zA-Z-Z0-9]+)/gi,function($0, $1, $2) { if (["h1","a"].indexOf($2)) { return "$0" } else { return "/n$1$2" } }) A: If I've understood your problem correctly you want to remove linebreaks inside elements of certain tags. One way to do this correctly is to convert it to HTML then manipulate the tags. To do that you can create a temporary HTML element and inject your HTML into it. You'll notice that apart from removing the linebreaks, this method will also close your div tags, since the HTML you provided is invalid. This isn't a complete solution or a neat architecture, just a proof of concept of how this type of problem could be solved. Supplying a pure javascript and a jquery version (since you specify jquery even though you have no jquery code). To find out what the individual commands do, read up on them in the jquery documentation or MDN reference. jQuery var temporaryElement = $("<body />").html(inputString); temporaryElement.find("h1, a").each(function() { $(this).html($(this).html().replace(/\n/g, ""))); } console.log(temporaryElement.html()); Pure Javascript var inputString = `<div id="page"> <div id="header"> <h1> <a href="#">Burger Pointer </a> </h1> <ul class="left"> <li> <a href="#">Menu </a> </li>`; function removeLinebreaksInTag(parent, tagName) { var elements = parent.getElementsByTagName(tagName); for (var i = 0 ; i < elements.length ; i++) { elements[i].innerHTML = elements[i].innerHTML.replace(/\n/g, ""); } } function cleanUpHtml(html) { var temporaryElement = document.createElement("body"); temporaryElement.innerHTML = html; removeLinebreaksInTag(temporaryElement, "h1"); removeLinebreaksInTag(temporaryElement, "a"); return temporaryElement.innerHTML; } console.log(cleanUpHtml(inputString)); A: From your examples, you need to * *capture <a> <h1> tag but don't catch </a> and </h1> tag (since in your output there is a newline before <h1> and<a> tags. you can achieve it with Negative Lookahead . The Regex is (?!<\/a|<\/h1)((<)(\/|)([a-zA-Z-Z0-9]+)) You can find a demo here Input is <!-- Comments Testing --> <div id="page"><div id="header"><h1><a href="#">Burger Pointer</a></h1><ul class="left"><li><a href="#">Menu</a></li><li><a href="#">Location</a></li><li><a href="#">About Us</a></li><li><a href="#">BP Gear</a></li></ul></div></div> Output is <!-- Comments Testing --> <div id="page"> <div id="header"> <h1> <a href="#">Burger Pointer</a></h1> <ul class="left"> <li> <a href="#">Menu</a> </li> <li> <a href="#">Location</a> </li> <li> <a href="#">About Us</a> </li> <li> <a href="#">BP Gear</a> </li> </ul> </div> </div> The issue is it also captures <a> inside <h1> tag. Since javascript doesn't support lookbehinds, I cant find a way to eliminate these matches. 
If you want to negate all <a> and <h1> tags like you asked in ur question then you can try this regex ((<)(\/|)(?!a|h1)([a-zA-Z0-9]+)) The output for this would be <!-- Comments Testing --> <div id="page"> <div id="header"><h1><a href="#">Burger Pointer</a></h1> <ul class="left"> <li><a href="#">Menu</a> </li> <li><a href="#">Location</a> </li> <li><a href="#">About Us</a> </li> <li><a href="#">BP Gear</a> </li> </ul> </div> </div> you can find the demo here
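The negative-lookahead trick is not specific to JavaScript; the same replacement can be sketched in Python (word boundaries are added here so that tags such as abbr, which merely start with "a", are not accidentally skipped):

import re

html = ('<div id="page"><div id="header"><h1><a href="#">Burger Pointer</a></h1>'
        '<ul class="left"><li><a href="#">Menu</a></li></ul></div></div>')
# Prefix a newline to every opening/closing tag except <a>, </a> and </h1>.
pretty = re.sub(r'(</?(?!a\b|h1\b)[A-Za-z0-9]+)', r'\n\1', html)
print(pretty.lstrip('\n'))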
doc_71
ProjectionList projectionList = Projections.projectionList(); projectionList.add(Projections.property("b.*")); Getting this exception, could not resolve property: * of: MyClassName
doc_72
I would like to see debug information from within the sshj library to help me determine what's failing: username, key exchange or something else. I am familiar with log4j and I can put logging statements within my code, but I can't find an example (simple to follow) which shows me how to hook up log4j to slf4j and then tell sshj to use the logger. ''' SSHClient sshClient = new SSHClient(); try { String username = "testuser"; File privateKey = new File("/mykeys/keyname"); KeyProvider keys; sshClient.addHostKeyVerifier(new PromiscuousVerifier()); keys = sshClient.loadKeys(privateKey.getPath()); sshClient.connect("1.2.3.4", 22); sshClient.authPublickey(username, keys); SFTPClient sftpClient = sshClient.newSFTPClient(); sftpClient.put("./send/file1.xml", "file1.xml"); sshClient.close(); } catch (UserAuthException e) { // TODO Auto-generated catch block System.out.println(e.getMessage()); } catch (TransportException e) { // TODO Auto-generated catch block System.out.println(e.getMessage()); } catch (IOException e) { // TODO Auto-generated catch block System.out.println(e.getMessage()); } ''' A: Adding <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.6.6</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.6.6</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.16</version> </dependency> to pom.xml and creating a log4j.properties file did the trick for me: # Define the root logger with appender file log = ssh-test.log log4j.rootLogger = DEBUG, FILE # Define the file appender log4j.appender.FILE=org.apache.log4j.FileAppender log4j.appender.FILE.File=${log}/log.out # Define the layout for file appender log4j.appender.FILE.layout=org.apache.log4j.PatternLayout log4j.appender.FILE.layout.conversionPattern=%m%n
doc_73
A: After checking the documentation you provided and assuming it's what you followed, https://cs50.readthedocs.io/projects/check50/en/latest/ states "Under Windows, please install the Linux subsystem. Then install check50 within the subsystem." But in your picture you are using PowerShell (PS) so you can either start wsl from it by using the command wsl or you can open a new terminal from the vscode gui but make sure you selected wsl this time !
doc_74
class DashboardTableViewController: UITableViewController { var userData = [String]() var secondUserData = [String]() var name: String = "" override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) } /* This function gets the user account information from a JSON in a simple request */ private func loadUserData(completion: @escaping (_ myArray: [String]) -> Void) { let session = Twitter.sharedInstance().sessionStore.session() let client = TWTRAPIClient.withCurrentUser() let userInfoURL = "https://api.twitter.com/1.1/users/show.json" let params = ["user_id": session?.userID] var clientError : NSError? let request = client.urlRequest(withMethod: "GET", url: userInfoURL, parameters: params, error: &clientError) client.sendTwitterRequest(request) { (response, data, connectionError) -> Void in if connectionError != nil { print("Error: \(connectionError)") } do { let json = JSON(data: data!) if let userName = json["name"].string, let description = json["description"].string, let followersCount = json["followers_count"].int, let favouritesCount = json["favourites_count"].int, let followingCount = json["friends_count"].int, let lang = json["lang"].string, let nickname = json["screen_name"].string { self.userData.append(userName) self.userData.append(nickname) self.userData.append(String(followersCount)) self.userData.append(String(followingCount)) self.userData.append(String("22")) self.userData.append(lang) self.userData.append(description) self.userData.append("No country") completion(self.userData) } } } } /* This closure helps us to fill the labels once the request has been finished in loadUserData */ func manageUserData(label: UILabel, index: Int) { loadUserData { (result: [String]) in label.text = result[index] } } // MARK: - Table view data source override func numberOfSections(in tableView: UITableView) -> Int { // #warning Incomplete implementation, return the number of sections return 1 } override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete implementation, return the number of rows return self.userData.count } override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let rowNumber = indexPath.row let cellIdentifier = "TableViewCell" guard let cell = tableView.dequeueReusableCell(withIdentifier: cellIdentifier, for: indexPath) as? TableViewCellController else { fatalError("The dequeued cell is not an instance of TableViewCellController.") } switch rowNumber { case 0: cell.titlePlaceholder.text = "Name:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 1: cell.titlePlaceholder.text = "Nickname:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 2: cell.titlePlaceholder.text = "Followers:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 3: cell.titlePlaceholder.text = "Following:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 4: cell.titlePlaceholder.text = "Tweets:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 5: cell.titlePlaceholder.text = "Language:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 6: cell.titlePlaceholder.text = "Biography:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) case 7: cell.titlePlaceholder.text = "Country:" manageUserData(label: cell.valuePlaceholder, index: rowNumber) default: cell.titlePlaceholder.text = "?" 
cell.valuePlaceholder.text = "?" } return cell } } Any idea why this is happening? Thank you very much! UPDATE! I solved this problem by adding self.tableView.reloadData() after completion() inside the loadUserData() function. Hope it helps! A: Swift solution: tableView.reloadData() If you are calling it from outside the main thread: DispatchQueue.main.async { self.tableView.reloadData() } A: You must reload the table view data after receiving the request, with [self.tableView reloadData];
doc_75
$('.holder').mousedown(function() { return false; }); $('.holder').mousedown(function() { var sel = window.getSelection(); var selText = sel.toString(); $("a[href='http://www.uniqueUrlForSelectionShareTwitter.com']").attr('href', 'https://twitter.com/intent/tweet?text=' + encodeURIComponent(selText.trim())); }); The js fiddle http://jsfiddle.net/fmtcwrwb/5/ A: The 'click' event fires before the href redirection is executed, so you can do: $('your-selector').click(function(){ $('your-a-element').attr('href', 'http://new-url-goes-here.com') .click(); });
doc_76
That is, for a Date data type is it true that the following two are logically equivalent? :x > to_date('20210605 00:00:00','YYYYMMDD HH24:MI:SS') :x >= to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS') Oracle 12.1 Enterprise Edition. Thanks in advance. Setup: (though this is probably a bit too much for this question, but I was using this setup for other tests) drop table p; create table p ( grp CHAR(3 byte) , dt DATE , payment_amount NUMBER ) nocompress tablespace data partition by LIST (grp) subpartition by range (dt) ( partition P_P_A values ('A') ( subpartition P_P_A_20210410 values less than (to_date('20210410 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210424 values less than (to_date('20210424 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210508 values less than (to_date('20210508 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210522 values less than (to_date('20210522 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210605 values less than (to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210619 values less than (to_date('20210619 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_A_20210703 values less than (to_date('20210703 00:00:01','YYYYMMDD HH24:MI:SS')) ) ,partition P_P_B values ('B') ( subpartition P_P_B_20210410 values less than (to_date('20210410 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210424 values less than (to_date('20210424 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210508 values less than (to_date('20210508 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210522 values less than (to_date('20210522 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210605 values less than (to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210619 values less than (to_date('20210619 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_B_20210703 values less than (to_date('20210703 00:00:01','YYYYMMDD HH24:MI:SS')) ) ,partition P_P_C values ('C') ( subpartition P_P_C_20210410 values less than (to_date('20210410 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210424 values less than (to_date('20210424 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210508 values less than (to_date('20210508 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210522 values less than (to_date('20210522 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210605 values less than (to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210619 values less than (to_date('20210619 00:00:01','YYYYMMDD HH24:MI:SS')) ,subpartition P_P_C_20210703 values less than (to_date('20210703 00:00:01','YYYYMMDD HH24:MI:SS')) ) ) ; insert into p ( grp ,dt ,payment_amount ) with "D" as (select /*+ materialize */ 1 from dual connect by level <= 1000) select case mod(rownum, 3) when 1 then 'A' when 2 then 'B' when 0 then 'C' end , to_date('20210410','YYYYMMDD') + 14 * mod(rownum, 7) , dbms_random.value(500, 5000) from "D", "D" ; commit; exec dbms_stats.gather_table_stats(null, 'P'); Query 1 - Strict Inequality select sum(P.payment_amount) from p "P" where P.grp = 'B' and P.dt > to_date('20210605 00:00:00','YYYYMMDD HH24:MI:SS') and P.dt < to_date('20210703 00:00:01','YYYYMMDD HH24:MI:SS') ; ---------------------------------------------------------------------------------------------------- | id | Operation | name | rows | Bytes | cost (%CPU)| time | Pstart| Pstop | ---------------------------------------------------------------------------------------------------- 
| 0 | select statement | | 1 | 34 | 2152 (1)| 00:00:01 | | | | 1 | sort AGGREGATE | | 1 | 34 | | | | | | 2 | partition LIST single | | 111K| 3689K| 2152 (1)| 00:00:01 | key | key | | 3 | partition range ITERATOR| | 111K| 3689K| 2152 (1)| 00:00:01 | 5 | 7 | |* 4 | table access full | p | 111K| 3689K| 2152 (1)| 00:00:01 | | | ---------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter("P"."DT">to_date(' 2021-06-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) Query 2 - Non-Strict Inequality: select sum(P.payment_amount) from p "P" where P.grp = 'B' and P.dt >= to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS') and P.dt <= to_date('20210703 00:00:00','YYYYMMDD HH24:MI:SS') ; ---------------------------------------------------------------------------------------------------- | id | Operation | name | rows | Bytes | cost (%CPU)| time | Pstart| Pstop | ---------------------------------------------------------------------------------------------------- | 0 | select statement | | 1 | 34 | 1436 (1)| 00:00:01 | | | | 1 | sort AGGREGATE | | 1 | 34 | | | | | | 2 | partition LIST single | | 158K| 5270K| 1436 (1)| 00:00:01 | key | key | | 3 | partition range ITERATOR| | 158K| 5270K| 1436 (1)| 00:00:01 | 6 | 7 | |* 4 | table access full | p | 158K| 5270K| 1436 (1)| 00:00:01 | | | ---------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter("P"."DT"<=to_date(' 2021-07-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) Query 3 - No Predicate: select sum(P.payment_amount) from p "P" where P.grp = 'B' and P.dt >= to_date('20210605 00:00:01','YYYYMMDD HH24:MI:SS') and P.dt < to_date('20210703 00:00:01','YYYYMMDD HH24:MI:SS') ; ---------------------------------------------------------------------------------------------------- | id | Operation | name | rows | Bytes | cost (%CPU)| time | Pstart| Pstop | ---------------------------------------------------------------------------------------------------- | 0 | select statement | | 1 | 34 | 1436 (1)| 00:00:01 | | | | 1 | sort AGGREGATE | | 1 | 34 | | | | | | 2 | partition LIST single | | 158K| 5270K| 1436 (1)| 00:00:01 | key | key | | 3 | partition range ITERATOR| | 158K| 5270K| 1436 (1)| 00:00:01 | 6 | 7 | | 4 | table access full | p | 158K| 5270K| 1436 (1)| 00:00:01 | | | ---------------------------------------------------------------------------------------------------- Edit: Addendum to clarify use case and motivation for the above experimental subpartition definition. The application's current subpartition definitions have the following form. ,partition P_P_B values ('B') ( subpartition P_P_B_20210410 values less than (to_date('20210411','YYYYMMDD')) ,subpartition P_P_B_20210424 values less than (to_date('20210425','YYYYMMDD')) ,subpartition P_P_B_20210508 values less than (to_date('20210509','YYYYMMDD')) ,subpartition P_P_B_20210522 values less than (to_date('20210523','YYYYMMDD')) ,subpartition P_P_B_20210605 values less than (to_date('20210606','YYYYMMDD')) ,subpartition P_P_B_20210619 values less than (to_date('20210620','YYYYMMDD')) ,subpartition P_P_B_20210703 values less than (to_date('20210704','YYYYMMDD')) ,subpartition P_P_B_20210717 values less than (to_date('20210718','YYYYMMDD')) ) The application generates queries with predicates as follows. 
P.dt between to_date('20210606 00:00:00','YYYYMMDD HH24:MI:SS') and to_date('20210703 23:59:59','YYYYMMDD HH24:MI:SS') This causes a filter predicate on the upper end of the range. 4 - filter("P"."DT"<=to_date(' 2021-07-03 23:59:59', 'syyyy-mm-dd hh24:mi:ss')) Thus, the confusion as to the seemingly non-equivalence of the following two expressions prompted this post. P.dt <= to_date('20210703 23:59:59','YYYYMMDD HH24:MI:SS') and P.dt < to_date('20210704 00:00:00','YYYYMMDD HH24:MI:SS') This filter predicate can be avoided if the application is modified to generate the predicate as follows. P.dt >= to_date('20210606 00:00:00','YYYYMMDD HH24:MI:SS') and P.dt < to_date('20210703 00:00:00','YYYYMMDD HH24:MI:SS') + 1 Having to do this seems "buggy" or "hackish". Was hoping to get clarification from this community.
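The reason the 23:59:59 rewrite is even plausible is that Oracle DATE has one-second granularity; with sub-second values the two predicates genuinely differ, which a quick Python check makes concrete (the probe value is hypothetical):

from datetime import datetime, timedelta

bound = datetime(2021, 7, 4)
probe = datetime(2021, 7, 3, 23, 59, 59, 500000)  # a sub-second timestamp
print(probe <= bound - timedelta(seconds=1))  # False: "<= 23:59:59" excludes it
print(probe < bound)                          # True: "< midnight" includes it

The two forms coincide only because DATE values cannot be sub-second, which is a fact about the datatype rather than something the optimizer appears to reason about when matching partition bounds.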
doc_77
http://www.ficksworkshop.com/blog/14-coding/65-installing-gcc-on-mac Everything seems to work, but 'select set' is not updating the right (AFAIK) link. $ gcc -version i686-apple-darwin11-llvm-gcc-4.2: no input files $ which gcc /usr/bin/gcc $ ls -l /usr/bin/gcc lrwxr-xr-x 1 root wheel 12 Jul 14 2013 /usr/bin/gcc -> llvm-gcc-4.2 According to the guide, Macports installs to /opt/local/bin. Select is changing that link accordingly: $ ls -l /opt/local/bin/gcc lrwxr-xr-x 1 root admin 25 Sep 28 12:20 /opt/local/bin/gcc -> /opt/local/bin/gcc-mp-4.7 But make is calling /usr/bin/gcc. Can I manually change the symbolic link or is there a more elegant solution? A: You need to put /opt/local/bin earlier in your PATH than /usr/bin. As I recall, the installer package for MacPorts modifies your ~/.profile or ~/.bash_profile to do this for you. However, that will only affect shells which are started after installing MacPorts. So, the first thing to try is to simply open a new shell and see if things are working as you expect there.
doc_78
Right now, the admin and regular web site are sitting in the same web application. If I make only a small change to the admin pages, I still need to compile everything. Is there a way to separate them, so I only need to upload the admin part without touching the web site? The only solution I can think of is to create another web app. Is there another way to do it? A: You can separate your mvc application into Areas http://msdn.microsoft.com/en-us/library/ee671793.aspx Areas help you better organize your application. They won't let you deploy just that area, but VS2010 does have some good deployment features. Here's a good article http://weblogs.asp.net/scottgu/archive/2010/07/29/vs-2010-web-deployment.aspx
doc_79
the documentation, I couldn't find anything that would accomplish this for Cell. Maybe this is an antipattern? Code: After digging into the code I added the following to Cell's implementation to see if it would work. pub fn cmp_and_set(&self, old: T, new: T) -> T { unsafe { ::intrinsics::atomic_cxchg(self.value.get(), old, new) } } // ^ there are many many reasons why this is not so great // this is just an example of what I'm looking for simple usage fn main() { let c0 = Cell::new(10); let val0 = c0.cmp_and_set(11, 5); assert_eq!(c0.get(), 5); let val1 = c0.cmp_and_set(10, 42); assert_eq!(c0.get(), 42); } As far as I can tell, for very basic cases it works, but again there are many many reasons why the particular implementation is less than stellar. The fact that I edited the standard library to get what I was looking for means I'm certainly attempting to implement some sort of antipattern. Background: This was prompted by re-reading the following from The Rust Programming Language It is still possible to violate your own invariants using this wrapper, so be careful when using it. If a field is wrapped in Cell, it's a nice indicator that the chunk of data is mutable and may not stay the same between the time you first read it and when you intend to use it. A: TL;DR: No, there's not, because it's unnecessary. Compare and Set is only valuable if two actors (or more) are modifying the object in parallel. While Cell allows internal mutability, it is not thread-safe, so you will never be in a situation where two actors will be attempting to modify it in parallel. Therefore, you can just use get(), compare, and set() if it suits you. And nobody will change the value between your get() and set(), providing you do not call other code yourself.
doc_80
I want to get the reference to a node or item in my data. After checking the docs I found the following. const listRef = db.list('items'); Notice the user of the .list() method. The return type of the above statement is AngularFireList{[]}. I was hoping to get the return type of Reference. Is this the correct way to get a reference to a node so that I can perform CRUD to it? A: You need to use db.object() to get a single firebase.database.Reference. const item = db.object('items/itemID').valueChanges(); Check the official doc You can perform the CRUD like const itemRef = db.object('items/itemID'); itemRef.remove(); itemRef.set({ name: 'new name!'}); itemRef.update({ age: newAge });
doc_81
But after a new update of the Dagger Hilt dependencies, the project is not able to run. I am getting this error for the activities and fragments in which I have implemented Dagger and ViewModels with Dependency Injection. Earlier, before the dependency update, it was working fine. Here is the screenshot for the same: Here is the class that is created by Dagger and that has the error: Has anyone faced this same issue? I don't know what I am doing wrong. I have followed this tutorial. The tutorial is also working fine for me, but I am getting the above errors after updating the dependencies. A: OK, I found out what the problem was. The problem is in the official documentation of Dagger Hilt here See the screenshot of that document below. The problem is in the code of the documentation. It's really frustrating that they don't update their official documentation! Let it be... We have to change FROM @HiltViewModel class MyViewModel @Inject constructor( private val mainRepository: MainRepository ) : ViewModel() { TO @HiltViewModel class MyViewModel @Inject constructor( private val mainRepository: MainRepository ) : ViewModel(), LifecycleObserver { They have missed LifecycleObserver, and because of that, I was facing errors. Not only this: you should also note that the following dependencies' versions should be the same. in App Level Gradle: implementation "com.google.dagger:hilt-android:2.35.1" kapt "com.google.dagger:hilt-android-compiler:2.35.1" in Project Level Gradle: classpath "com.google.dagger:hilt-android-gradle-plugin:2.35.1" A: The dependency versions at the project level and module level must be the same.
doc_82
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell. Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented. Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly). I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead. EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions). Sample usage: $ cmdlog.py "python3 test_script.py > test_file.txt" creates a log file in the parent directory of the output file with the following: 2015-10-12@10:47:09 test_file.txt "python3 test_script.py > test_file.txt" Additional file changes are added to the log; $ cmdlog.py "python3 test_script.py > test_file_2.txt" the log now contains 2015-10-12@10:47:09 test_file.txt "python3 test_script.py > test_file.txt" 2015-10-12@10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt" Running on the original file name again changes the file order in the log, based on modification time of the files: $ cmdlog.py "python3 test_script.py > test_file.txt" produces 2015-10-12@10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt" 2015-10-12@10:48:01 test_file.txt "python3 test_script.py > test_file.txt" Full script: #!/usr/bin/env python3 ''' A wrapper script that will write the command-line args associated with any files generated to a log file in the directory where the files were made. ''' import sys import os from os import listdir from os.path import isfile, join import subprocess import time from datetime import datetime def listFiles(mypath): """ Return relative paths of all files in mypath """ return [join(mypath, f) for f in listdir(mypath) if isfile(join(mypath, f))] def read_log(log_file): """ Reads a file history log and returns a dictionary of {filename: command} entries. 
Expects tab-separated lines of [time, filename, command] """ entries = {} with open(log_file) as log: for l in log: l = l.strip() mod, name, cmd = l.split("\t") # cmd = cmd.lstrip("\"").rstrip("\"") entries[name] = [cmd, mod] return entries def time_sort(t, fmt): """ Turn a strftime-formatted string into a tuple of time info """ parsed = datetime.strptime(t, fmt) return parsed ARGS = sys.argv[1] ARG_LIST = ARGS.split() # Guess where logfile should be put if (">" or ">>") in ARG_LIST: # Get position after redirect in arg list redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in ">>") output = ARG_LIST[redirect_index + 1] output = os.path.abspath(output) out_dir = os.path.dirname(output) elif ("cp" or "mv") in ARG_LIST: output = ARG_LIST[-1] out_dir = os.path.dirname(output) else: out_dir = os.getcwd() # Set logfile location within the inferred output directory LOGFILE = out_dir + "/cmdlog_history.log" # Get file list state prior to running all_files = listFiles(out_dir) pre_stats = [os.path.getmtime(f) for f in all_files] # Run the desired external commands subprocess.call(ARGS, shell=True) # Get done time of external commands TIME_FMT = "%Y-%m-%d@%H:%M:%S" log_time = time.strftime(TIME_FMT) # Get existing entries from logfile, if present if LOGFILE in all_files: logged = read_log(LOGFILE) else: logged = {} # Get file list state after run is complete post_stats = [os.path.getmtime(f) for f in all_files] post_files = listFiles(out_dir) # Find files whose states have changed since the external command changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]] new = [e for e in post_files if e not in all_files] all_modded = list(set(changed + new)) if not all_modded: # exit early, no need to log sys.exit(0) # Replace files that have changed, add those that are new for f in all_modded: name = os.path.basename(f) logged[name] = [ARGS, log_time] # Write changed files to logfile with open(LOGFILE, 'w') as log: for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)): cmd, mod_time = info if not cmd.startswith("\""): cmd = "\"{}\"".format(cmd) log.write("\t".join([mod_time, name, cmd]) + "\n") sys.exit(0) A: You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell: echo '<command line to be logged and executed>' | \ tee --append /path/to/your/logfile | \ $SHELL i.e., for your example of other_script other_arg1 other_arg2 > file, echo 'other_script other_arg1 other_arg2 > file' | \ tee --append /tmp/mylog.log | \ $SHELL If your command line needs single quotes, they need to be escaped properly. A: OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. 
I came up with this script: import sys from os.path import expanduser, join from subprocess import Popen, PIPE def issue_command(command): process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True) return process.communicate() home = expanduser("~") log_file = join(home, "command_log") command = sys.argv[1:] with open(log_file, "a") as fout: fout.write("{}\n".format(" ".join(command))) out, err = issue_command(command) which you can call like (if you name it log_this and make it executable): $ log_this echo hello world and it will put "echo hello world" in a file ~/command_log, note though that if you want to use pipes or redirection you have to quote your command (this may be a real downfall for your use case or it may not be, but I haven't figured out how to do this just yet without the quotes) like this: $ log_this "echo hello world | grep h >> /tmp/hello_world" but since it's not perfect, I thought I would add a little something extra. The following script allows you to specify a different file to log your commands to as well as record the execution time of the command: #!/usr/bin/env python from subprocess import Popen, PIPE import argparse from os.path import expanduser, join from time import time def issue_command(command): process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True) return process.communicate() home = expanduser("~") default_file = join(home, "command_log") parser = argparse.ArgumentParser() parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file) parser.add_argument("-p", "--profile", action="store_true") parser.add_argument("command", nargs=argparse.REMAINDER) args = parser.parse_args() if args.profile: start = time() out, err = issue_command(args.command) runtime = time() - start entry = "{}\t{}\n".format(" ".join(args.command), runtime) args.file.write(entry) else: out, err = issue_command(args.command) entry = "{}\n".format(" ".join(args.command)) args.file.write(entry) args.file.close() You would use this the same way as the other script, but if you wanted to specify a different file to log to just pass -f <FILENAME> before your actual command and your log will go there, and if you wanted to record the execution time just provide the -p (for profile) before your actual command like so: $ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world" I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.
doc_83
Can anyone please tell me what attribute will be populated for an Active Directory user if they are assigned a mailbox. Thanks A: If an AD user is assigned a mailbox, then the attribute homeMDB will be populated. (A snapshot illustrating this was attached to the original answer.) You can also read an article making the same point at Check if a user has an Exchange mailbox Update Based on my research, I have discovered that the following are the attributes that let you know whether a user has been assigned a mailbox or not. * *homeMDB *homeMTA *legacyExchangeDN *mail *mailNickname *msExchHomeServerName *msExchMailboxGuid *msExchMailboxSecurityDescriptor *proxyAddresses
doc_84
In that, I have in a subfolder somedir/modules/MyModule a CMakeLists.txt which should add some test executables. cmake wants to put them in some subdirectory binary folder, but I want to place them in the common binary folder under ${CMAKE_BINARY_DIR}/x64 So what I'm doing is this (in the CMakeLists.txt in the somedir/modules/MyModules directory): ADD_EXECUTABLE(MyTest MyTest.cpp) set_target_properties(MyTest PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/x64") TARGET_LINK_LIBRARIES(MyTest SomeLibraries...) ADD_TEST(MyTest ${CMAKE_BINARY_DIR}/x64/MyTest) Under Linux this works nicely, but under Windows I simply cannot get it to build into the ${CMAKE_BINARY_DIR}/x64 folder. I've checked via MESSAGE, the ${CMAKE_BINARY_DIR}/x64 does point to the right folder. I also tried changing the CMAKE_RUNTIME_OUTPUT_DIRECTORY (or even the per-target variables, e.g. CMAKE_MyTest_OUTPUT_DIRECTORY, MyTest_OUTPUT_DIRECTORY_Release, MyTest_OUTPUT_DIRECTORY_Debug, as mentioned here: https://stackoverflow.com/a/25328001/671366). Tested both before or after ADD_EXECUTABLE, doesn't change anything. The output directory stays fixed on somedir/modules/x64/. I'm out of ideas what I need to do, or even where the output directory it insists on using is coming from. Any ideas? At which point in time is the output directory decided in cmake? How does this relate to subdirectories? The executables specified in the parent folder CMakeLists.txt files get built in the desired directory, but if that is by mere chance I can't really say. A: Config-specific property RUNTIME_OUTPUT_DIRECTORY_<CONFIG> has priority over common one RUNTIME_OUTPUT_DIRECTORY. Both types of properties are initialized from corresponded CMAKE_* variable(if it is set) when executable target is created. So, having e.g CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG config-specific variable being set makes this variable to be used for Debug configuration even if RUNTIME_OUTPUT_DIRECTORY property is explicitely set. The only way to redefine output directory in that case is to set RUNTIME_OUTPUT_DIRECTORY_DEBUG config-specific property.
doc_85
First error is on line 61 and is related to the format of the 'find' command. The command is this: FILES_TO_FORMAT := $(shell find . -not -path './littlefs/*' \( -name '*.c' -o -name '*.cpp' \)) The error is this: "FIND: Parameter format not correct" Second error is on line 66: # clang doesn't seem to handle -D "ARG=\"foo bar\"" correctly, so replace spaces with \x20: BUILD_CONFIG_STR := $(shell echo $(CPPFLAGS) | sed 's- -\\\\x20-g') Error says: 'sed' is not recognized as an internal or external command, operable program or batch file.
doc_86
What I want to achieve: Find the city where my tv ads have the best performance (high volume of signups at a lower cost). What I am doing: In order to see the best combination of the metrics, I attributed a score to each metric column. Then, I sum all the scores (column M). The highest score is the city with the best performance. What I would like to do: Automate the score columns. Is there a way to make those scores with formulas or code? (the score goes from 1 to [number of lines]). Also, I'd love to know if there's a better way to make this correlation. A: If it's a regular table, you can use RANK(). So, add a column like "rank" to your table, and put this into the first row under "rank": =RANK([@[Total Score]];[Total Score];0) It basically takes the total score of the current row and checks its ranking in the whole "Total Score" column. The zero makes the ranking descending, so the highest value gets rank 1. Have fun! ^_^
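The same ranking semantics are easy to prototype outside Excel; here is a small pandas sketch with made-up scores (method="min" mirrors how RANK handles ties):

import pandas as pd

total_score = pd.Series([7, 3, 9, 3], index=["city_a", "city_b", "city_c", "city_d"])
# ascending=False ranks the highest score as 1, like RANK(...; 0) in the formula above
rank = total_score.rank(ascending=False, method="min").astype(int)
print(rank.sort_values())  # city_c -> 1, city_a -> 2, city_b/city_d -> 3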
doc_87
Rewrite the following C code in a more readable style: while (*dst++ = *src++); I want to check that I am understanding this line correctly. By assigning *src to *dst, it is checking if *dst (and, by extension, *src) is a non-zero (or zero) value. If non-zero, the while body is executed, otherwise it is skipped. My answer, based on the above understanding: *dst = *src; while (*dst) { [do stuff] *dst++; *src++; *dst = *src; } I realize that the position of the post-increment for both variables is important for a real program, but I don't think it matters here. A: Yes, this is exactly what the one-line code does (chars are copied along with the null terminator). But in your code *dst++; *src++; can be replaced by dst++; src++; You have dereferenced it unnecessarily. To explain a bit further: the value assigned is the value of the assignment expression, which is checked in the while condition; when it turns out to be \0 the loop terminates, but the \0 is copied into dst as well. The implementation you wrote will also do one check and one assignment even if it is an empty string, just like the original one. And yes, in your case the post-increment or pre-increment doesn't matter, but in the original implementation it does. By the last line I meant: ++dst; ++src; will be the same as dst++; src++; in your case. But in the original implementation, if you had used *++dst = *++src, this would be wrong and meaningless for the purpose of copying. Think of the case when it is an empty string: you will access an array index out of bounds. while(*dst++ = *src++); This is readable; in fact, for the second version you wrote I have to think twice about the edge cases, while here it is clean, simple logic. Less code, less confusion. Readability doesn't mean more code; it is the clean code which is more readable. A: As you may know, the original loop is a common C idiom for copying a zero-terminated string. Any C programmer should recognize it at a glance. So in that sense it is already quite readable. The revised version is harder to understand, with the repeated assignment outside and inside the loop. If you want to write it out step by step in the most detailed manner possible, while also keeping exactly the same behavior as the original, I suggest this, assuming that src and dst are char* pointers. // This extra pair of curly braces keeps the `char c` local to this code { char c; // Change this if the type is different do { c = *src; *dst = c; src = src + 1; dst = dst + 1; } while( c ); } BTW one problem with the original is that many modern C compilers will issue a warning on the assignment, the idea being that it may be a typo for an intended == comparison. You can usually suppress this warning with an extra set of parentheses: while(( *dst++ = *src++ )) ; A: *dst++; *src++; is the same as *dst; <<dereferencing for no use. dst++; *src; <<dereferencing for no use. src++; A: By assigning *src to *dst, it is checking if *dst (and, by extension, *src) is a non-zero (or zero) value. If non-zero, the while body is executed, otherwise it is skipped. That's correct as far as it goes, but as far as the purpose of such code goes, it misses the point. A high-level description of the function of the code would be: copy successive elements of the array pointed into by src to successive elements of the array pointed into by dst until an element with value 0 has been copied. In particular, if src and dst are pointers to char then this is a possible implementation of the strcpy() function.
But your characterization does not describe all the effects of the code, which include incrementing the src and dst pointers to one position past the last element copied. Your version does not match this. Moreover, your version is a bit odd in how it performs the pointer increments, in that after it performs the increments, it pointlessly dereferences the new pointer values and ignores the results. That's not good style. There are a lot of alternatives that I would consider more readable, but your instructor may be looking for some particular characteristics. I expect first of all that they would want a version that does not use an assignment expression in boolean context. I imagine that they would also want to see the pointer dereferences and pointer increments performed in separate expressions. Here's a version that has those characteristics, that produces all the side effects of the original code, and that minimizes code duplication: do { *dst = *src; dst++; src++; } while (*(src - 1)); A: Your understanding of what the loop does is correct, but not your understanding of the post-increment: the post-increment has higher precedence than the *, so in the original while loop the post-increment increments the pointers (returning the original values), then dereferences those pointers, assigning the original *src to the original *dst. I suggest doing it as a for loop (although the original is already highly idiomatic): for( ; *dst = *src; ++dst, ++src) { }

CodeRAGStackoverflowPosts

An MTEB dataset
Massive Text Embedding Benchmark

Evaluation of StackOverflow post retrieval using CodeRAG-Bench. Tests the ability to retrieve relevant StackOverflow posts given code-related queries.

Task category t2t
Domains Programming
Reference https://arxiv.org/pdf/2406.14497

Source datasets:

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

task = mteb.get_task("CodeRAGStackoverflowPosts")
evaluator = mteb.MTEB([task])

model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on mteb task check out the GitHub repository.
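As a concrete illustration, any embedding model known to mteb can be plugged into the snippet above; the checkpoint name below is only an example, not a recommendation:

import mteb

task = mteb.get_task("CodeRAGStackoverflowPosts")
evaluator = mteb.MTEB([task])

# any model registered with mteb works here; this checkpoint is just an example
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
evaluator.run(model, output_folder="results")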

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as a part of the MMTEB Contribution.


@misc{wang2024coderagbenchretrievalaugmentcode,
  archiveprefix = {arXiv},
  author = {Zora Zhiruo Wang and Akari Asai and Xinyan Velocity Yu and Frank F. Xu and Yiqing Xie and Graham Neubig and Daniel Fried},
  eprint = {2406.14497},
  primaryclass = {cs.SE},
  title = {CodeRAG-Bench: Can Retrieval Augment Code Generation?},
  url = {https://arxiv.org/abs/2406.14497},
  year = {2024},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022}
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following code contains the descriptive statistics from the task. These can also be obtained using:

import mteb

task = mteb.get_task("CodeRAGStackoverflowPosts")

desc_stats = task.metadata.descriptive_stats
{
    "train": {
        "num_samples": 47078036,
        "number_of_characters": 57728116335,
        "documents_text_statistics": {
            "total_text_length": 52714842616,
            "min_text_length": 0,
            "average_text_length": 2239.466515383097,
            "max_text_length": 267046,
            "unique_texts": 23368871
        },
        "documents_image_statistics": null,
        "queries_text_statistics": {
            "total_text_length": 5013273719,
            "min_text_length": 3,
            "average_text_length": 212.97718192832002,
            "max_text_length": 29007,
            "unique_texts": 23537082
        },
        "queries_image_statistics": null,
        "relevant_docs_statistics": {
            "num_relevant_docs": 23539018,
            "min_relevant_docs_per_query": 1,
            "average_relevant_docs_per_query": 1.0,
            "max_relevant_docs_per_query": 1,
            "unique_relevant_docs": 23539018
        },
        "top_ranked_statistics": null
    }
}
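Since task.metadata.descriptive_stats is a plain nested dict (the structure shown above), individual figures can be pulled out directly, e.g.:

print(desc_stats["train"]["num_samples"])  # 47078036
print(desc_stats["train"]["queries_text_statistics"]["average_text_length"])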

This dataset card was automatically generated using MTEB
