Q: I want to remove the columns after colspan in DataTable How to use colspan in a DataTable? I want the colspan in the uraian column, with the condition that if the column after it is empty, then: // Add COLSPAN attribute $('td:eq(3)', row).attr('colspan', 6); and I want to remove the columns after it. This is the result I tried: https://i.stack.imgur.com/rzKzd.png This is my code: table = $('#dataTable').DataTable( { "lengthMenu": [[5, 10, 25, 50, -1], [5, 10, 25, 50, "All"]], "destroy": true, "paging": true, "sorting": true, "responsive": true, "ajax": { "type": "POST", "url": "modul/mod_instrumen/elemen_action.php", "data": {jenis:jenis}, "timeout": 120000, "dataSrc": function (json) { if(json !== null){ return json } else { return ""; } } }, columns: [ { "name": "No", "title": "No", "data": null, render: function (data, type, row, meta) { return meta.row + meta.settings._iDisplayStart + 1; } }, { "data": null, "name": "#", "title": "#", "width": "120px", "render": function (data, row, type, meta) { return `<button id="`+data.id+`" class="btn btn-warning btn-sm edit_data" title="Update Data"><i class="fa-solid fa-square-pen"></i></button> <button id="`+data.id+`" class="btn btn-primary btn-sm view_data" title="Lihat Data"> <i class="fa fa-search"></i></button> <button id="`+data.id+`" class="btn btn-danger btn-sm hapus_data" title="Hapus Data"><i class="fa-solid fa-trash-can"></i></button> `; } }, {data: 'kode', name: 'kode', title: 'Kode'}, {data: 'uraian', name: 'uraian', title: 'Uraian'}, {data: 'fakta_dan_analisis', name: 'fakta_dan_analisis', title: 'Fakta dan Analisis'}, {data: 'regulasi', name: 'regulasi', title: 'Regulasi'}, {data: 'observasi', name: 'observasi', title: 'Observasi'}, {data: 'wawancara', name: 'wawancara', title: 'Wawancara'}, {data: 'simulasi', name: 'simulasi', title: 'Simulasi'}, {data: 'dokbuk1', name: 'dokbuk1', title: 'Dokumen Bukti'} ], createdRow: function(row, data, dataIndex){ let text = data.fakta_dan_analisis; //let panjang = text.length; if(text == ''){ // Add COLSPAN attribute $('td:eq(3)', row).attr('colspan', 6); // Hide required number of columns // next to the cell with COLSPAN attribute $('td:eq(4)', row).css('display', 'none'); $('td:eq(5)', row).css('display', 'none'); $('td:eq(6)', row).css('display', 'none'); $('td:eq(7)', row).css('display', 'none'); $('td:eq(8)', row).css('display', 'none'); // Update cell data //this.api().cell($('td:eq(3)', row)).data(data.uraian); } }, });
{ "language": "en", "url": "https://stackoverflow.com/questions/75640040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Changing array in function in C++ I am learning C++ and am confused why changes made inside a function get reflected in main. Can anyone explain how to change the array in the function without the change being reflected in the main module? Thanks for the help. #include <iostream> using namespace std; void function1(int x[],int n){ for(int i=0;i<n;i++){ x[i]=0; } } int main(){ int n; cout<<"Enter the number of terms :"; cin>>n; int a[n]; for(int i=0;i<n;i++){ cin >> a[i]; } function1(a,n); //after calling a function //printing an array declared in main for(int i=0;i<n;i++){ cout<<a[i]<<endl; } return 0; } I wrote this code to check how the data changes in main and in the function. I thought the data entered into the array before calling the function wouldn't be changed after the call. However, the changes made to the data inside the function were reflected in the main module.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Fill in using unique values in pandas groupby df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 2, 2, 3, np.nan, 3], 'name': ['A','A', 'B','B','B','B', 'C','C','C']}) name value 0 A 1 1 A NaN 2 B NaN 3 B 2 4 B 2 5 B 2 6 C 3 7 C NaN 8 C 3 In the dataframe above, I want to use groupby to fill in the missing values for each group (based on the name column) using the unique value for each group. It is guaranteed that each group will have a single unique value apart from NaNs. How do I do that?
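A minimal sketch of one way to do this with groupby, assuming (as stated) that every name group contains exactly one distinct non-NaN value — 'first' skips NaNs, and transform broadcasts the group value back to every row:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 2, 2, 3, np.nan, 3],
                   'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C']})

# 'first' ignores NaN, so each group yields its single non-NaN value,
# which transform aligns back to the original index for filling.
df['value'] = df['value'].fillna(df.groupby('name')['value'].transform('first'))
print(df)
```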
{ "language": "en", "url": "https://stackoverflow.com/questions/75640043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change the number of digits of p value on a ggsurvfit() graph Using ggsurvfit(), I generated this graph with the following code: library(ggsurvfit) survfit2(Surv(time, status) ~ sex, df_colon) %>% ggsurvfit() + add_pvalue("annotation", size = 12) I would like the p value has 2 digits like 0.37. In the document of add_pvalue() function, there is an argument pvalue_fun = format_p. I just do not know how to specify format_p here. Any suggestions would be appreciated. A: From the docs here and here you can try this: library(ggsurvfit) survfit2(Surv(time, status) ~ sex, df_colon) %>% ggsurvfit() + add_pvalue("annotation", size = 12, pvalue_fun = \(x) format_p(x, digits = 3)) A: using format() in add_pvalue(): survfit2(Surv(time, status) ~ sex, df_colon) %>% ggsurvfit() + add_pvalue("annotation", size = 12 ,pvalue_fun = \(pvalue) format(pvalue, digits=2))
{ "language": "en", "url": "https://stackoverflow.com/questions/75640045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to use a fixed number of multi-processes in python? I want to analyze the results of different benchmark evaluations. I have many benchmarks, and when running on the server, I want to evaluate 10 in parallel at a time. In my python script, there is a function that does the evaluation. def load_single_stat(benchmark,prefetcher, retry=False) But every time the function is called, the execution will continue only if the function returns. I can write a shell script to run ten python scripts. for((i=0;i<${#PREFETCH_METHODS[@]};i++)) do for ((j=1; j<=$BENCHMARK_NUM; j++)) do sleep 2 array=($(ps -aux | grep -o ${PREFETCH_METHODS[i]})) echo ${#array[@]} while [ ${#array[@]} -ge 10 ] do sleep 60 array=($(ps -aux | grep -o ${PREFETCH_METHODS[i]})) done cmd="python my_script.py ${PREFETCH_METHODS[i]} $BENCHMARK_NUM " $cmd & done done Can the above work be done in a python script? I can use multiple processes to run functions, but I can't control the number of them running (the server has other users, and I don't want to take up all the resources). How can I do it more efficiently? Thanks
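A hedged sketch of how this is usually done in pure Python with multiprocessing.Pool, which caps the number of worker processes at 10 regardless of how many jobs are queued. The import of load_single_stat from my_script and the two lists are assumptions standing in for the real benchmark/prefetcher names:

```python
import itertools
from multiprocessing import Pool

# Hypothetical import: assumes load_single_stat lives in my_script.py
from my_script import load_single_stat

PREFETCH_METHODS = ["bingo", "spp", "no"]   # placeholder values
BENCHMARKS = ["gcc", "mcf", "lbm"]          # placeholder values

if __name__ == "__main__":
    jobs = list(itertools.product(BENCHMARKS, PREFETCH_METHODS))
    # A pool of 10 worker processes: at most 10 evaluations run at once,
    # the remaining jobs wait until a worker becomes free.
    with Pool(processes=10) as pool:
        results = pool.starmap(load_single_stat, jobs)
    print(results)
```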
{ "language": "en", "url": "https://stackoverflow.com/questions/75640046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to pass structures to functions without using pointers? Consider the following program: #include <stdio.h> #include <assert.h> #include <stdlib.h> #include <string.h> struct Person { char *name; int age; int height; int weight; }; // new struct Person* Person_create(char *name, int age, int height, int weight) { struct Person *who = malloc(sizeof(struct Person)); assert(who != NULL); who->name = strdup(name); who->age = age; who->height = height; who->weight = weight; return who; } // Delete void Person_destroy(struct Person* who) { assert(who != NULL); free(who->name); free(who); } // Print void Person_print(struct Person* who) { printf("Name : %s\n",who->name); printf("\tAge: %d\n",who->age); printf("\theight: %d\n",who->height); printf("\tweight: %d\n",who->weight); } int main(int argc, char* argv[]) { // make two people structures struct Person* joe = Person_create( "Joe Alex", 32, 64, 140); struct Person* frank = Person_create( "Frank Blank", 20, 72, 180); // print them out and where they are in memory printf("Joe is at memory location %p:\n",joe); Person_print(joe); printf("Frank is at memory location %p:\n",frank); Person_print(frank); // make everyone age 20 years and print them again joe->age += 20; joe->height -=2; joe->weight += 40; Person_print(joe); frank->age += 20; frank->weight += 20; Person_print(frank); // destory them both so we clean up Person_destroy(joe); Person_destroy(frank); // try to transfer NULL to Person_destroy // Person_destroy(NULL); return 0; } The above program is correct, but I would like to know how to complete this program without using pointers and malloc function: 1. I want to create the structure on the stack. 2. I don't want to use pointers to pass the structure to other functions. For the first problem, creating structures on the stack, I think we can use the alloca function. For the second question, I wonder if it is possible to use reference passing to accomplish this thing.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do I need to manually disconnect the sender and the response function after the sender being released? In Qt programming, this is a simple tcp server new connection coming code, void TcpServerTest::onSeverReadyRead(int clientId) { qDebug() << clientId; qDebug() << clientSocket[clientId]->readAll(); } void TcpServerTest::onNewConnection(/*QTcpSocket* socket*/) { QTcpSocket* serverRecordClient = server->nextPendingConnection(); static int iClientIdGenerator = 1000; int iClientId = iClientIdGenerator++; clientSocket.insert(iClientId, serverRecordClient); connect(serverRecordClient, &QTcpSocket::readyRead, [this, iClientId](){ onSeverReadyRead(iClientId); }); } When a new connection is coming, onNewConnection will be triggered and we bind a network data process function onSeverReadyRead(iClientId) with this client -- serverRecordClient. I think when the client disconnect from the server, do I need to explicitly disconnect the binding too? disconnect(serverRecordClient, &QTcpSocket::readyRead)
{ "language": "en", "url": "https://stackoverflow.com/questions/75640048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CS50 readability - calculation is off I am doing CS50 Readability problem set. I am trying to count the words in a given passage. When I code the calculation within main I get the correct answer: #include <cs50.h> #include <stdio.h> #include <ctype.h> #include <string.h> #include <stdlib.h> #include <math.h> int letters; int main(void)// { // user prompted for input string input = get_string("Input: "); //prompt user to input text for (int i=0; i < strlen(input); i++) { if (isalpha(input[i])) { letters++; } } printf("letters %i\n", letters); } But when I do the calculation outside of main it returns 0. I cannot see why this is happening #include <cs50.h> #include <stdio.h> #include <ctype.h> #include <string.h> #include <stdlib.h> #include <math.h> int count_letters(string input); int letters; int main(void)// { // user prompted for input string input = get_string("Input: "); //prompt user to input text printf("Letters: %i\n", letters); } int count_letters(string input) { letters = 0; for (int i = 0; i < strlen(input); i++) { if (isalpha(input[i])) { letters++; } } return letters; } A: You didn't not call your count_letter function in the second program in the main. So that the letters variable is not calculated. You just defined your function on your top of the program. Make sure to call your function int main(void)// { // user prompted for input string input = get_string("Input: "); //prompt user to input text // Called your function here printf("Letters: %i\n", count_letters(input)); } A: Your second program doesn't actually call the function. Add: letters = count_letters(input);
{ "language": "en", "url": "https://stackoverflow.com/questions/75640049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: closure #2 thunk for @escaping @callee_guaranteed () -> () I get this in my Crashlytics report closure #2 in my function and next line is thunk for @escaping @callee_guaranteed () -> () () This is my function : func myFunction() { var isLogin = false let group = DispatchGroup() group.enter() let greetUser = { (name: String) in if name != nil || name.trimmingCharacters(in: .whitespacesAndNewlines).count > 0 { isLogin = true group.leave() } else { isLogin = false group.leave() } } greetUser("reportMe") group.notify(queue: .main, work: DispatchWorkItem(block: { [weak self] in Crashlytics.crashlytics().setCustomValue(Date(), forKey: "Date") Crashlytics.crashlytics().setCustomValue("https://example.com/", forKey: "URL") Crashlytics.crashlytics().setCustomValue("wah ada error nih", forKey: "error") Crashlytics.crashlytics().setCustomValue(true, forKey: "is_login") Crashlytics.crashlytics().setCustomValue("10.10.10.101", forKey: "ip_address") Crashlytics.crashlytics().setUserID("123456") let error = NSError(domain: "https://example.com/", code: 101, userInfo: nil) Crashlytics.crashlytics().log("Report Me") Crashlytics.crashlytics().record(error: error) self?.showToast(message: "Crash Reported for Firebase Crashlytics to test whatever it received or not in Crashlytics", font: .systemFont(ofSize: 12.0)) })) } How to repair code in my function so closure #2 thunk for @escaping @callee_guaranteed () -> () not appear again in my crashlytics report?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How can I configure specific domain request to local proxy in axios? I have an axios (a nodejs module) requesting a specific domain via https. However, I can not access the domain directly on this server. So, I use gost to set up a tunnel, connecting to another server that can visit the specific domain. Gost also enables me to listen a local port, and handle the tunnel automatically. Now I have a local proxy with port 8123 that can visit the specific domain. I use the following command to double check that the proxy works: curl --proxy localhost:8123 'https://api.myip.com/' Now, the question is: How I can let the axios module to visit the specific domain via my local proxy? I can not modify the code of axios module. I didn't try the env varibles because I don't want all https requests to go through the proxy. export https_proxy=http://localhost:8123/ I only want the specific domain to go through this proxy.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why the following two Julia codes gave different answers? Version 1: A=[1 2;3 4] for ii in 1:3 B=A B[:,:]=B[:,:].+1 end display(A) Version 2: A=[1 2;3 4] for ii in 1:3 B=A B=B.+1 end display(A) Version 1 gives 2×2 Array{Int64,2}: 4 5 6 7 Version 2 gives 2×2 Array{Int64,2}: 1 2 3 4 I think it has to do with reference and copy. In which step is a copy created in version 2? Thank you if anyone could answer. I expect version 1 and 2 should give identical answers. I looked up the reference manual but still don't understand when a copy is created in version 2. A: B.+1 creates a copy in both instances. Call this copy X In version 1, B[:,:]= mutates in-place the data associated with B to be equal to X. It just so happens that the data associated with B is exactly the data associated with A, which is why you see A change. Version 2 reassigns the variable B itself to instead have its associated data be X rather than A. This is why A does not change. If you want to avoid creating a copy altogether, you can write this as A .+= 1
{ "language": "en", "url": "https://stackoverflow.com/questions/75640053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can MongoDB make sure that an ObjectID is unique all the time? When adding a document to a collection in a MongoDB database, MongoDB will create a long code as a unique Object ID. But is it unique only for this collection or for all the collections on this planet created by people all over the world all the time until the end of time? I am just curious to know how unique this kind of ID could potentially be. A: ObjectID is a 96-bit number which is composed as follows: a 4-byte timestamp value representing the seconds since the Unix epoch (which will not run out of seconds until the year 2106) a 5-byte random value, and a 3-byte incrementing counter, starting with a random value. Therefore, it is practically impossible for two records to have the same ID, even if they are created at the same second: the timestamp, the random value and the counter would all have to coincide.
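To illustrate the layout described in the answer, here is a small standard-library sketch that decodes the three fields from a 24-character hex ObjectID string (the ID below is a made-up example):

```python
from datetime import datetime, timezone

oid = "507f1f77bcf86cd799439011"  # made-up example ObjectID (24 hex chars = 12 bytes)

timestamp = int(oid[0:8], 16)     # first 4 bytes: seconds since the Unix epoch
random_part = int(oid[8:18], 16)  # next 5 bytes: per-process random value
counter = int(oid[18:24], 16)     # last 3 bytes: incrementing counter

print(datetime.fromtimestamp(timestamp, tz=timezone.utc))
print(hex(random_part), counter)
```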
{ "language": "en", "url": "https://stackoverflow.com/questions/75640055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to code out an IOS app to scan and take out relevant information from a document? I am thinking of creating an IOS app that can scan a receipt and take out and save relevant information from the receipt such as product name and purchased date. Are there any useful codes that can help with this? I know the code for scanning the document but I am not sure about how to get and save the relevant information from it.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Local environment .htaccess php not redirecting Am creating a MVC framework and I have a public folder with the HTACCESS file. I want to be able to type for example "localhost/mvc/public/adjoasdjaos" and it should redirect me to the index.php page like "localhost/mvc/public/" I have made several attempts which I commented out and it hasn't been working. <IfModule mod_rewrite.c> DirectoryIndex index.php # turn on rewriting RewriteEngine on # check that the request isn't actually a real file (e.g. an image) RewriteCond %{REQUEST_FILENAME}\.php !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([A-Za-z0-9-]+)?$ index.php?url=$1 [L] # RewriteRule ^(.*)$ /public/index.php [L] # redirect requests for BLAH to /edit.php?item=BLAH # RewriteRule ^(.*)$ /public/index.php?url=$1 [L] </IfModule> A: It seems like you want to redirect any request that is not a file or directory to the index.php file with the requested URL as a parameter. To achieve this, you can modify your .htaccess file as follows: <IfModule mod_rewrite.c> DirectoryIndex index.php # turn on rewriting RewriteEngine on # check that the request isn't actually a real file (e.g. an image) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d # redirect all requests to index.php with URL parameter RewriteRule ^(.*)$ index.php?url=$1 [QSA,L] </IfModule> Here, we are using the QSA flag to append any existing query string to the URL parameter. The L flag indicates that this is the last rule to be processed, so no further rules will be applied. With this configuration, any request that is not a file or directory will be redirected to index.php with the requested URL as the url parameter. For example, localhost/mvc/public/adjoasdjaos will be redirected to localhost/mvc/public/index.php?url=adjoasdjaos.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to keep user logged in wordpress, through unity? I've a wordpress-based website, with the default register/login forms. I also have a project I'm working on, a game in unity, which has a login form for things like sending the user's highscore and backup their save data. Issue is, the user is required to insert name and password every time they want to do so currently, and that can be annoying for the user. I know there's no good way to store the password locally, and that ideally the game would simply store something like a token to use each time it needs to communicate with the server, and the server would store that + username + IP to ensure that the person doing an action IS who they say they are, but I don't want to make an entire new table in my database(since its a webhost with a very limited 1GB of storage for the MySQL) just for that. What could I do?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is my second graphics method not being shown in the output? I'm writing code in java to rotate an image around a circle (to create a flower), and I need to print out 6 flowers. When running the first method - redFlowers(g) - it prints out just fine. However, when I try to run the first method after the second - which is orangeFlowers(g) - the second method doesn't get printed out at all. I'm using g2d.rotate() and g2d.translate. Does anyone know why this is happenning? Here is my code: `` public void redFlowers(Graphics g) { g.setColor(Color.RED); g.drawOval(80, 330, 65, 63); g.fillOval(80, 330, 65, 63); g.setColor(c); ((Graphics2D) g).setStroke(new BasicStroke(4)); g.drawLine(112, 395, 112, 575); Image im = new ImageIcon("flower1.jpg").getImage(); // g.drawImage(im, 61, 300, 40, 50, null); Graphics2D g2d = (Graphics2D) g; g2d.translate(61, 300); g2d.rotate(-3.14159 / 2); // top left g2d.drawImage(im, -92, -50, 46, 70, this); g2d.rotate(3.14159 / 3.5); // middle left g2d.drawImage(im, -20, -20, 46, 70, this); g2d.rotate(-3.14159 / 1); // bottom right g2d.drawImage(im, -22, -181, 46, 70, this); } public void orangeFlowers(Graphics g) { g.setColor(Color.ORANGE); g.drawOval(280, 330, 65, 63); g.fillOval(280, 330, 65, 63); g.setColor(c); ((Graphics2D) g).setStroke(new BasicStroke(4)); g.drawLine(312, 395, 312, 575); } ``
{ "language": "en", "url": "https://stackoverflow.com/questions/75640063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to create a currency converter app in React Native? I want to implement multi-currency support. For example, when the user selects rupees, the whole app should convert to that currency. How can I achieve this? I don't know how to add the logic. Can anyone explain it with an example in React Native? A: You can find a good example of a currency converter here on GitHub
{ "language": "en", "url": "https://stackoverflow.com/questions/75640067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: AttributeError: 'ArtistList' object has no attribute 'pop' >>> import jsbgym >>> import gymnasium as gym >>> env = gym.make("JSBSim-HeadingControlTask-Cessna172P-Shaping.STANDARD", render_mode="human") >>> env.reset() JSBSim Flight Dynamics Model v1.1.13 [GitHub build 986/commit a09715f01b9e568ce75ca2635ba0a78ce57f7cdd] Dec 3 2022 12:36:17 [JSBSim-ML v2.0] JSBSim startup beginning ... (array([ 5.00000000e+03, 1.21430643e-17, 1.50920942e-16, 2.02536000e+02, 4.44089210e-15, -5.32907052e-15, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -3.72529030e-09, 1.25629209e-15, 0.00000000e+00, 2.99000000e+02]), {}) >>> env.render() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\noahs\AppData\Local\Programs\Python\Python311\Lib\site-packages\gymnasium\wrappers\order_enforcing.py", line 52, in render return self.env.render(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\noahs\AppData\Local\Programs\Python\Python311\Lib\site-packages\gymnasium\wrappers\env_checker.py", line 53, in render return env_render_passive_checker(self.env, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\noahs\AppData\Local\Programs\Python\Python311\Lib\site-packages\gymnasium\utils\passive_env_checker.py", line 384, in env_render_passive_checker result = env.render() ^^^^^^^^^^^^ File "C:\Users\noahs\Coding\AI\RL\JSBGym\jsbgym\environment.py", line 161, in render self.figure_visualiser.plot(self.sim) File "C:\Users\noahs\Coding\AI\RL\JSBGym\jsbgym\visualiser.py", line 64, in plot data = subplot.lines.pop() ^^^^^^^^^^^^^^^^^ AttributeError: 'ArtistList' object has no attribute 'pop' I tried rendering this environment with human https://github.com/sryu1/jsbgym but I keep getting this error. I understand that I can't use pop() with ArtistList but I don't know what is using ArtistList, could someone check my repo and see if there's any solution to that? Thanks :)
{ "language": "en", "url": "https://stackoverflow.com/questions/75640068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: What is the difference between the two formulae? #include <stdio.h> int main(void) { int d, i1, i2, i3, i4, i5, j1, j2, j3, j4, j5, first_sum, second_sum, total; printf("Enter the first single digit : "); scanf("%1d", &d); printf("Enter the first group of five digits : "); scanf("%1d%1d%1d%1d%1d",&i1, &i2,&i3, &i4, &i5 ); printf("Enter the second group of five digits : "); scanf("%1d%1d%1d%1d%1d",&j1, &j2,&j3, &j4, &j5 ); first_sum = d + i2 + i4 + j1 + j3 + j5; second_sum = i1 + i3 + i5 + j2 + j4; total = 3 * first_sum + second_sum; printf("check digit : %d\n : ", 9 - ((total - 1) % 10)); // 9 - ((total - 1) % 10)) return 0; } This is my code and my input is Enter the first single digit : 0 Enter the first group of five digits : 13800 Enter the second group of five digits : 15173 If I change the formula 9 - ((total - 1) % 10) into 10 - (total % 10), then I think in some cases it gives different results; if the total is 10, I'll get a different result. But how do I explain it to someone? I worked it out on paper and I was Googling, but I have no idea. A: The two formulas are different. There are mathematical conventions, such as the BODMAS rule, that determine which of addition/subtraction/multiplication/division takes place first, and on top of that the C language has its own operator precedence and associativity rules. Sometimes when both give the same answer it is just a coincidence. If you could state exactly what the code is supposed to compute, we can help you decide which formula is appropriate for your requirement.
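A quick way to see where the two expressions disagree is to compare them over a range of positive totals (plain Python arithmetic, independent of the C program): they match except when total is a multiple of 10, where the first gives 0 and the second gives 10.

```python
for total in range(1, 101):
    a = 9 - ((total - 1) % 10)
    b = 10 - (total % 10)
    if a != b:
        print(total, a, b)
# prints only multiples of 10: 10 -> 0 vs 10, 20 -> 0 vs 10, ...
```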
{ "language": "en", "url": "https://stackoverflow.com/questions/75640072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: property and setter decorators are not working and I can't figure out why? Hi I am facing an issue while working with @property and @left.setter even when I have the Cable class derived from object. from port import Port class Cable(object): cableNumber = 1 def __init__(self, connLeft = None, connRight = None): self._left = connLeft if isinstance(connLeft, Port) else None self._right = connRight if isinstance(connRight, Port) else None self._cableId = Cable.cableNumber Cable.cableNumber += 1 def __repr__(self): outputString = '{} <----({})----> {}' return outputString.format(self._left, self._cableId, self._right) @property def left(self): return self._left @left.setter def left(self, port): self._left = None if isinstance(port, Port): self._left = port port._cable = self #@property def right(self): return self._right #@right.setter def setRight(self, port): self._right = None if isinstance(port, Port): self._right = port port._cable = self In the main.py file: from port import Port from cable import Cable from switch import Switch p1 = Port('Gi', 1, 1, 0) p2 = Port('Gi', 4, 12, 0) c1 = Cable() print(repr(p1)) print(repr(c1)) print(repr(p2)) print() c1.left(p1) #Throws error with this c1.setRight(p1) #Works I get the following error with c1.left(p1) Traceback (most recent call last): File "blablabla", line 18, in c1.left(p1) TypeError: 'NoneType' object is not callable I've been trying to get this working, but no luck. I'll really appreciate your help.
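For reference, a stripped-down sketch of how a property setter is used: it fires on assignment (c.left = p), not on a call — c.left(p) first reads the property (initially None) and then tries to call the result, which matches the "'NoneType' object is not callable" traceback. The Port stand-in below is hypothetical.

```python
class Port:          # minimal stand-in for the real Port class
    pass

class Cable:
    def __init__(self):
        self._left = None

    @property
    def left(self):
        return self._left

    @left.setter
    def left(self, port):
        self._left = port if isinstance(port, Port) else None

c = Cable()
p = Port()
c.left = p           # assignment invokes the setter
print(c.left is p)   # True
# c.left(p) would read the property (None at first) and then try to call it:
# TypeError: 'NoneType' object is not callable
```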
{ "language": "en", "url": "https://stackoverflow.com/questions/75640074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Check if lat/lng are within an area I am trying to find a way of checking if a pair of lat/lng coordinates are within an area (generated by other lat/lng coordinates). For example, if my area is a rectangle generated with these coordinates: 43.672162 -79.43585 43.629845 -79.314585 And I wanted to check if these coordinates are within that area: 43.651989 -79.371993 I have tried using this package but I can't make it work: github.com/kellydunn/golang-geo p1 := geo.NewPoint(coords[0].LatX, coords[0].LonX) p2 := geo.NewPoint(coords[0].LatY, coords[0].LonY) geo.NewPolygon(p1, p2) I was wondering if anyone has an implimentation of this they can share, or any resources that can point me in the right direction? I am open to using google maps API as well. A: It’s just a math, compare coordinates. If A coordinate inside of a rectangle, then A.x must be less then rect.x1 and greater then rect.x2 or otherwise. And similar algorithm to the y coordinates. A: I don't know Go, but you can use the cross product to determine if your point of interest is on the left or right of a vector. The vector can be p1 -> p2, then p2 -> p3, etc. If all of them are on the same side then the point sits inside your polygon shape. Keep in mind you'll have to account for the earth's meridian and possibly other things. This also assumes your polygon points form a convex shape (of a hull). If it is not a hull, then it may be more difficult. // This is mock code, I dont know Go // This is a single iteration of a loop // Vector p1->p2 vx1 = p2.lat - p1.lat vy1 = p2.long - p1.long // Vector p1->pm vx2 = pm.lat - p1.lat vy2 = pm.long - p1.long // Cross product crp = vx1*vy2 - vx2*vy1 if(cpr > 0){ // main point sits to the left of vector p1->p2 if(first_iteration){ left_side = true first_ieration = false } else { // check if also left side, if true continue, if false return false if(! left_side){ return false } } } else { // main point sits to the right (or on the line) of vector p1->p2 if(first_iteration){ left_side = false first_ieration = false } else { if(left_side){ return false } } } // outside loop, at the end of method return true
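Here is a runnable Python rendering of the cross-product idea from the second answer (an illustration only, not Go): it treats latitude/longitude as planar coordinates, which is reasonable for small areas away from the antimeridian, and assumes the polygon is convex with its vertices listed in order.

```python
def point_in_convex_polygon(point, polygon):
    """polygon: list of (lat, lng) vertices in order; point: (lat, lng)."""
    px, py = point
    side = None
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Cross product of the edge vector and the vector to the point.
        cross = (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)
        if cross != 0:
            current = cross > 0
            if side is None:
                side = current
            elif side != current:
                return False  # point lies on different sides of two edges
    return True

# Rectangle from the question, expanded to its four corners:
rect = [(43.672162, -79.43585), (43.672162, -79.314585),
        (43.629845, -79.314585), (43.629845, -79.43585)]
print(point_in_convex_polygon((43.651989, -79.371993), rect))  # True
```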
{ "language": "en", "url": "https://stackoverflow.com/questions/75640075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Python Tkinter Combobox Does not Filter Values Properly From the below mentioned code I expect to get 3 fields, where I choose value from the first field, and according to that, values in other two fields should get updated accordingly (All the data is in the database). When I run the code and choose the value combobox_var.get() always returns an empty value and cannot figure out what is wrong and why does this happen. Here is the part of the code (This is part of the larger code, but I believe error is somewhere here): #connection to postgre database conn = psycopg2.connect( host="", database="", user="", password="") conn.autocommit = True cursor = conn.cursor() cursor.execute('''SELECT * from company_code''') company = cursor.fetchall() company = pd.DataFrame(company) company.columns = ['name', 'code', 'bank_account'] # Create the form input fields legal_entity_id_frame = tk.Frame(upload_root) legal_entity_id_frame.grid(row=1, column=1, padx=10, pady=10, sticky="WE") entry1 = tk.Entry(legal_entity_id_frame, font=('bold', 15)) entry1.pack(side="left", expand=True, fill="x") account_number_frame = tk.Frame(upload_root) account_number_frame.grid(row=2, column=1, padx=10, pady=10, sticky="WE") entry2 = tk.Entry(account_number_frame, font=('bold', 15)) entry2.pack(side="left", expand=True, fill="x") # Define function to update entry widgets based on combobox selection def update_entries(*args): print("combobox_var:", combobox_var.get()) selected_value = combobox_var.get() print("Selected value:", selected_value) # Check if a selection has been made before attempting to update entries if not selected_value: print("No selection has been made") return filtered_company = company.loc[company['name'] == selected_value] if filtered_company.empty: entry1.delete(0, tk.END) entry2.delete(0, tk.END) print("No matching row found in company dataframe") else: entry1.delete(0, tk.END) entry1.insert(0, str(filtered_company.iloc[0]['code'])) entry2.delete(0, tk.END) entry2.insert(0, str(filtered_company.iloc[0]['bank_account'])) print("Entry widgets updated successfully") legal_entity_frame = tk.Frame(upload_root) legal_entity_frame.grid(row=0, column=1, padx=10, pady=10, sticky="WE") #options = company['name'].astype(str).tolist() options = company['name'].astype(str).tolist() combobox_var = tk.StringVar(value="Select a value") combobox = ttk.Combobox(legal_entity_frame, values=options, state='normal', textvariable=combobox_var) combobox_var.trace('w', update_entries) combobox.bind("<Return>", update_entries) combobox.bind("<<ComboboxSelected>>", update_entries) combobox.pack(side="left", expand=True, fill="x") I have also separate .py file, where I separately tested how combobox works, and there it works perfectly - It gets values from the DB right, after changing field, other fields get updated immediately, etc. The thing is that, both of the functions is identical, thus I cannot figure out what causes error, when I try to implement it into the program. 
Here is the separate .py file as well, that works perfectly: import tkinter as tk import pandas as pd from tkinter import ttk import psycopg2 root = tk.Tk() root.title("Dataframe Example") conn = psycopg2.connect( host="", database="", user="", password="") conn.autocommit = True cursor = conn.cursor() cursor.execute('''SELECT * from company_code''') company = cursor.fetchall() company = pd.DataFrame(company) company.columns = ['name','code','bank_account'] # Create sample dataframe # Define function to update entry widgets based on combobox selection def update_entries(*args): # Get selected value from combobox selected_value = combobox_var.get() # Filter company dataframe based on selected value filtered_company = company.loc[company['name'] == selected_value] if filtered_company.empty: # Clear entry widgets if there is no matching row entry1.delete(0, tk.END) entry2.delete(0, tk.END) else: # Update entry widgets with values from filtered row entry1.delete(0, tk.END) entry1.insert(0, str(filtered_company.iloc[0]['code'])) entry2.delete(0, tk.END) entry2.insert(0, str(filtered_company.iloc[0]['bank_account'])) # Create tkinter window # Create combobox widget options = company['name'].astype(str).tolist() combobox_var = tk.StringVar(value="") combobox = ttk.Combobox(root, values=options, state='normal', textvariable=combobox_var) combobox_var.trace('w', update_entries) combobox.bind("<Return>", update_entries) combobox.pack() # Create first entry widget entry1 = tk.Entry(root) entry1.pack() # Create second entry widget entry2 = tk.Entry(root) entry2.pack() root.mainloop()
{ "language": "en", "url": "https://stackoverflow.com/questions/75640078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Removing rows in a pandas dataframe after groupby based on number of elements in the group I'm stuck trying to figure out the following: Given a pandas dataframe, I would like to group by by one of the columns, remove the first row in each group if the group has less than n rows, but remove the first and last row in each group if the group has n or more rows. Is there an efficient way to achieve this? A: You can achieve this by applying conditional logic like this : n = 3 # or any other desired value for n df.groupby('column_to_group_by').apply(lambda x: x.iloc[1:-1] if len(x) >= n else x.iloc[1:]) A: You can use boolean masks: n = 3 # Conditions first = df['col1'].ne(df['col1'].shift()) last = first.shift(-1, fill_value=True) greater = df.groupby('col1').transform('size').gt(n) # reverse the mask with '~' to keep rows with loc[] instead of using drop() out = df.loc[~(first | (last & greater))] Output: >>> out col1 col2 1 A 1 2 A 2 3 A 3 4 A 4 7 B 1 8 B 2 9 B 3 12 C 1 13 C 2 15 D 1 >>> df.join(pd.concat([first, last, greater], keys=['first', 'last', 'greater'], axis=1)) col1 col2 first last greater 0 A 0 True False True # drop (first) 1 A 1 False False True 2 A 2 False False True 3 A 3 False False True 4 A 4 False False True 5 A 5 False True True # drop (last & greater) 6 B 0 True False True # drop (first) 7 B 1 False False True 8 B 2 False False True 9 B 3 False False True 10 B 4 False True True # drop (last & greater) 11 C 0 True False False # drop (first) 12 C 1 False False False 13 C 2 False True False 14 D 0 True False False # drop(first) 15 D 1 False True False
{ "language": "en", "url": "https://stackoverflow.com/questions/75640079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Capturing a string only if it's repeating I want to extract a text from a string if and only if it was repeating, for example: \sqrt{2} \sin{ \sqrt{\pi} \sqrt{x} } \tan{x} I want to extract only \sqrt{\pi} \sqrt{x} because \sqrt{...} repeated. How can I do that? I tried a couple of formulas I know and searched online but didn't get anywhere.
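One possible sketch in Python, under the assumption that "repeating" means two or more \sqrt{...} terms immediately following one another, as in the example:

```python
import re

s = r"\sqrt{2} \sin{ \sqrt{\pi} \sqrt{x} } \tan{x}"

# Two or more adjacent \sqrt{...} groups, separated only by whitespace.
pattern = r"(?:\\sqrt\{[^{}]*\}\s*){2,}"

for m in re.finditer(pattern, s):
    print(m.group().strip())   # -> \sqrt{\pi} \sqrt{x}
```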
{ "language": "en", "url": "https://stackoverflow.com/questions/75640083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Why am I getting AttributeError when using 'enchant.DictWithPWL()' from the 'enchant' module? I'm using the 'enchant' module. It has an attribute 'DictWithPWL' to add a personal word list to an existing pre-defined dictionary. Here is my code: ` def dictionary_words_finder(word_lst): meaningful_word = [] # creating a dictionary object for checking a word is in dictionary or not and also adding # our personal word list from words.txt eng_dictionary = enchant.DictWithPWL("en_US", 'words.txt') for word in word_lst: if not eng_dictionary.check(word): continue else: meaningful_word.append(word) return meaningful_word` But I'm getting the error below: AttributeError: module 'enchant' has no attribute 'DictWithPWL' Please let me know if there is any other way of doing this task if you don't have a solution to the above problem. I've installed the 'enchant' module using !pip install enchant. The 'enchant' module got installed successfully but the problem is not resolved. Also, I've searched for other methods for doing the same task but can't find any such module or methods. I expect that the above code should not throw an 'AttributeError' after making some changes to it. I've also gone through the documentation here --> https://pyenchant.github.io/pyenchant/tutorial.html enchant.DictWithPWL is still in their documentation but I can't figure out where I'm going wrong. Please help. A: You installed the wrong package. The package you're trying to use is named PyEnchant on PyPI. Try uninstalling enchant and installing PyEnchant.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is an empty class an abstract class? I have an empty class (without attributes and methods), declared without the keyword "abstract" but It has a child class. Is it an abstract class or just a simple parent class. public class Vehicle{ } A: An empty class is not necessarily an abstract class. For a Java class to be abstract, it has to be declared abstract using the abstract keyword. Here is some documentation on abstract classes from the Java tutorials: https://docs.oracle.com/javase/tutorial/java/IandI/abstract.html An abstract class is a class that is allowed to have abstract methods. An abstract method is a method that is declared without an implementation. Any subclass of an abstract class that is not abstract itself must implement the abstract methods. In the example you gave, the class Vehicle does not have the "abstract" modifier in its class declaration, so the class Vehicle is not abstract. You might ask, "Why does Java have abstract classes?" One use case of abstract classes is in the AWT ("abstract window toolkit") framework. The java.awt.Graphics and java.awt.Graphics2D classes are both abstract. The java.awt.Window class has a paint method that takes a Graphics instance. This method is inherited by java.awt.Frame and also by javax.swing.JFrame. The Graphics class is often used in applets, AWT applications, and swing applications. So that's a little extra information on abstract classes. A: An empty class without any attributes or methods is just a simple class with no functionality. It is not considered an abstract class, even if it has a child class. To make a class abstract, you need to use the "abstract" keyword in the class declaration. An abstract class is one that cannot be instantiated directly, and it serves as a base for other classes to inherit from. In summary, the class "Vehicle" in your example is a simple parent class and not an abstract class.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Vegafusion No module named 'vl_convert' error python I am using vegafusion with altair and python to display a chart that won't show because of the max rows limit, but I have installed vegafusion and imported and enabled it. When I go to run the visualisation, I get this error ~\anaconda3\lib\site-packages\vegafusion\compilers.py in vl_convert_compiler(vegalite_spec) 16 try: ---> 17 import vl_convert as vlc 18 except ImportError: ModuleNotFoundError: No module named 'vl_convert'
{ "language": "en", "url": "https://stackoverflow.com/questions/75640090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: alternate of indexof in javascript to find items? I have set of paths in a array .So some path are constants and some paths are dynamic . constant path mean Example "/booking-success" Dynamic path mean Example '/arrival/view/:job_order_number' --> here job_order_number is dynamice. so /arrival/view/123 , /arrival/view/456. I need to check whether url present or not .I am trying like this const children = [ '/booking-success', '/back-request', '/arrival/view/:job_order_number' ] let path = "/booking-success"; // outout true ----> correct console.log(children.indexOf(path) !=-1) let path1 = "/arrival/view/1233"; // outout false ----> wrong // expected true console.log(children.indexOf(path1) !=-1) I am not getting expected output.expected output is true which method I can you ? A: You could use findIndex for this, comparing paths by splitting them on / and comparing each individual element and the overall length of the path, allowing path elements which start with : to match anything: const children = [ '/booking-success', '/back-request', '/arrival/view/:job_order_number' ] const findChild = (path) => children .findIndex((child, _, arr) => { childParts = child.split('/') pathParts = path.split('/') return childParts.every((v, i) => v[0] == ':' || v == pathParts[i]) && childParts.length == pathParts.length }) !== -1 paths = [ "/booking-success", "/arrival/view/1233", "/arrival/jobs/1233", "/booking-success/1234", "/departure/view/1233", "/arrival/view", "/back-request" ] paths.forEach(p => console.log(`${p} : ${findChild(p)}`))
{ "language": "en", "url": "https://stackoverflow.com/questions/75640091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to extract data from an unstructured PDF using Python I have PDF files of student course and college reviews that contain college details in tabular format, while the reviews are in text paragraphs. Activities and events are given in tables, and the records all have different lengths. I want to extract that data and store it in .csv files. I tried: PyPDF2, pdfplumber, PDFMiner, Tika parser, PDFrw. What I expect is some ideas and hints on how to extract that data.
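Since pdfplumber is already among the libraries tried, here is a minimal sketch of the usual pattern with it — the file name is a placeholder, and table extraction normally needs layout-specific tuning of the table settings:

```python
import csv
import pdfplumber

rows = []
with pdfplumber.open("college_reviews.pdf") as pdf:    # placeholder file name
    for page in pdf.pages:
        for table in page.extract_tables():            # each table is a list of row lists
            rows.extend(table)
        text = page.extract_text() or ""               # free-form review paragraphs

with open("tables.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```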
{ "language": "en", "url": "https://stackoverflow.com/questions/75640092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to get keyword ideas from google keyword planner api via rest api I'm trying to fetch keyword ideas from google ads api, But I can't find any rest api resources, I found some code on github which can fetch the keywords but it was using python, not rest api, I'm using ads api v13, I need a api endpoint where I can make api request to fetch the keywords expecting list of keywords
{ "language": "en", "url": "https://stackoverflow.com/questions/75640094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Merge two wave files as bytes, so they play at the same time in Python? I'm making a Discord bot, and I need to record audio from a voice channel and produce one single wave audio file, where all the audio is playing at the same time, like the Discord client hears when others are in the channel. I'm using the Pycord library, and all the audio files are sent as separate users. The closest I've got to the result that I want is the code below, which just overlays one onto another, so you can only really hear one audio stream at a time. I want to be able to hear every audio stream at once (This code was generated with the help of ChatGPT, by the way). async def after_recording(self, sink: discord.sinks.WaveSink, channel: discord.VoiceChannel): audio_segments = [] for audio_data in sink.audio_data.values(): audio_data.file.seek(0) audio_segments.append(AudioSegment.from_file(audio_data.file)) # Determine the maximum duration of the audio files max_duration = max([len(segment) for segment in audio_segments]) # Pad the shorter audio files with silence for i, segment in enumerate(audio_segments): if len(segment) < max_duration: audio_segments[i] = segment + AudioSegment.silent(duration=max_duration - len(segment)) # Overlay the audio files overlaid_segment = audio_segments[0] for segment in audio_segments[1:]: overlaid_segment = overlaid_segment.overlay(segment) # Export the overlaid audio to a BytesIO object overlaid_bytes = BytesIO() overlaid_segment.export(overlaid_bytes, format='wav') overlaid_bytes.seek(0) # Send the overlaid audio as a message attachment to the specified channel await channel.send(file=discord.File(overlaid_bytes, filename='overlaid_audio.wav')) # Cleanup for audio_data in sink.audio_data.values(): audio_data.file.close() NOTE: While this code was generated with ChatGPT, I looked it over and made edits multiple times. Answers such as this were the only similar answers I could find anywhere as well, it's not generated by it alone.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to run apache beam dataflow pipeline synchronously? Here is the apache beam custom pipeline code (dataflow). last_sync_data = ( p | "Read last sync data" >> ReadFromBigQuery( query='SELECT MAX(time) as last_sync FROM ' '[wisdomcircle-350611:custom_test_data.last_sync]') | "Extract last sync time" >> beam.Map(lambda elem: elem['last_sync']) ) p = beam.Pipeline(options=options) wisgen_data = p | "wisgen job" >> ReadFromJdbc ( jdbc_url=jdbc_url, username=username, password=password, driver_class_name='org.postgresql.Driver', query="""SELECT users.id AS user_id, CONCAT(users.first_name,' ', users.last_name) AS full_name""", table_name="users" ) recruiter_data = p | "recruiter job" >> ReadFromJdbc( jdbc_url=jdbc_url, username=username, password=password, driver_class_name='org.postgresql.Driver', query="""SELECT users.id AS user_id, '""", table_name="users" ) wisgen_data | "Convert TableRow to dict(wisgen data)" >> beam.Map( lambda row: row._asdict() ) | "Write to BigQuery in wisgen data table" >> WriteToBigQuery( wisgen_table, write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND, create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, schema='user_id:INTEGER,full_name:STRING) recruiter_data | "Convert TableRow to dict(recruiter data)" >> beam.Map( lambda row: row._asdict()) | "Write to BigQuery in recruiter data table" >> WriteToBigQuery( recruiter_table, write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND, create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, schema='user_id:INTEGER,full_name:STRING' ) _ = (p | "Create current timestamp" >> beam.Create([{'time': datetime.datetime.utcnow()}]) | "Write to last_sync" >> WriteToBigQuery, last_sync_tab schema='time:TIMESTAMP', write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE, create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, ) The pipeline operates asynchronously or completes all job operations in parallel, but I want to first execute the starting job last_sync_data (which is written at the beginning of the above code), then just the jobs below should run, and at last, my timstamp (which is written at the end of the above code) job should run when all the above operations are finished. Can somebody assist me in rewriting the code I have above to meet my needs ?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: change nepali unicode to english trying to convert names of districts stored in unicode to english in python [ { "province_id": 1, "name_np": "\u0908\u0932\u093e\u092e", "name_en": "" }, { "province_id": 1, "name_np": "\u0909\u0926\u092f\u092a\u0941\u0930", "name_en": "" }, { "province_id": 1, "name_np": "\u0913\u0916\u0932\u0922\u0941\u0919\u094d\u0917\u093e", "name_en": "" } ] this is my json file and i am trying to convert the name_np unicode to english and store it into name_em
{ "language": "en", "url": "https://stackoverflow.com/questions/75640105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cyrillic text in Java I'm trying to transfer Russian text to an Excel or SQlite or to any other program. The result is always the same: Абиссинская кошка. I understand that something with the encoding. Tried String myString = "some cyrillic text"; byte bytes[] = type.getBytes("UTF-8"); String value = URLEncoder.encode(new String(bytes, "Windows-1251"), "Windows-1251"); but that doesn't help either. Help me to understand. I am newbie. A: String myString = "some cyrillic text"; byte bytes[] = type.getBytes("UTF-8"); Now bytes contains a UTF-8 encoding of the string. If you were to call new String(bytes, "UTF-8") you would get back an equivalent string to the original one. But ... String value = URLEncoder.encode( new String(bytes, "Windows-1251"), // HERE "Windows-1251"); ... at HERE you are decoding with the wrong character encoding. The String constructor takes your word for it ... and the result is mangled characters. Understand this: The bytes array contains just the encoded text. It doesn't contain anything to identify the encoding scheme. So the String constructor has no way of knowing what the correct encoding is ... apart from what you tell it. And it has no (reliable) way of knowing if the encoding you told it is correct. Let alone fixing your mistake. The correct way to do what your code does is this: String myString = "some cyrillic text"; String value = URLEncoder.encode(myString, "Windows-1251"); However ... we don't have sufficient context to know whether that is what is actually required for your application.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Why does pdb sometimes skip stepping into multi-part conditional? I have a Minimal Working Example code : test.py : a = 4 b = 6 l = [4, 6, 7, 8] for t in l : if t == a or t == b: continue print(t) I am stepping through the code using pdb (python-3.9.2) : local: ~ $ python -m pdb test.py > test.py(1)<module>() -> a = 4 (Pdb) b 5 Breakpoint 1 at test.py:5 (Pdb) c > test.py(5)<module>() -> if t == a or t == b: (Pdb) p t,a (4, 4) (Pdb) p t == a True (Pdb) t == a or t == b True (Pdb) n #### <<--- Conditional is True, why doesn't it explicitly step into it? > test.py(4)<module>() -> for t in l : (Pdb) n > test.py(5)<module>() -> if t == a or t == b: (Pdb) p t==a, t==b (False, True) (Pdb) t == a or t == b True (Pdb) n #### <<--- Conditional is True, and it explicitly steps into it > test.py(6)<module>() -> continue (Pdb) t == a or t == b True QUESTION : * *Why does pdb explicitly step into the conditional (i.e. it explicitly goes to line 6, the continue statement) when t==b and not when t==a? Is this an optimization? A: When the t == a check passes, the bytecode jumps straight back to the for line, skipping the continue. Here's the output you get if you examine the bytecode with dis.dis on CPython 3.9: 1 0 LOAD_CONST 0 (4) 2 STORE_NAME 0 (a) 2 4 LOAD_CONST 1 (6) 6 STORE_NAME 1 (b) 3 8 BUILD_LIST 0 10 LOAD_CONST 2 ((4, 6, 7, 8)) 12 LIST_EXTEND 1 14 STORE_NAME 2 (l) 4 16 LOAD_NAME 2 (l) 18 GET_ITER >> 20 FOR_ITER 30 (to 52) 22 STORE_NAME 3 (t) 5 24 LOAD_NAME 3 (t) 26 LOAD_NAME 0 (a) 28 COMPARE_OP 2 (==) 30 POP_JUMP_IF_TRUE 20 32 LOAD_NAME 3 (t) 34 LOAD_NAME 1 (b) 36 COMPARE_OP 2 (==) 38 POP_JUMP_IF_FALSE 42 6 40 JUMP_ABSOLUTE 20 7 >> 42 LOAD_NAME 4 (print) 44 LOAD_NAME 3 (t) 46 CALL_FUNCTION 1 48 POP_TOP 50 JUMP_ABSOLUTE 20 >> 52 LOAD_CONST 3 (None) 54 RETURN_VALUE Note that for the t == a comparison, 5 24 LOAD_NAME 3 (t) 26 LOAD_NAME 0 (a) 28 COMPARE_OP 2 (==) 30 POP_JUMP_IF_TRUE 20 the code jumps to the instruction at bytecode index 20, which is the FOR_ITER instruction. The continue line is skipped entirely. On the other hand, if the t == a check fails and the t == b check passes, then the code falls through the POP_JUMP_IF_TRUE for the first comparison (because the first result is false) and the POP_JUMP_IF_FALSE for the second comparison (because the second result is true). The code ends up reaching this instruction: 6 40 JUMP_ABSOLUTE 20 which is what the continue on line 6 compiles to. On Python 3.10, the jump targets are different. On 3.10, we instead see 5 24 LOAD_NAME 3 (t) 26 LOAD_NAME 0 (a) 28 COMPARE_OP 2 (==) 30 POP_JUMP_IF_TRUE 20 (to 40) with the POP_JUMP_IF_TRUE now jumping to bytecode index 40, which is the continue. Thus, on 3.10, pdb now stops at the continue even when t == a. (Due to changes in instruction encoding, bytecode index 40 happens to be represented by an oparg of 20 on Python 3.10, which coincidentally happens to be the bytecode index the Python 3.9 bytecode was jumping to. This confused me for a while before I figured out what was going on.)
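The disassembly quoted in the answer can be reproduced with the standard library's dis module:

```python
import dis

source = open("test.py").read()   # the script from the question
dis.dis(compile(source, "test.py", "exec"))
```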
{ "language": "en", "url": "https://stackoverflow.com/questions/75640108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Sum values from different objects - javascript I am trying to sum values from different objects. Each object has its ID, but the IDs can repeat in some objects. What I need to do now is sum the values from the objects that have the same ID. Does someone have an idea for me? I tried to use mongoose aggregate, but with no success. Let's suppose I have the objects below. const array = [ { _id: "123456", value: 30 }, { _id: "123456789123456789", value: 12 }, { _id: "123456", value: 25 }, ]; I would need something that brings up the following result: ==> id: 123456 value: 55 ==> id: 123456789123456789 value: 12 A: Reduce the array into a map. Conveniently, map.set returns the map, so it's just a one-liner. const array = [{ _id: "123456", value: 30 }, { _id: "123456789123456789", value: 12 }, { _id: "123456", value: 25 }]; const sums = array.reduce((map, object) => map.set(object._id, (map.get(object._id) ?? 0) + object.value), new Map()); console.log(sums.get("123456")); console.log(sums.get("123456789123456789")); // convert to normal object const sumsAsNormalObject = Object.fromEntries([...sums.entries()]); console.log(sumsAsNormalObject); Confused about syntax? See nullish coalescing and spread syntax. You can also convert the map into a plain object if you wanted to. A: It's impossible to have a duplicate _id in MongoDB, but you can change that to something like dataId, id or similar, for example: [ { _id: 1, id: 123456, value: 30 }, { _id: 2, id: 123456789123456789, value: 12 }, { _id: 3, id: 123456, value: 25 } ] Then with Mongo it would look like: db.collection.aggregate([ { "$group": { "_id": "$id", "value": { "$sum": "$value" } } } ]) MONGO-PLAYGROUND Note: if the data can be handled by the database, use that instead; otherwise, create a function to handle it. A: For each object, it checks whether the "result" object already has a property with the same _id as the current object being looped over. const array = [ { _id: "123456", value: 30 }, { _id: "123456789123456789", value: 12 }, { _id: "123456", value: 25 }, ]; const result = {}; array.forEach(item => { if(result[item._id]) { result[item._id] += item.value; } else { result[item._id] = item.value; } }); console.log(result);
{ "language": "en", "url": "https://stackoverflow.com/questions/75640110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Memory limit issue, nodejs + axios I have a function that fetches the title of a webpage and sends it back to the user, but after handling requests for a while I can see the memory piling up. Can someone check this and tell me whether there is an issue here, or whether there is a better way to get the same result? Thank you. const axios = require("axios"); let cheerio = require("cheerio"); app.get("/title/*", (req, res) => { let url = req.originalUrl.substr(7); axios .get(url) .then((axi_res) => { var $ = cheerio.load(axi_res.data); var title = $("title").text(); title = removeYT(title); res.send('SUCCESS-'+title); }) .catch((error) => { console.error(error); res.send("SUCCESS-no-title"); }); }); function endsWith(str1, str2) { if (str1.indexOf(str2) == str1.length - str2.length) { return true; } else { return false; } } function removeYT(title) { let yt = "- YouTube"; if (endsWith(title, yt)) { return title.substr(0, title.length - yt.length); } else { return title; } } (The memory does get cleaned up, but not quickly: 5-6 requests can occupy about 90 MB and I have to wait 1-2 minutes for that memory to be freed again, even though the result is sent back in less than a second. So with more requests I can reach the memory limit before the memory is cleared again.)
{ "language": "en", "url": "https://stackoverflow.com/questions/75640111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How do I determine if an emit is defined? I have a button that can handle something with and without longPress. Originally before I switched to emitters I used function callbacks since I had a React Native background. So I did the typings as export type UsePressableEmits = | { (e: "press", event: Event): void; (e: "longPress", event: Event): void; } | { (e: "press", event: Event): void; }; Right now this code block fails function fire(event: Event) { firing.value = true; emit("press", event); firing.value = false; } function fireLongPress(event: Event) { firing.value = true; emit("longPress", event); // does not pass type checks firing.value = false; }
{ "language": "en", "url": "https://stackoverflow.com/questions/75640117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you Get data from fs.readFile? function LoadMapFromText(route){ var str = ''; fs.readFile(route, (err, data) => { str = data.toString(); }); var pairs = str.split("-n"); console.log(str); } ive been trying to fix this problem for god knows how long, and i just don't know how to do it. the string is not updating from inside the arrow function. i already tried making the function async, and i already tried putting await before fs.readFile, but to no avail. i am at a loss of what to do. A: The problem is that fs.readFile is an asynchronous function that implements a callback function. The callback function gets executed only after the file has been read. In your case, your "arrow function" is the callback function, which gets executed only after the file has been read. But by this time, the lines following the callback, including your console.log(str) statement, have already been executed. Try using fs.readFileSync instead. function LoadMapFromText(route){ var str = fs.readFileSync(route, 'utf8'); var pairs = str.split("-n"); console.log(str); } A: You should use the function fs.readFileSync() in the proposed case if it is a light file, otherwise you should use the callback of fs.readFile with a promise, so your string was not updated. function LoadMapFromText(route){ let str = ''; str = String(fs.readFileSync(route)) var pairs = str.split("-n"); console.log(str, pairs); }
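Since the question mentions already trying async/await, a minimal promise-based sketch using the built-in fs.promises API is shown below; note that the caller then has to await (or .then) the function as well, which is the part that usually gets missed:

const fs = require("fs").promises;

// Async variant: the await pauses this function until the file contents
// are available, so `str` is populated before the split runs.
async function loadMapFromText(route) {
  const str = await fs.readFile(route, "utf8");
  const pairs = str.split("-n");
  console.log(str, pairs);
  return pairs;
}

// Usage: the result only exists inside the promise chain.
// loadMapFromText("map.txt").then(pairs => console.log(pairs.length));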
{ "language": "en", "url": "https://stackoverflow.com/questions/75640118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Find ideal window of values in pandas dataframe column using rolling window I have the following dataframe: df = col_1 col_2 col_3 1 5.6 2.1 2 4.3 2.3 3 6.2 2.25 4 5.2 2.15 5 4.7 2.11 6 5.1 2.10 7 4.4 2.24 8 6.1 2.12 Is there a way I can use pandas rolling or another function/technique to find a consecutive set of 3 or X number of rows based on col_3 where the values are stable (i.e. within a range of 0.05 from one another indicating that each of the samples (i.e. col_1) are showing consistency in results? Essentially specifying a window of stable values based on col_3 to extract and save the window values in a numpy array. I've tried to simplify if it to look for a set of consecutive values based on col_1 that are between 2.10 and 2.25, but haven't been able to get it to work. Is there a way to ensure the output values are consecutive? So in the example, I would get back the following as an output: col_1 col_2 col_3 4 5.2 2.15 5 4.7 2.11 6 5.1 2.10 which I can then concatenate the 3 values in col_3 as an array/list? A: The output of the rolling function must be a number, so you cannot return a list that is the rolling result into col_3 directly. You should use window.to_list() to get the windowed value list first and then produce the result. code: import pandas as pd def check_stable(args, thr=0.05, length=3): if len(args) != length: return None if max(args) - min(args) > thr: return None return args windowed = [window.to_list() for window in df["col_3"].rolling(3)] df["result"] = pd.Series(windowed).apply(check_stable, length=3) output:
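As a follow-up to the rolling idea, one sketch for pulling out the actual stable rows (window length 3 and threshold 0.05 as in the question, column names taken from the example frame) is to flag windows whose max-min spread stays within the threshold and then slice the original frame:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "col_1": [1, 2, 3, 4, 5, 6, 7, 8],
    "col_2": [5.6, 4.3, 6.2, 5.2, 4.7, 5.1, 4.4, 6.1],
    "col_3": [2.1, 2.3, 2.25, 2.15, 2.11, 2.10, 2.24, 2.12],
})

window, thr = 3, 0.05
# spread is NaN for the first window-1 rows, then max-min of each window
spread = df["col_3"].rolling(window).apply(lambda s: s.max() - s.min())
stable_end = spread <= thr   # True where rows i-2..i are stable

if stable_end.any():
    end = stable_end.idxmax()            # first stable window
    block = df.loc[end - window + 1 : end]
    values = block["col_3"].to_numpy()   # the consecutive col_3 values
    print(block)
    print(values)

On the example data this returns rows with col_1 = 4, 5, 6, matching the output shown in the question.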
{ "language": "en", "url": "https://stackoverflow.com/questions/75640121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Circlize in R: Multiple group chord diagram Using the package circlize from R, My objective is to be able to group a chord diagram by year within each sector. New to the package, I started from the beginning by following examples in the tutorial. According to the vignette, grouping is possible (even using data.frames) by passing a group flag to the chordDiagram() command. This is stated in Chapter 15.6 Multiple-group Chord diagram. Following the vignette, i was able to produce a chord diagram but I am stuck on how to group to get my desired result. I have put together an example of a visual of what I would like the chord diagram to look like here: As you can see, I aim to have each sector (OOP, UVA, WSE, FIN, MAT, OIC) grouped by the year (which is a column in the input data.frame. I can get the chord diagram, but without the years added. A reproducible example Creating a data.frame Types <- data.frame(Types = c("OOP", "UVA", "MAT", "OIC", "FIN", "WSE")) Type_Cols <- c(OOP = "#548235", UVA = "#660066", MAT = "#4472C4", OIC = "#002060", FIN = "#843C0C", WSE = "#C55A11") stack.df <- data.frame(Year = c(rep(2019, 1), rep(2020, 4), rep(2021, 7), rep(2022, 11), rep(2023, 11)), Invoice = c(paste0("2019.", "10", ".INV"), paste0("2020.", seq(from = 20, to = 23, by = 1), ".INV"), paste0("2021.", seq(from = 30, to = 36, by = 1), ".INV"), paste0("2022.", seq(from = 40, to = 50, by = 1), ".INV"), paste0("2023.", seq(from = 50, to = 60, by = 1), ".INV"))) stack.df <- cbind(stack.df, Org_1 = Types[sample(nrow(Types), nrow(stack.df), replace = TRUE), ], Org_2 = Types[sample(nrow(Types), nrow(stack.df), replace = TRUE), ]) Adding lty & colors for links stack.df$lty <- sample(x = rep(c(1,2), times = nrow(stack.df)), size = nrow(stack.df), replace = TRUE) stack.df$Link_cols <- stack.df$Year stack.df$Link_cols <- ifelse(stack.df$Link_cols == 2019, "#D9D9D9", ifelse(stack.df$Link_cols == 2020, "#B296B6", ifelse(stack.df$Link_cols == 2021, "#FFD966", ifelse(stack.df$Link_cols == 2022, "#D5469E", ifelse(stack.df$Link_cols == 2023, "#B4C2A7", stack.df$Link_cols))))) Re-arranging the stack.df stack.df <- stack.df[, c(3,4, 2, 1, 5, 6)] Graph the Chord Diagram library(circlize) chordDiagramFromDataFrame(stack.df[, c(1:2)], order = sort(union(stack.df$Org_1, stack.df$Org_2)), grid.col = Type_Cols, link.lty = stack.df$lty, directional = 1, direction.type = "arrows", link.arr.col = c("black", rep("white", nrow(stack.df) - 1))) This gives the following chord diagram: To produce the group parameter, the names in group should cover all sector names. This is where I am stuck, the vignette gives an example using a matrix, but not a data.frame. I have attempted to work around this buy writing similar codes like this: group <- structure(union(stack.df$Org_1, stack.df$Org_2), names = unique(stack.df$Year)) But an approach like this gets me nowhere. Any ideas to get me unstuck? Thank-you!
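In the vignette's multiple-group example, group is a named vector whose names are the sector names and whose values are the group labels (the question's attempt has this the other way around). Because each sector can belong to only one group, a hedged sketch for this data would first make the sector names year-specific and then build the vector; the pasted "_year" names below are an assumption about how the sectors could be encoded, not part of the original data:

# Assumption: make sectors unique per year, e.g. "OOP_2019", "FIN_2022"
stack.df$Org_1_y <- paste(stack.df$Org_1, stack.df$Year, sep = "_")
stack.df$Org_2_y <- paste(stack.df$Org_2, stack.df$Year, sep = "_")

sectors <- union(stack.df$Org_1_y, stack.df$Org_2_y)
# group label = the year embedded in each sector name
group <- structure(sub(".*_", "", sectors), names = sectors)

chordDiagram(stack.df[, c("Org_1_y", "Org_2_y")], group = group)

Colours, link styles, and the other arguments from the working call above would then be layered back on top of this skeleton.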
{ "language": "en", "url": "https://stackoverflow.com/questions/75640123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL, ROUND function How do I use the ROUND function in this query to round all of the Units Bought and Subtotal values, as shown in the image? Subtotal Units Bought 9.9800 2.00 19.9000 3.00 14.9700 1.00 9.9500 5.00 SELECT INVOICE.CUS_CODE, INVOICE.INV_NUMBER, PRODUCT.P_DESCRIPT, LINE.LINE_UNITS AS 'Units Bought', LINE.LINE_PRICE AS 'Unit Price', LINE.LINE_UNITS * LINE.LINE_PRICE AS Subtotal FROM
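A hedged sketch of the usual approach is to wrap each expression in ROUND with the number of decimals wanted (2 here, to match the sample values); the tables and joins after FROM are left as an ellipsis because they are not shown in the question:

SELECT INVOICE.CUS_CODE,
       INVOICE.INV_NUMBER,
       PRODUCT.P_DESCRIPT,
       ROUND(LINE.LINE_UNITS, 2)                   AS 'Units Bought',
       ROUND(LINE.LINE_PRICE, 2)                   AS 'Unit Price',
       ROUND(LINE.LINE_UNITS * LINE.LINE_PRICE, 2) AS Subtotal
FROM ...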
{ "language": "en", "url": "https://stackoverflow.com/questions/75640124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Looking for an Excel formula Please help me with this problem. I have an Excel sheet and need the fastest way to solve my professor's request, but I have no clue how to do it. The ranking is based on the importance of the status. I need to identify which one of the duplicates should be marked "GOOD" and which "BAD" in the result column. For example: two rows have product ID 31 with different statuses, one is Contacts Activities Opportunity and one is Contacts Activities. Between the two, the one with Contacts Activities Opportunity (ranking No. 1) is "GOOD" and the one with Contacts Activities (ranking No. 4) is "BAD". How can I put together a formula or a rule that identifies which of the duplicate Product IDs is GOOD or BAD based on the ranking? This is my spreadsheet. I created the ranking column so I can try to build a formula that fills in the result column faster. I also had the idea that the product ID whose status has more letters would be considered "GOOD" in the result column, but I don't know how to work that out within Excel.
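One hedged sketch, assuming the Product ID sits in column A, the ranking number in column C, and the data runs from row 2 to row 100 (all of which is an assumption about the sheet layout, not taken from the screenshot), is a lowest-rank-wins formula:

=IF(C2 = MINIFS($C$2:$C$100, $A$2:$A$100, A2), "GOOD", "BAD")

Filled down the result column, each row is marked GOOD only when its rank is the smallest rank among all rows sharing the same Product ID.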
{ "language": "en", "url": "https://stackoverflow.com/questions/75640125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: why line plot change when I combine them using ggplot? I have these 3 datasets for three different disease groups with a health score. When I combine them in one dataset and plot them, they appear differently compared to their individual plots (as shown below). After adding a variable called "disease" in each of them "healthy, HF, stroke", I combined the three datasets this way d1<- rbind(healthy_clean,HF_clean, Stroke_clean) then I checked the number of individuals in the new dataset (d1) compared to the original and they were the same. Also, I checked the mean, SD, max and min of the health score for each disease group in d1 and they were exactly as in the original. any idea why it is not the same lines on the overall line plot? this is my code for the separate plots and the combined one #####example of separate: ggplot(data=healthy_clean, aes(x=age, y=Mental.Health_T.score,fill=Gender, linetype=Gender)) + geom_smooth(alpha=0) + scale_x_log10() + scale_y_log10() + xlab("Age (Years)") + ylab("Health Global Score")+ ggtitle("healthy individuals")+ xlim(19,97) #####code for the combined plot ggplot(data=d1, aes(x=age, y=Mental.Health_T.score, color=disease , linetype=Gender)) + geom_smooth(alpha=0) + scale_x_log10() + scale_y_log10() + xlab("Age (Years)") + ylab("Health Global Score")+ xlim(19,97)+ scale_color_manual(values = c("#00AFBB", "#E7B800", "#FC4E07"))+ scale_linetype_manual(values = c('solid','dashed'))+ guides(lty = guide_legend(override.aes = list(col = 'black')))+ theme(legend.position = "top", legend.title = element_blank())
{ "language": "en", "url": "https://stackoverflow.com/questions/75640126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Trouble solving captcha on Discord guild join using discord.py-self and capmonster. Error: 400 Bad Request: Captcha required I'm trying to solve a captcha on Discord guild join using discord.py-self and capmonster. However, I keep getting the following error: selfcord.errors.CaptchaRequired: 400 Bad Request (error code: -1): Captcha required Here is the code I'm using: class CaptchaSolver(selfcord.CaptchaHandler): async def fetch_token(self, data: dict, proxy: str, proxy_auth: aiohttp.BasicAuth) -> str: async with aiohttp.ClientSession() as session: capmonster = HCaptchaTask("") task_id = capmonster.create_task("https://discord.com/", data['captcha_sitekey']) result = capmonster.join_task_result(task_id) return result.get("gRecaptchaResponse") I've tried changing the URL in the create_task method to "https://discord.com/channels/@me", and also passing a user_agent with rqdata. However, neither of these solutions seem to have resolved the issue. I was expecting the code to successfully solve the captcha and return the gRecaptchaResponse token, but instead I'm still receiving the selfcord.errors.CaptchaRequired: 400 Bad Request (error code: -1): Captcha required error. I'm not sure what's causing the error or how to solve it. Any help would be appreciated. Thanks in advance!
{ "language": "en", "url": "https://stackoverflow.com/questions/75640131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Python encounters an error upon importing [RPA.Browser.Selenium] library I'm trying to setup my Robotframework on Visual Studio, however I'm encountering an error upon running it. It seems the error is unable to detect the imported library. Your response is highly appreciated. Thank you so much. Screenshot:
{ "language": "en", "url": "https://stackoverflow.com/questions/75640132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: JwtStrategy class is not working for cognito access token I'm trying to validate the user access tokens by running them through my JwtStrategy class in nestjs. Besides my approach I've tried quite a few of the tutorial ones with no luck. I keep getting a 401 even when using auth0. My LocalStrategy works fine and outputs the accesstoken as well as other expected information. Here is my JwtStrategy class: import { Injectable, UnauthorizedException } from '@nestjs/common'; import { PassportStrategy } from '@nestjs/passport'; import { Strategy, ExtractJwt } from 'passport-jwt'; import { passportJwtSecret } from 'jwks-rsa'; @Injectable() export class JwtStrategy extends PassportStrategy(Strategy) { constructor() { super({ secretOrKeyProvider: passportJwtSecret({ cache: true, rateLimit: true, jwksRequestsPerMinute: 3, jwksUri: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.AWS_COGNITO_USER_POOL_ID}/.well-known/jwks.json`, }), jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(), audience: process.env.AWS_COGNITO_CLIENT_ID, issuer: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.AWS_COGNITO_USER_POOL_ID}`, algorithms: ['RS256'], }); } validate(payload: any) { if (!payload) { throw new UnauthorizedException(); } return payload; } } Here is my AuthModule: import { Module } from '@nestjs/common'; import { JwtStrategy } from './jwt.strategy'; import { LocalStrategy } from './local.strategy'; import { PassportModule } from '@nestjs/passport'; @Module({ imports: [PassportModule.register({ defaultStrategy: 'jwt' })], providers: [JwtStrategy, LocalStrategy], exports: [], }) export class AuthModule {} This is the endpoint I'm trying to hit: import { Controller, Get, UseGuards } from '@nestjs/common'; import { AppService } from './app.service'; import { AuthGuard } from '@nestjs/passport'; @Controller() export class AppController { constructor(private readonly appService: AppService) {} @Get() @UseGuards(AuthGuard('jwt')) getHello(): string { return this.appService.getHello(); } } This is my jwt auth guard: import { Injectable } from '@nestjs/common'; import { AuthGuard } from '@nestjs/passport'; @Injectable() export class JwtAuthGuard extends AuthGuard('jwt') {} * *Removing the @UseGuards(AuthGuard('jwt')) line in the intended endpoint yields the expected result *I tried using tokens/implementations from both cognito and auth0 so I don't believe this is an idp issue *I verified that this https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.AWS_COGNITO_USER_POOL_ID}/.well-known/jwks.json, })` indeed produced the correct JSON result *I have double and triple checked the env variables used in the JwtStrategy class *The access token is a valid jwt token
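One thing worth checking, offered as a hedged sketch rather than a confirmed fix: Cognito access tokens carry a client_id claim rather than an aud claim (only ID tokens contain aud), so an audience option configured as above can reject an otherwise valid access token. A variant that drops the audience option and compares client_id inside validate() would look like this (imports and env variables as in the question):

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor() {
    super({
      secretOrKeyProvider: passportJwtSecret({
        cache: true,
        rateLimit: true,
        jwksRequestsPerMinute: 3,
        jwksUri: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.AWS_COGNITO_USER_POOL_ID}/.well-known/jwks.json`,
      }),
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      issuer: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.AWS_COGNITO_USER_POOL_ID}`,
      algorithms: ['RS256'],
      // no `audience` option: access tokens have no aud claim
    });
  }

  validate(payload: any) {
    // access tokens identify the app through client_id instead of aud
    if (payload?.client_id !== process.env.AWS_COGNITO_CLIENT_ID) {
      throw new UnauthorizedException();
    }
    return payload;
  }
}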
{ "language": "en", "url": "https://stackoverflow.com/questions/75640133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Remove Lightbox in Blogger? The code doesn't exist So I want to remove the CSS Lightbox from my Blogger site, but when I search for the code it's not there. I deleted it previously, but I don't know why it's back again (screenshot of the lightbox). I've been looking for it on Google but found nothing. I just want the lightbox to disappear so images display immediately, because the Lightbox itself has a problem where it doesn't display the image, only a blank black overlay.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Extracting the hour in 24h format from a dataset Hi, I'm trying to extract the hour, in 24-hour format, from the date column of the following dataset (Chicago crime), and it doesn't seem to work: I seem to be getting only AM values. df["timestamp"] = pd.to_datetime(df["date"]) # Convert timestamp to AM/PM format df["timestamp"] = df["timestamp"].apply(lambda x: x.strftime("%Y-%m-%d %I:%M:%S %p")) df['hour'] = df['timestamp'].dt.hour The hour part runs from 1-12, and the timestamps in the date column don't carry AM/PM, only UTC. A: Using the following version you can extract the hour from the sample without needing to convert the datatype of the entire column. from datetime import datetime sample = "2019-08-29 06:40:00 UTC" parsed_sample = datetime.strptime(sample, "%Y-%m-%d %H:%M:%S %Z") hour = parsed_sample.hour Here %Z represents the time zone.
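If the whole column needs the hour rather than a single sample, a vectorized sketch along these lines (assuming the column is named date and holds strings like the sample above) keeps the values in 24-hour form, because .dt.hour is taken before any AM/PM string formatting:

import pandas as pd

# parse once; .dt.hour returns 0-23 regardless of how the timestamp
# is later formatted for display
df["timestamp"] = pd.to_datetime(df["date"], utc=True)
df["hour"] = df["timestamp"].dt.hour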
{ "language": "en", "url": "https://stackoverflow.com/questions/75640139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Karate framework does not delete json key my requests is created using csv file I need to modify/delete few values in a json request, I have a function that should make Modify/Delete the Key Values however the delete obj[key] generates error Feature: Validate TCs Background: Given url 'https://'+env_apiHost Given path '/abc/pqr-stu/v1/wxy-zzzz' Scenario Outline: Validate Functional Data * def CHK_NULL = 'CHK_NULL' * def CHK_MAND = 'CHK_MAND' * def modify_pop_key = """ function(obj) { for(var key in obj) { if (typeof obj[key] === 'object') { modify_pop_key(obj[key]) }else if (obj[key] === CHK_MAND) { delete obj[key] }else if (obj[key] === CHK_NULL) { obj[key] = '' } } } """ * def request_string = """ { "Key1": "<key1>", "key2": "<key2>", "key3": "<key3>", "key4": "<key4>" } """ * def req1 = call modify_pop_key request_string And request request_string When method post Then status 200 * print response Examples: read('classpath:data/jsnVals.csv')| data/jsnVals.csv | key1 | key2 | key3 | key4 | | HAPPY | Val_12 | Val_13 | Val_14 | | TC_001 |CHK_MAND| Val_23 | Val_24 | | TC_002 |CHK_NULL| Val_33 | Val_34 | | TC_003 | Val_42 | Val_43 | Val_44 | Error: * def req1 = call modify_pop_key request_string org.graalvm.polyglot.PolyglotException src/test/java/xxxxx/feature/pqr_xyx.feature:58
{ "language": "en", "url": "https://stackoverflow.com/questions/75640140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to 'git cat-file' using exclusively Python? I am trying to read out commit hashes, alongside their parent hashes, in order to build a commit graph from the .git/ directory. Currently, I have something like: import zlib import os ... for current, subs, files in os.walk('.'): for filename in files: # in format ##/#{38} path = os.path.join(current, filename)[2:] # 'info/' and 'pack/' exist # don't worry about packed files # assume empty (excluding . and ..) with open(path, 'rb') as file: # returns bytes object # assuming UTF-8 encoding (default) vs. legacy # https://git-scm.com/docs/git-commit#_discussion # .decode() also defaults to utf-8 print(zlib.decompress(file.read()).decode()) However, I am noticing that this is not what I want. The above code is meant to eventually go through all of .git/objects/ and parse into a list the commits and their parents to help me build the commit graph. As of right now, it seems like the zlib decompression is not producing output the way I like. I have read the relevant sections in Pro Git, specifically: http://git-scm.com/book/en/v2/Git-Internals-Git-Objects , which had instructions for Ruby. How can I accomplish this in Python? A: The linked documentation seems pretty clear. Git first constructs a header which starts by identifying the type of object — in this case, a blob. To that first part of the header, Git adds a space followed by the size in bytes of the content, and adding a final null byte: Ruby is very similar to Python, so when the documentation shows: >> content = "what is up, doc?" => "what is up, doc?" >> header = "blob #{content.bytesize}\0" => "blob 16\u0000" >> store = header + content => "blob 16\u0000what is up, doc?" The Python code is almost identical: >>> content = "what is up, doc?" >>> header = f"blob {len(content)}\0" >>> blob = header + content >>> blob 'blob 16\x00what is up, doc?' As both the prose and the code are showing us, when you read data from an object file you need to split it into a header and content. Something like: with open(path, 'rb') as fd: data = zlib.decompress(fd.read()) header, content = data.split(b'\0', 1) if header.startswith(b'commit'): print('found a commit in', path)
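Building on that header/content split, a sketch of pulling the parent hashes out of a commit object (which is what the commit graph ultimately needs) only has to read the header lines of the content; commit objects list one `parent <sha>` line per parent, followed by a blank line and the message:

import zlib

def parse_commit(path):
    """Return (is_commit, parent_hashes) for one loose object file."""
    with open(path, 'rb') as fd:
        data = zlib.decompress(fd.read())
    header, content = data.split(b'\0', 1)
    if not header.startswith(b'commit'):
        return False, []
    parents = []
    for line in content.decode('utf-8', errors='replace').splitlines():
        if line == '':                      # blank line ends the headers
            break
        if line.startswith('parent '):
            parents.append(line.split(' ', 1)[1])
    return True, parents

Packed objects under .git/objects/pack/ are a different format and would still need separate handling, as the question's comment already notes.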
{ "language": "en", "url": "https://stackoverflow.com/questions/75640141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OpenAI converting API code from GPT-3 to chatGPT-3.5 Below is my working code for the GPT-3 API. I am having trouble converting it to work with chatGPT-3.5. <?php include('../config/config.php'); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Chatbot</title> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/font/bootstrap-icons.css"> <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-GLhlTQ8iRABdZLl6O3oVMWSktQOp6b7In1Zl3/Jr59b6EGGoI1aFkw7cmDA6j6gD" crossorigin="anonymous"> <link href="style.css" rel="stylesheet"> </head> <body> <div class="container py-5"> <h1 class="mb-5 text-center"> <div class="logo"> <img src="/images/Logo-PocketAI.svg" height="80" width="210" aria-label="PocketAI.Online Logo" title="PocketAI.Online Logo" alt="SPocketAI.Online Logo" class="img-fluid"> </div> </h1> <div class="form-floating mb-3"> <select class="form-select" id="tab-select" aria-label="Select your purpose"> <option value="exam" selected>Exam</option> <option value="feedback">Feedback</option> </select> <label for="tab-select">Select your purpose:</label> </div> <div class="input-group mb-3"> <div class="form-floating"> <textarea class="form-control" placeholder="Enter your question or comment here" id="prompt"></textarea> <label for="prompt">Enter your question or comment here</label> </div> <div class="input-group-append username w-100 mt-3 mb-4"> <button class="btn btn-outline-primary w-100" type="button" id="send-button">Send</button> </div> </div> <div id="output" class="mb-3" style="height: 300px; overflow: auto; border: 1px solid lightgray; padding: 10px;"></div> <div id="exam-instructions" class="mb-3" style="display: block;"> <h3>Exam</h3> <p>PocketAI can create multiple choice and true false questions in a format that enables import into Brightspace D2L quizzes using Respondus. Place PocketAI output into a Word document before importing with Respondus. 
Ask PocketAI questions like the following: <br> <br> Create 3 multiple choice questions about carbohydrates for a freshman Nutrition online college course.<br> Create 2 true false questions about business for a sophomore Business face to face college course.</p> </div> <div id="feedback-instructions" class="mb-3" style="display: none;"> <h3>Feedback</h3> <p>Enter text to receive writing feedback.</p> </div> </div> <script> const previousPrompts = []; const userName = "<strong>User</strong>"; const chatbotName = "<strong>PocketAI</strong>"; const selectDropdown = document.getElementById("tab-select"); selectDropdown.addEventListener("change", function() { const activeTabId = this.value; // hide all instruction sections document.querySelectorAll("[id$='-instructions']").forEach(function(instructionSection) { instructionSection.style.display = "none"; }); // show the instruction section for the active tab document.getElementById(`${activeTabId}-instructions`).style.display = "block"; }); document.getElementById("send-button").addEventListener("click", function() { const prompt = document.getElementById("prompt").value; const activeTabId = selectDropdown.value; const endpoint = "https://api.openai.com/v1/completions"; const apiKey = "<?=$OPEN_AI_KEY;?>"; document.getElementById("send-button").innerHTML = '<span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Sending...'; let promptText = ""; switch (activeTabId) { case "exam": promptText = "Create quiz questions in the following format: Begin each question with a number followed by a period, and then include the question wording. For each question, include four answer choices listed as letters (A, B, C, D) followed by a period and at least one space before the answer wording. Designate the correct answer by placing an asterisk (*) directly in front of the answer letter (do not put a space between the asterisk and the answer choice). Place the asterisk in front of the answer letter, only the front. It is important that correct answers are identified. Don't make up answers, only select factual answers. For example formatting (don't use this specific example), \"1. What is the recommended daily intake of dietary fiber? A. 10 grams B. 25 grams *C. 50 grams D. 75 grams\". Format true false questions the same way. If you are unsure of the correct answer, don't create the question. Every quiz question and answer must be 100% correct and factual. Do not make up answers. All answers must be correct."; break; case "feedback": promptText = "Can you provide feedback on the writing, grammar, sentence structure, punctuation, and style of this student's paper? The paper should be analyzed for its strengths and weaknesses in terms of written communication. Please provide suggestions for improvement and examples to help the student understand how to make the writing better. The feedback should be specific and provide actionable steps that the student can take to improve their writing skills. 
Please include at least three examples of areas that could be improved and specific suggestions for how to improve them, such as correcting grammar errors, restructuring sentences, or improving the use of punctuation."; break; } const requestData = { prompt: previousPrompts.join("\n") + promptText + "\n" + prompt, max_tokens: 400, model: "text-davinci-003", n: 1, stop: "", temperature: 0.5, top_p: 0.0, frequency_penalty: 0.0, presence_penalty: 0 }; const requestOptions = { method: "POST", headers: { "Content-Type": "application/json", "Authorization": `Bearer ${apiKey}`, }, body: JSON.stringify(requestData), }; fetch(endpoint, requestOptions) .then(response => response.json()) .then(data => { const reply = data.choices[0].text; // Add the user message to the chat history const userMessage = `<div class="message-container"> <div class="username">${userName}:&nbsp;</div> <div class="user-message">${prompt}</div> </div>`; document.getElementById("output").innerHTML += userMessage; const chatbotMessage = `<div class="message-container"> <div class="username">${chatbotName}:&nbsp;</div> <div class="chatbot-message" style="white-space: pre-wrap">${reply}<i class="bi bi-clipboard-check copy-button" data-bs-toggle="tooltip" data-bs-placement="bottom" title="Copy to clipboard" data-text="${reply}" style="cursor: pointer;"></i></div> </div>`; document.getElementById("output").innerHTML += chatbotMessage; // Add an event listener to each "Copy to Clipboard" button document.addEventListener("click", function(event) { if (event.target.classList.contains("copy-button")) { const textToCopy = event.target.dataset.text; navigator.clipboard.writeText(textToCopy); } }); // Scroll to the bottom of the chat history document.getElementById("output").scrollTop = document.getElementById("output").scrollHeight; // Clear the user input field document.getElementById("prompt").value = ""; previousPrompts.push(prompt); // Clear the spinner and show the "Send" button again document.getElementById("send-button").innerHTML = 'Send'; }) .catch(error => { console.error(error); // Hide the spinner and show the "Send" button again document.getElementById("send-button").innerHTML = 'Send'; }); }); document.getElementById("prompt").addEventListener("keydown", function(event) { if (event.keyCode === 13) { event.preventDefault(); document.getElementById("send-button"). click(); } }); </script> </div> </div> <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-w76AqPfDkMBDXo30jS1Sgez6pr3x5MlQ1ZAGC+nuZB+EYdgRZgiwxhTBTkF7CXvN" crossorigin="anonymous"></script> </body> </html> I have read https://openai.com/blog/introducing-chatgpt-and-whisper-apis and referred to this - OpenAI ChatGPT (gpt-3.5-turbo) API: How to access the message content? but still can't make it work. I've tried changing the requestData to this, but no luck: const requestData = { model: "gpt-3.5-turbo", messages: [ { role: "user", content: prompt } ], max_tokens: 400, temperature: 0.5, top_p: 1, frequency_penalty: 0, presence_penalty: 0 }; Any help will be greatly appreciated!
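For what it's worth, the chat models also use a different endpoint and response shape from text-davinci-003: requests go to /v1/chat/completions and the reply text lives in choices[0].message.content rather than choices[0].text. A minimal sketch of those two changes applied to the code above (requestOptions and the rest of the rendering logic kept exactly as in the question) is:

const endpoint = "https://api.openai.com/v1/chat/completions";

const requestData = {
  model: "gpt-3.5-turbo",
  messages: [
    // the existing promptText can be sent as a system message,
    // the user's input as a user message
    { role: "system", content: promptText },
    { role: "user", content: prompt }
  ],
  max_tokens: 400,
  temperature: 0.5
};

fetch(endpoint, requestOptions)
  .then(response => response.json())
  .then(data => {
    const reply = data.choices[0].message.content;  // was data.choices[0].text
    // ...rest of the rendering code unchanged
  });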
{ "language": "en", "url": "https://stackoverflow.com/questions/75640144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to create an app that will continue to run until uninstalled? By decompiling an .apk file I created an app (.apk), but I want it to keep running until the user uninstalls it, and I don't know how to do that. I decompiled my .apk again and now I want to modify it into an "app that will continue to run until uninstalled". I also want to keep it running even when the user turns off the phone.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Numpy error after applying pd.convert_dtypes() to dataframe This python code snippet works the way I want it to. import pandas as pd import numpy as np dfn = pd.read_csv("dirtydata.csv") bike_sales_ds = dfn.copy() # Create new age column with general age range groups age_conditions = [ (bike_sales_ds['Age'] <= 30), (bike_sales_ds['Age'] >= 31) & (bike_sales_ds['Age'] <= 40), (bike_sales_ds['Age'] >= 41) & (bike_sales_ds['Age'] <= 55), (bike_sales_ds['Age'] >= 56) & (bike_sales_ds['Age'] <= 69), (bike_sales_ds['Age'] >= 70) ] age_choices = ['30 or Less', '31 to 40', '41 to 55', '56 to 69', '70 or Older'] bike_sales_ds['Age_Range'] = np.select(age_conditions, age_choices, default='error') I tried to add the .convert_dtypes() method as follows and now get this error. df = pd.read_csv("dirtydata.csv") dfn = df.convert_dtypes() bike_sales_ds = dfn.copy() Traceback (most recent call last): File "C:\Users\dmcfa\PycharmProjects\Bike Sales Data Cleaning 01\main.py", line 43, in bike_sales_ds['Age_Range'] = np.select(age_conditions, age_choices, default=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<array_function internals>", line 200, in select File "C:\Users\dmcfa\PycharmProjects\Bike Sales Data Cleaning 01\venv\Lib\site-packages\numpy\lib\function_base.py", line 845, in select raise TypeError( TypeError: invalid entry 0 in condlist: should be boolean ndarray The part I don't understand is that df.info() would imply that convert_dtypes() didn't change the type of the Age column. It was an Int64 before the method and after. Setting convert_integer to false fixes the problem but I don't understand why it should matter. Can someone explain what is going on behind the scenes in numpy or is this something to do with the pandas inference rules? A: Your code works fine for me with my input dataframe. However, you can use pd.cut to check if the problem persists: age_conditions = [0, 30, 40, 55, 69, np.inf] age_choices = ['30 or Less', '31 to 40', '41 to 55', '56 to 69', '70 or Older'] bike_sales_ds['Age_Range'] = pd.cut(bike_sales_ds['Age'], bins=age_conditions, labels=age_choices) Output: >>> bike_sales_ds Age Age_Range 0 87 70 or Older 1 25 30 or Less 2 70 70 or Older 3 55 41 to 55 4 33 31 to 40 .. ... ... 95 89 70 or Older 96 79 70 or Older 97 67 56 to 69 98 71 70 or Older 99 78 70 or Older [100 rows x 2 columns] Input: import pandas as pd import numpy as np np.random.seed(2023) bike_sales_ds = pd.DataFrame({'Age': np.random.randint(0, 100, 100)})
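As for the why: after convert_dtypes() the Age column uses the nullable Int64 dtype, so comparisons such as bike_sales_ds['Age'] <= 30 produce pandas' nullable "boolean" dtype; when numpy converts those to arrays it can end up with an object array rather than the plain bool ndarray that np.select requires, hence the "should be boolean ndarray" error. A hedged sketch of keeping np.select is to force the conditions back to numpy booleans:

import numpy as np

# Convert each nullable-boolean condition to a plain numpy bool array;
# missing ages (pd.NA) are treated as not matching the condition.
age_conditions = [
    c.fillna(False).to_numpy(dtype=bool) for c in [
        bike_sales_ds['Age'] <= 30,
        (bike_sales_ds['Age'] >= 31) & (bike_sales_ds['Age'] <= 40),
        (bike_sales_ds['Age'] >= 41) & (bike_sales_ds['Age'] <= 55),
        (bike_sales_ds['Age'] >= 56) & (bike_sales_ds['Age'] <= 69),
        bike_sales_ds['Age'] >= 70,
    ]
]
bike_sales_ds['Age_Range'] = np.select(age_conditions, age_choices, default='error')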
{ "language": "en", "url": "https://stackoverflow.com/questions/75640150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to reduce CPU usage in a listen-and-handle situation? I have a process that needs to listen and handle data. To be specific, I need to listen on two shared-memory channels, and whenever new data arrives I need to handle it. Meanwhile, I don't want one channel to block the other, so I let them run in separate threads. Here is demo code: void RunFirst() { while (true) { std::string s; shm1_->Recv(&s); // handle 1 } } void RunSecond() { while (true) { std::string s; shm2_->Recv(&s); // handle 2 } } void start() { std::thread t(RunFirst); RunSecond(); } I have two shm objects, which are two shared-memory receivers. They loop to check whether there is data. This runs well: it handles data whenever it comes and one channel never blocks the other. But the problem is the high CPU usage: it keeps two CPU cores at 100%. Is there any method to decrease the CPU usage while keeping the non-blocking behaviour?
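Since Recv appears to spin when no data is available, one common pattern (sketched here with a plain in-process queue, because the internals of the shm receiver are not shown in the question) is to have the reader block on a condition variable that the writer signals, so an idle thread sleeps instead of polling:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Minimal sketch of a blocking channel: the reader waits on a condition
// variable instead of spinning, so an idle thread uses almost no CPU.
class Channel {
public:
    void Send(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();   // wake the waiting reader
    }

    // Blocks until a message is available.
    std::string Recv() {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
};

For data crossing process boundaries through shared memory, the same idea needs a process-shared primitive (for example a POSIX semaphore or a process-shared mutex/condvar placed in the shared segment) so the writer can wake the reader; whether the shm library already exposes such a blocking mode is an open question here.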
{ "language": "en", "url": "https://stackoverflow.com/questions/75640151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Child class with different signatures, how to reasonable resolve it without breaking the code? I am implementing machine learning algorithms from scratch using python. I have a base class called BaseEstimator with the following structure: from __future__ import annotations from typing import Optional, TypeVar import numpy as np from abc import ABC, abstractmethod T = TypeVar("T", np.ndarray, torch.Tensor) class BaseEstimator(ABC): """Base Abstract Class for Estimators.""" @abstractmethod def fit(self, X: T, y: Optional[T] = None) -> BaseEstimator: """Fit the model according to the given training data. Parameters ---------- X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples,) or (n_samples, n_outputs), optional Target relative to X for classification or regression; None for unsupervised learning. Returns ------- self : object Returns self. """ @abstractmethod def predict(self, X: T) -> T: """Predict class labels for samples in X. Parameters ---------- X : array-like, shape (n_samples, n_features) Samples. Returns ------- C : array, shape (n_samples,) Predicted class label per sample. """ class KMeans(BaseEstimator): def fit(X: T) -> BaseEstimator: ... def predict(X: T) -> T: ... class LogisticRegression(BaseEstimator): def fit(X: T, y: Optional[T] = None) -> BaseEstimator: ... def predict(X: T) -> T: ... Now when I implemented the base class, I did not plan properly, some algorithms such as KMeans are unsupervised and hence do not need y at all in fit. Now a quick fix I thought of is to type hint y as Optional, so that it can be None, is that okay? In that case, in KMeans' fit method, I will also have to include the y: Optional[T] = None, which will never be used.
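Keeping y optional in the base signature is the conventional compromise (it is what scikit-learn itself does): unsupervised estimators accept y for interface compatibility and simply ignore it, while supervised ones require it at runtime. A short sketch of that pattern applied to the classes above:

class KMeans(BaseEstimator):
    def fit(self, X: T, y: Optional[T] = None) -> "KMeans":
        # y is accepted only for interface compatibility and deliberately ignored
        ...
        return self

    def predict(self, X: T) -> T:
        ...

class LogisticRegression(BaseEstimator):
    def fit(self, X: T, y: Optional[T] = None) -> "LogisticRegression":
        if y is None:
            raise ValueError("LogisticRegression requires y")
        ...
        return self

    def predict(self, X: T) -> T:
        ...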
{ "language": "en", "url": "https://stackoverflow.com/questions/75640152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to uncheck items in a custom MultiSelectListPreference dialog I have created a custom MultiSelectListPreference where I can handle each item from the list click and save it to shared preferences. If the user selects item 0 or 1 from the list, then I want to unselect all other items from the list. However, when I call listView.setItemChecked(i, false), it doesn't change anything on the list. I have also tried notifyDataSetChanged and invalidate, but still no luck. Here is the code for the custom MultiSelectListPreference: import android.content.Context; import android.content.DialogInterface; import android.util.AttributeSet; import android.util.Log; import android.util.SparseBooleanArray; import android.widget.ArrayAdapter; import android.widget.ListView; import androidx.appcompat.app.AlertDialog; import androidx.preference.MultiSelectListPreference; import java.util.HashSet; import java.util.Set; public class MultiSelectListPreferenceCustom extends MultiSelectListPreference { private final String TAG = "Custom Class Dialog"; public MultiSelectListPreferenceCustom(Context context, AttributeSet attrs) { super(context, attrs); } @Override protected void onClick() { AlertDialog.Builder builder = new AlertDialog.Builder(getContext()); builder.setTitle(getDialogTitle()); builder.setMultiChoiceItems(getEntries(), getSelectedItems(), new DialogInterface.OnMultiChoiceClickListener() { @Override public void onClick(DialogInterface dialog, int which, boolean isChecked) { Log.d(TAG, "onClick: Which Button == "+which+" "+isChecked); ListView listView = ((AlertDialog) dialog).getListView(); Log.d(TAG, "onClick: Get Item Count = "+listView.getCount()); if (which == 0 || which == 1) { // If "Public" or "All Friends" is selected, uncheck all other items for (int i = 0; i < listView.getCount(); i++) { if (i != which) { listView.setItemChecked(i, false); } } } else { // If any other item is selected, uncheck "Public" and "All Friends" listView.setItemChecked(0, false); listView.setItemChecked(1, false); } ArrayAdapter adapter = (ArrayAdapter) listView.getAdapter(); adapter.notifyDataSetChanged(); Log.d(TAG, "onClick: list check = "+listView.getCheckedItemPosition()); } }); builder.setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // Save selected items ListView listView = ((AlertDialog) dialog).getListView(); SparseBooleanArray checkedPositions = listView.getCheckedItemPositions(); Set<String> values = new HashSet<>(); for (int i = 0; i < checkedPositions.size(); i++) { if (checkedPositions.valueAt(i)) { int position = checkedPositions.keyAt(i); values.add(getEntryValues()[position].toString()); } } setValues(values); } }); builder.setNegativeButton(android.R.string.cancel, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // Do nothing } }); builder.show(); } } I have also tried adding the following code after calling listView.setItemChecked(i, false): ((ArrayAdapter) listView.getAdapter()).notifyDataSetChanged(); listView.invalidateViews(); But still, the items are not getting unchecked. Can anyone suggest what could be the issue here? Any help would be appreciated.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Daily budget of google ads doesn't add correctly in Google Looker Studio I am trying to visualize daily budgets in a Google Looker Studio table for google ads. I can see the daily budget for each day set to $1920.14 for the last week, but the sum doesn't seem right. Please see the table below. Also, how do I get/calculate the budget for a specific period?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: XOR implementation using SimpleRNN Keras - not able to acheive 100% accuracy easily I am trying to get 100% accuracy for XOR implementation using SimpleRNN from keras. However, the accuracy is only able to reach upto 75% most of the time and very rarely reaches 100% here is the standard code available over the internet & chatGPT from keras.models import Sequential from keras.layers import SimpleRNN, Dense import matplotlib.pyplot as plt # Define the SimpleRNN model model = Sequential() model.add(SimpleRNN(units=2, input_shape=(2, 1), activation='sigmoid')) model.add(Dense(units=1, activation='sigmoid')) # Compile the model model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # Define the XOR dataset X = [[[0], [0]], [[0], [1]], [[1], [0]], [[1], [1]]] y = [[0], [1], [1], [0]] # Train the model and record the training history history = model.fit(X, y, epochs=1000, verbose=0) # Plot the training loss over epochs plt.plot(history.history['loss']) plt.title('Training Loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.show() == this one does behave randomly -- and reaching 100% only very few times. Is there any standard method which can ensure I get 100% every time? ==================================================================== Trials: added a custom absolute layer which gave some good results - but not consistent from keras.models import Sequential from keras.layers import SimpleRNN, Dense import tensorflow as tf # features X = np.array([[0,0],[0,1],[1,0],[1,1]]) # expected values y = np.array([[0], [1], [1], [0]]) print(f'training data shape: {X.shape}') print(f'targets data shape: {y.shape}') # Define a network as a linear stack of layers model = Sequential() # Add a recurrent layer with 2 units model.add(SimpleRNN(1, input_shape=(2, 1), activation = "tanh")) # Add the output layer with a tanh activation model.add(Dense(1, activation='tanh')) def custom_layer(tensor): return tf.abs(tensor) model.add(tf.keras.layers.Lambda(custom_layer, name="lambda_layer")) model.compile(optimizer='Adadelta', loss='mean_squared_error', metrics=['acc']) Trial #2 -- used fixed initialized weights - 100% results every time Add a recurrent layer with 2 units model.add(SimpleRNN(1, input_shape=(2, 1), activation = "tanh",kernel_initializer=Constant(value=-0.7), recurrent_initializer=Constant(value=-1.0), bias_initializer=Constant(value=0.0)))
{ "language": "en", "url": "https://stackoverflow.com/questions/75640158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Django showing error 'constraints' refers to the joined field I have two models Product and Cart. Product model has maximum_order_quantity. While updating quantity in cart, I'll have to check whether quantity is greater than maximum_order_quantityat database level. For that am comparing quantity with maximum_order_quantity in Cart Model But it throws an error when I try to migrate cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'. Below are my models class Products(models.Model): category = models.ForeignKey( Category, on_delete=models.CASCADE, related_name="products" ) product_name = models.CharField(max_length=50, unique=True) base_price = models.IntegerField() product_image = models.ImageField( upload_to="photos/products", null=True, blank=True ) stock = models.IntegerField(validators=[MinValueValidator(0)]) maximum_order_quantity = models.IntegerField(null=True, blank=True) ) class CartItems(models.Model): cart = models.ForeignKey(Cart, on_delete=models.CASCADE) product = models.ForeignKey(Products, on_delete=models.CASCADE) quantity = models.IntegerField() class Meta: verbose_name_plural = "Cart Items" constraints = [ models.CheckConstraint( check=models.Q(quantity__gt=models.F("product__maximum_order_quantity")), name="Quantity cannot be more than maximum order quantity" ) ] #Error SystemCheckError: System check identified some issues: ERRORS: cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'.
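Database CheckConstraints can only reference fields on the model's own table, which is why the joined lookup is rejected. A hedged sketch of the usual fallback is to validate at the model level in clean() (called by full_clean() and by ModelForm validation) instead:

from django.core.exceptions import ValidationError

class CartItems(models.Model):
    cart = models.ForeignKey(Cart, on_delete=models.CASCADE)
    product = models.ForeignKey(Products, on_delete=models.CASCADE)
    quantity = models.IntegerField()

    class Meta:
        verbose_name_plural = "Cart Items"

    def clean(self):
        max_qty = self.product.maximum_order_quantity
        if max_qty is not None and self.quantity > max_qty:
            raise ValidationError(
                "Quantity cannot be more than the product's maximum order quantity."
            )

This check runs in Python rather than in the database; a true cross-table check at the database level would need something like a trigger, which Django's CheckConstraint cannot express.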
{ "language": "en", "url": "https://stackoverflow.com/questions/75640162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: React router to redirect new page using a button onClick I'm trying to allow the user to click on the "Sign up" button and be redirected to a completely new page that I can begin styling for a small project, but I can't seem to figure out the react-router-dom routing correctly. Inside my App.js the components are rendered on the screen, and within the Signup component there are two buttons, one "Sign up" and one "Login". My thinking is that when either button is clicked it should take the user to a new page. However, when clicked I see the correct route in the address bar, "http://localhost:5173/signup", but I do not see a new page, which is frustrating because I suspect I'm making the easiest mistake but I can't see it. I followed this code-along but I still couldn't figure out how to replicate it in my own use case for the life of me. Here is my repo: link A: I opened the code you attached to your question; you need a change here. Since you have not defined any path in a Route, how would the router be able to navigate there? The following might help you: <Routes> <Route path="/register" element={<Register />}></Route> <Route path="/signup" element={<SignUp />}></Route> </Routes> Some changes are required here too: <button onClick={() => navigate("/register")}></button> The rest of your code is fine. Happy day!
{ "language": "en", "url": "https://stackoverflow.com/questions/75640164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to use golang exec to download a nvm version whenever I run this function let's say with version 14 or any version I get an error of exit with status 3. I would Ideally want a solution that can run on windows, mac, linux. thank you func InstallNodeVersionWithNVM(version string) { fmt.Printf("\ninstalling node version: %s", version) nodeVersion := version var cmd *exec.Cmd if runtime.GOOS == "windows" { // Windows command to install nvm cmd = exec.Command("cmd", "/C", "powershell", "-Command", "iex (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/npocmaka/batch.scripts/master/nvm/nvm_install.ps1'); nvm install 14.17.6") } else { // Unix-like command to install nvm cmd = exec.Command("bash", "-l", "-c", ". $HOME/.nvm/nvm.sh && nvm install "+version+" && nvm use "+version) } // Specify the full path to the nvm executable cmd.Path = "/bin/bash" cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr // Run the cmd and check for errors err := cmd.Run() if err != nil { fmt.Printf("\n error install node version %s", nodeVersion) fmt.Println(err) return } fmt.Println(color.HiGreenString("success"), "NVM version:", nodeVersion, "successfully!") }
{ "language": "en", "url": "https://stackoverflow.com/questions/75640165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Parse error finding the lowest value in an array and its index [X, i] = lowest([19, 24, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 32]) error: parse error near line 41 of file C:\Users\carlo\Downloads\octave-7.2.0-w64 lowest.m syntax error This is my code: function [X,i] = lowest(A) msg = 'error' if size(A) ~= [1,length(A)] || size(A) ~= [length(A),1] disp(msg) elseif size(A) == [1,length(A)] || size(A) == [length(A),1] for j = 1:length(A) if A(1,j) < A(1,j+1) || A(1,j) < A(1,j-1) X = A(1,j); end end for k = 1:length(A) if A(1,k) == X i = k end end
{ "language": "en", "url": "https://stackoverflow.com/questions/75640169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: sns sends msg to an sqs but not the other I have 2 sqs queues subscribed to the same sns topic (all 3 resources are in the same account). One of the sqs queues receives msgs from the sns topic but the other sqs never does. I checked that the sqs access policies are exactly the same. What are some ideas to debug the issue? A: There are a few things you can check to debug this issue. * *Check that the subscription for the second SQS queue is active. You can do this by navigating to the SNS topic in the AWS console, selecting the "Subscriptions" tab, and verifying that the subscription is listed and marked as "Confirmed". If it's not confirmed, you may need to follow the confirmation link that was sent to the email address associated with the subscription. *Make sure that the second SQS queue has the correct permissions and configuration settings to receive messages from SNS. Specifically, check that the SQS queue has a policy that allows the SNS topic to send messages to it. You can also check the queue's settings to ensure that it is configured to receive messages. *Check the access policy for the SNS topic to ensure that it allows the second SQS queue to receive messages. Specifically, look for any "Deny" statements in the policy that may be blocking the second SQS queue from receiving messages. *Confirm that the messages are actually being published to the SNS topic. You can do this by navigating to the SNS topic in the AWS console, selecting the "Monitoring" tab, and reviewing the message delivery metrics. *Make sure that the second SQS queue's message visibility timeout is set high enough to allow it to process messages. If the timeout is set too low, the queue may not have enough time to process messages before they become invisible and are picked up by the first SQS queue. *You can use CloudWatch logs to see if there are any errors or issues occurring with the second SQS queue. Check the logs for any error messages or warnings that may be related to the issue.
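For points 2 and 3, a hedged example of what the second queue's access policy usually needs to contain so the topic can deliver to it is shown below; the account ID, region, and resource names are placeholders, not values from the question:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:second-queue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:my-topic" }
      }
    }
  ]
}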
{ "language": "en", "url": "https://stackoverflow.com/questions/75640171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Data structure of a POS application using Firebase and React Hi, I am building a POS system using React JS and Firebase, and now I want to get individual product sales during a specific period of time. How should I structure the data?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Need values render to view as state changes, but it renders all at once when receiving stream So I have a react app, added the openAI api and receiving results works. Now I also tried the stream function via server-sent-events and added a button which calls new EventSource(url). On the server the data gets fetched and I receive the response in chunks, but all in one single string, which I have so split then. These chunks gets send back to the client where each chunk gets saved to the redux store. On my app I've set useSelector() to get the chunks from state. There is a useEffect(), which listens to the chunk state and append the chunks (strings) to a useRef(). The value of useRef() gets assigned to a useState(). When I try to render the value of useRef() or useState() to the view the value gets rendered as a whole. Is there a way to render it word by word? When I log it to the console I can see on each render the words appending until its finished, but on the app I just see the endresult instant, not word by word.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't Override Default Font Globally on MUI Theme Objective: I'm trying to override the default font using MUI themes. Problem: After reading MUI documentation, and stack overflow research, I'm unable to override a self hosted font globally across the theme. Theme file: import { createTheme } from "@mui/material/styles"; import VCRMonoWoff2 from './fonts/VcrMono.woff2'; import VCRMonoWoff from './fonts/VcrMono.woff'; const theme = createTheme({ typography: { fontFamily: 'VCRMono', }, components: { MuiCssBaseline: { styleOverrides: ` @font-face { font-family: 'VCRMono'; font-weight: normal; font-style: normal; src: url(${VCRMonoWoff2}) format('woff2'), url(${VCRMonoWoff}) format('woff'); unicodeRange: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF; } `, }, }, }); export default theme; Usage: import { ThemeProvider } from 'react-bootstrap'; import { CssBaseline } from '@mui/material'; import Box from '@mui/material/Box'; import theme from './theme'; export default function Example() { return ( <ThemeProvider theme={theme}> <CssBaseline /> <Box>Example3</Box> </ThemeProvider> ); } This doesn't update my theme typography to the new font. However, if I run this code, the font is updated to VCRMono: export default function Example() { return ( <ThemeProvider theme={theme}> <CssBaseline /> <Box sx={{ fontFamily: 'VCRMono', }}>Example3</Box> </ThemeProvider> ); } Although, this doesn't meet my objective as I am trying to override the default font globally. MUI Self Hosted Fonts Documentation states: "you need to change the theme to use this new font. In order to globally define as a font face, the CssBaseline component can be used." I tried replicating these steps and wasn't able to accomplish the task. Any help is appreciated. Thank you. A: The code above was correct, although, the root was not wrapped in the ThemeProvider class. This is the fix: <React.StrictMode> <ThemeProvider theme={theme}><Example/></ThemeProvider> </React.StrictMode> Wrap ThemeProvider around the root to enforce it globally. This is present in the documentation around theming. Although, it's in a different location from typography so I missed it earlier.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to understand why CloudFront returns 301? I have a distribution in CloudFront pointing to a custom origin. It worked just fine for more than five years and just a few weeks ago started to return 301 for all requests. The origin works as before, SSL certificates are valid both at the CloudFront endpoint and at the origin. The configuration of the "behavior" I didn't change in CloudFront. What could be the problem and how can I understand where is it? If it helps, here is the URL: https://djk1be5eatcae.cloudfront.net/?u=https://www.yegor256.com/index.html. The origin that it points to is relay.jare.io. Thus, the URL to be used to fetch the content is this: https://relay.jare.io/?u=https://www.yegor256.com/index.html (works for me).
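A quick way to see who is issuing the 301 and where it points is to fetch just the response headers from both the distribution and the origin; the Location, Server, and Via/X-Cache headers usually make it clear whether the redirect is generated by CloudFront itself or passed through from the origin:

curl -sI "https://djk1be5eatcae.cloudfront.net/?u=https://www.yegor256.com/index.html"
curl -sI "https://relay.jare.io/?u=https://www.yegor256.com/index.html"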
{ "language": "en", "url": "https://stackoverflow.com/questions/75640175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: APT_BadAlloc from Join Stage in Data Stage There is a ETL job dealing with over 43000000 rows and it often fails because of APT_BadAlloc when it process a JOIN stage. Here is the log. Join_Stage,0: terminate called after throwing an instance of 'APT_BadAlloc' Issuing abort after 1 warnings logged. Join_Stage,3: Caught exception from runLocally(): APT_Operator::UnControlledTermination: From: UnControlledTermination via exception... Join_Stage,3: Caught exception from runLocally(): APT_Operator::UnControlledTermination: From: UnControlledTermination via exception... Join_Stage,3: The runLocally() of the operator failed. Join_Stage,3: Operator terminated abnormally: runLocally() did not return APT_StatusOk Join_Stage,0: Internal Error: (shbuf): iomgr/iomgr.C: 2670 My question is about the first warning. The event type is warning and message ID is IIS-DSEE-USBP-00002. Join_Stage,0: terminate called after throwing an instance of 'APT_BadAlloc' After this warning, the job has failed and it often occurs. However, I couldn't figure out how to fix it. I only have at least 30 minutes for system resources free and it is effective most of the time. BTW, it is not a permanent solution, so I'm googling every day but I can't find out what is my first step to resolve the error and how to do it at all. I saw some options about Buffer size for the system. All the size has the default values. It is very important setting, so I couldn't touch any option here. Please let me know how I can figure out the root cause. I'm not a system admin. I have to contact someone else who can look into a detailed log file about the biggest row in the dataflow.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I am getting this error "Page not found" on my ecommerce store which I am making using Django The problem is Django not being able to find the view that I told it to; I am trying to access the add-to-cart feature, which shows me this error. Here's a look at my urls.py:

path('add-to-cart/<int:product_id>/', views.add_to_cart, name='add-to-cart'),
path('cart/', views.show_cart, name='showcart'),

views.py:

def add_to_cart(request):
    user = request.user
    product_id = request.GET.get('product_id')
    product = Product.objects.get(id=product_id)
    Cart(user=user, product=product).save()
    return redirect("/cart")

def show_cart(request):
    user = request.user
    cart = Cart.objects.filter(user=user)
    amount = 0
    for p in cart:
        value = p.quantity * p.product.discounted_price
        amount = amount + value
    totalamount = amount + 40
    return render(request, 'app/add-to-cart.html', locals())

and my HTML file name is add-to-cart.html. Please help. I tried to use the add-to-cart feature on my website when it showed me this error; I was expecting it to add the item to the cart.
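For reference, a minimal sketch of one thing worth checking (hedged; it assumes the 'add-to-cart' URL above is the one being requested): a path that captures <int:product_id> passes that value to the view as a keyword argument, so the view signature has to accept it instead of reading it from request.GET:

def add_to_cart(request, product_id):
    product = Product.objects.get(id=product_id)
    Cart(user=request.user, product=product).save()
    return redirect("/cart")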
{ "language": "en", "url": "https://stackoverflow.com/questions/75640177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Django template dynamically generated sidebar I have created two Django views, home() and machine_detail(). The home view renders a home.html template and passes it a dictionary containing equipment names. My sidebar consists of the items in this dictionary, which is dynamically generated below these names. The Equipment model is related to the Machines model, and I have used a Django foreign key relation to make a dropdown for each equipment showing all the machines related to that specific equipment. All the machines are in anchor tags, and upon click I want to show a machine detail page. But why can't I see my sidebar contents in the machine detail template? It extends home.html, but the sidebar still shows nothing. Please help me.

Equipment Model

name = models.CharField(max_length=255)
quantity = models.IntegerField()
manufacturer = models.ForeignKey(Manufacturer, on_delete=models.PROTECT)
contractor = models.ForeignKey(Contractor, on_delete=models.PROTECT, null=True, blank=True)

Machines Model

name = models.CharField(max_length=50)
type_of_machine = models.ForeignKey(Equipment, on_delete=models.CASCADE, related_name='typeOfMachine')
spares = models.ManyToManyField(Spares, related_name='machines', blank=True)
dop = models.DateField(verbose_name="Date of Purcahse", blank=True, null=True)
purchase_cost = models.FloatField(default=0)
model = models.CharField(max_length=50, blank=True, null=True)
image = models.ImageField(upload_to='images', blank=True, null=True)

Home View

def home(request):
    eq_map = {}
    equipment = models.Equipment.objects.all()
    for e in equipment:
        eq_map[e] = e.typeOfMachine.all()
    return render(request, "user/sidebar.html", {'equipments': eq_map})

machine_Detail view

def machine_detail(request, pk):
    machine_detail = models.Machines.objects.get(pk=pk)
    return render(request, "user/machine_detail.html", {"machine_detail": machine_detail})

home.html

<ul class="list-unstyled components">
{% for equipment,machines in equipments.items %}
<li> {{equipment|safe}} <i class="fa fa-caret-down"></i>
{% for m in machines %}
<a href="{% url 'machineDetail' pk=m.pk %}"><li>{{m}}</li></a>
{% endfor %}
</li>
{% endfor %}
</ul>

machine-detail.html

{% extends "user/home.html" %}
{% load static %}
{% block title %} Machine Detail {% endblock title %}
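A hedged sketch of one common cause and fix (assuming the sidebar loop shown above is in the template that home() renders): {% extends %} only reuses markup, not context, so machine_detail.html never receives the equipments dictionary that home() builds. Either pass it again from machine_detail, or register a context processor so every template gets it (the module and app names here are placeholders):

# myapp/context_processors.py
from . import models

def equipments(request):
    # same structure as eq_map built in home()
    return {"equipments": {e: e.typeOfMachine.all() for e in models.Equipment.objects.all()}}

Then add "myapp.context_processors.equipments" to TEMPLATES[0]["OPTIONS"]["context_processors"] in settings.py.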
{ "language": "en", "url": "https://stackoverflow.com/questions/75640178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sending multiple Google calendar invites using Google apps script Following my previous question, I was able to send a Google calendar invite using the script proposed by @Tanaike: function testNotification(){ var calendarId = "###"; var eventId = "###"; var email = "###@gmail.com" addGuestAndSendEmail(calendarId,eventId,email) } function addGuestAndSendEmail(calendarId, eventId, newGuest) { Calendar.Events.patch({ attendees: [{ email: newGuest }] }, calendarId, eventId, { sendUpdates: "all" }); } However, there is a slight glitch that I am not able to identify. When I try to send invites to multiple email addresses at the same time, it behaves unusually. Here is the new script: function SendMultiple(calendarId, eventId) { newGuests = ["[email protected]","[email protected]"]; newGuests.forEach(function(e){ Utilities.sleep(10000); Calendar.Events.patch({ attendees: [{ email: e.toString()}] }, calendarId, eventId, { sendUpdates: "all" }); }); } Output: when the SendMultiple() function finishes running, it sends 2 invites (event created, event canceled) to [email protected] and 2 invites (event created, event canceled) to [email protected], I am unable to identify why the event canceled invite is generated using this script. If I interchange the emails in newGuests array: newGuests = ["[email protected]","[email protected]"]; then it behaves the same, I would appreciate it if you help me identify the issue, thank you
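A hedged explanation of the cancellations, with a sketch (it assumes the advanced Calendar service is enabled, as in the original code): Events.patch replaces the whole attendees field, so the second call in the loop drops the first guest, and Google mails that guest a cancellation. Sending all guests in a single patch avoids this; if the event already has attendees, fetch and merge them first:

function SendMultiple(calendarId, eventId) {
  var newGuests = ["guest1@example.com", "guest2@example.com"]; // placeholder addresses
  var event = Calendar.Events.get(calendarId, eventId);
  var attendees = (event.attendees || []).concat(
    newGuests.map(function (e) { return { email: e }; })
  );
  Calendar.Events.patch({ attendees: attendees }, calendarId, eventId, { sendUpdates: "all" });
}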
{ "language": "en", "url": "https://stackoverflow.com/questions/75640179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to delete the values of flet python text fields after clicking the button? I use Flet Python framework And I want to delete its values after clicking on the button and store them in the data table def main(page: ft.Page): def btn_click(e): if not sstid.value: sstid.error_text = "err" page.update() else: my_dict["sstid"] = sstid.value page.update() page.add( ft.Container( height=250, # bgcolor="white10", bgcolor="white10", border=border.all(1,"#ebebeb"), border_radius=8, padding=15, content=Column( expand=True, controls=[ ft.ElevatedButton("add", on_click=btn_click), ], ) ) A: def main(page: ft.Page): my_dict = {} def btn_click(e): if not sstid.value: sstid.error_text = "err" page.update() else: my_dict["sstid"] = sstid.value sstid.set_value("") # clear the value of sstid page.update() sstid = ft.TextField(label="SSTID", name="sstid") page.add( ft.Container( height=250, bgcolor="white10", border=border.all(1,"#ebebeb"), border_radius=8, padding=15, content=Column( expand=True, controls=[ sstid, ft.ElevatedButton("Add", on_click=btn_click), ], ), ) )
{ "language": "en", "url": "https://stackoverflow.com/questions/75640189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to add text beside the legend of a doughnut chart using react-chartjs-2 I want to add some number text to the right of the legend of a doughnut chart using react-chartjs-2, like in this image. How can I achieve this?
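A hedged, minimal approach (it assumes the numbers you want are the dataset values; a perfectly right-aligned column would need a custom HTML legend instead): bake the value into each legend label string, since the legend renders the labels array verbatim:

import { Chart as ChartJS, ArcElement, Tooltip, Legend } from 'chart.js';
import { Doughnut } from 'react-chartjs-2';

ChartJS.register(ArcElement, Tooltip, Legend);

const names = ['Alpha', 'Beta', 'Gamma'];   // illustrative data
const values = [120, 80, 40];

const data = {
  labels: names.map((n, i) => `${n}  ${values[i]}`), // number shown beside the legend text
  datasets: [{ data: values, backgroundColor: ['#36a2eb', '#ff6384', '#ffcd56'] }],
};

export default function DoughnutWithValues() {
  return <Doughnut data={data} options={{ plugins: { legend: { position: 'right' } } }} />;
}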
{ "language": "en", "url": "https://stackoverflow.com/questions/75640190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Breakpoints Don't Hit in Visual Studio Code - What am I Doing Wrong For some reason my VSC breakpoints don't hit. I am not sure why. My launch.json file has { "type": "chrome", "request": "launch", "name": "Launch Chrome against localhost", "url": "http://localhost:8080", "webRoot": "${workspaceFolder}", "file":"${workspaceFolder}/index.html" } I can set breakpoints in my JavaScript code. They show up (correctly) with red dots over on the left. However, they don't hit. Please note that I am only using JavaScript, not TypeScript. I tried a bunch of changes to my launch.json file and tried moving my JavaScript files around. Nothing worked.
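A hedged example of a configuration that often works when the page is served by a dev server on port 8080 (the url and webRoot values are assumptions; adjust them to where the JavaScript is actually served from). In a chrome launch configuration, url and file are normally alternatives, so keeping both can confuse the source mapping:

{
  "type": "chrome",
  "request": "launch",
  "name": "Launch Chrome against localhost",
  "url": "http://localhost:8080",
  "webRoot": "${workspaceFolder}"
}

If the page is opened straight from disk instead, drop url and keep only "file": "${workspaceFolder}/index.html".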
{ "language": "en", "url": "https://stackoverflow.com/questions/75640191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Date input show always format "mm/dd/yyyy" I am creating a react component that allows user to pick a date using a simple "input" field with "date" type. The problem is that I can't manage to display dates in the format "dd/mm/yyyy". it is always displayed as "mm/dd/yyyy". const [date, setDate] = useState(new Date()); const handleChange = (event) => { setDate(new Date(event.target.value)); }; const formattedDate = date.toISOString().slice(0, 10); console.log("Date", formattedDate) return( <input type="date" value={formattedDate} onChange={handleChange} /> ) A: The reason why the date is displayed in the "mm/dd/yyyy" format is that the format of the date displayed in an input field of type "date" is determined by the user's browser and operating system settings. Therefore, it may not be possible to guarantee a specific date format across all devices and browsers. However, you can consider using a third-party library such as moment.js or date-fns to format the date before displaying it in the input field. Here's an example of how you could use date-fns: import { format } from 'date-fns'; const [date, setDate] = useState(new Date()); const handleChange = (event) => { setDate(new Date(event.target.value)); }; const formattedDate = format(date, 'dd/MM/yyyy'); return ( <input type="date" value={formattedDate} onChange={handleChange} /> ); In this example, the format function from date-fns is used to format the date variable in the "dd/MM/yyyy" format. You can adjust the format string according to your specific needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Products retrieval from multi nodes in firebase The given code is showing me same product multiple times, rather than showing all the available products. Can some one help me? package com.rent.shopping; import android.content.Intent; import android.os.Bundle; import android.view.LayoutInflater; import android.view.Menu; import android.view.MenuItem; import android.view.View; import android.view.ViewGroup; import android.widget.TextView; import androidx.annotation.NonNull; import androidx.appcompat.app.ActionBarDrawerToggle; import androidx.appcompat.app.AppCompatActivity; import androidx.appcompat.app.AppCompatDelegate; import androidx.appcompat.widget.Toolbar; import androidx.core.view.GravityCompat; import androidx.drawerlayout.widget.DrawerLayout; import androidx.recyclerview.widget.LinearLayoutManager; import androidx.recyclerview.widget.RecyclerView; import com.firebase.ui.database.FirebaseRecyclerAdapter; import com.firebase.ui.database.FirebaseRecyclerOptions; import com.google.android.material.floatingactionbutton.FloatingActionButton; import com.google.android.material.navigation.NavigationView; import com.google.firebase.database.DataSnapshot; import com.google.firebase.database.DatabaseError; import com.google.firebase.database.DatabaseReference; import com.google.firebase.database.FirebaseDatabase; import com.google.firebase.database.ValueEventListener; import com.rent.shopping.Model.Products; import com.rent.shopping.Prevalent.Prevalent; import com.rent.shopping.ViewHolder.ProductViewHolder; import com.squareup.picasso.Picasso; import java.util.ArrayList; import java.util.Objects; import de.hdodenhof.circleimageview.CircleImageView; import io.paperdb.Paper; public class HomeActivity extends AppCompatActivity implements NavigationView.OnNavigationItemSelectedListener { private DatabaseReference ProductsRef; DrawerLayout drawerLayout; NavigationView navigationView; Toolbar toolbar; private RecyclerView recyclerView; RecyclerView.LayoutManager layoutManager; ArrayList<String> vendorIds = new ArrayList<String>(); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_home); AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES); ProductsRef = FirebaseDatabase.getInstance().getReference().child("Vendors"); drawerLayout=findViewById(R.id.drawer_layout); navigationView=findViewById(R.id.nav_view); toolbar=findViewById(R.id.toolbar); androidx.appcompat.widget.Toolbar toolbar = (androidx.appcompat.widget.Toolbar) findViewById(R.id.toolbar); toolbar.setTitle("Home"); setSupportActionBar(toolbar); navigationView.bringToFront(); ActionBarDrawerToggle toggle=new ActionBarDrawerToggle(this,drawerLayout,toolbar,R.string.navigation_drawer_open,R.string.navigation_drawer_close); drawerLayout.addDrawerListener(toggle); toggle.syncState(); navigationView.setNavigationItemSelectedListener(this); View headerView = navigationView.getHeaderView(0); TextView userNameTextView = headerView.findViewById(R.id.user_profile_name); CircleImageView profileImageView = headerView.findViewById(R.id.user_profile_image); userNameTextView.setText(Prevalent.currentOnlineUser.getName()); Picasso.get().load(Prevalent.currentOnlineUser.getImage()).placeholder(R.drawable.profile).into(profileImageView); recyclerView = findViewById(R.id.recycler_menu); recyclerView.setHasFixedSize(true); layoutManager = new LinearLayoutManager(this); recyclerView.setLayoutManager(layoutManager); FloatingActionButton fab = 
(FloatingActionButton) findViewById(R.id.fab); fab.setOnClickListener(view -\> { Intent intent = new Intent(HomeActivity.this,CartActivity.class); startActivity(intent); }); } @Override protected void onStart() { super.onStart(); // Get a reference to the vendors node // Retrieve vendor IDs ProductsRef.addListenerForSingleValueEvent(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { for (DataSnapshot vendorSnapshot : dataSnapshot.getChildren()) { String vendorId = vendorSnapshot.getKey(); ProductsRef.child(Objects.requireNonNull(vendorId)).child("Products").addListenerForSingleValueEvent(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot snapshot) { FirebaseRecyclerOptions\<Products\> options = new FirebaseRecyclerOptions.Builder\<Products\>() .setQuery(ProductsRef, Products.class) .build(); FirebaseRecyclerAdapter\<Products, ProductViewHolder\> adapter = new FirebaseRecyclerAdapter\<Products, ProductViewHolder\>(options) { @Override protected void onBindViewHolder(@NonNull ProductViewHolder holder, int position, @NonNull final Products model) { for (DataSnapshot url : snapshot.getChildren()) { holder.txtProductName.setText(url.child("pname").getValue(String.class)); holder.txtProductDescription.setText(url.child("description").getValue(String.class)); holder.txtProductPrice.setText("Price = " + url.child("price").getValue(String.class) + "Rs."); String productImage = url.child("image").getValue(String.class); Picasso.get().load(productImage).into(holder.imageView); holder.itemView.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Intent intent = new Intent(HomeActivity.this, ProductDetailsActivity.class); intent.putExtra("pid", url.child("pid").getValue(String.class)); intent.putExtra("pname", url.child("pname").getValue(String.class)); intent.putExtra("description", url.child("description").getValue(String.class)); intent.putExtra("price", url.child("price").getValue(String.class)); intent.putExtra("image", url.child("image").getValue(String.class)); startActivity(intent); } }); } } @NonNull @Override public ProductViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) { View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.product_items_layout, parent, false); return new ProductViewHolder(view); } }; recyclerView.setAdapter(adapter); adapter.startListening(); vendorIds.add(vendorId); System.out.println(vendorIds); } @Override public void onCancelled(@NonNull DatabaseError error) { } }); } } @Override public void onCancelled(@NonNull DatabaseError databaseError) { System.out.println("The read failed: " + databaseError.getCode()); } }); } @Override public void onBackPressed(){ if(drawerLayout.isDrawerOpen(GravityCompat.START)){ drawerLayout.closeDrawer(GravityCompat.START); } else {super.onBackPressed(); } } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.main_menu, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. 
int id = item.getItemId(); //noinspection SimplifiableIfStatement // if (id == R.id.action_settings) { // return true; // } return super.onOptionsItemSelected(item); } @Override public boolean onNavigationItemSelected(MenuItem item) { // Handle navigation view item clicks here. int id = item.getItemId(); if (id == R.id.nav_cart) { Intent intent = new Intent(HomeActivity.this,CartActivity.class); startActivity(intent); } else if (id == R.id.nav_search) { Intent intent = new Intent(HomeActivity.this,SearchProductsActivity.class); startActivity(intent); } else if (id == R.id.nav_settings) { Intent intent=new Intent(HomeActivity.this,SettinsActivity.class); startActivity(intent); } else if (id == R.id.nav_logout) { Paper.book().destroy(); Intent intent=new Intent(HomeActivity.this,MainActivity.class); intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK |Intent.FLAG_ACTIVITY_CLEAR_TASK ); startActivity(intent); finish(); } drawerLayout.closeDrawer(GravityCompat.START); return true; } } Here is the database look enter image description hereenter image description here Here is the code i used to upload the products: package com.rent.shopping; import android.annotation.SuppressLint; import android.app.ProgressDialog; import android.content.Intent; import android.net.Uri; import android.os.Bundle; import android.text.TextUtils; import android.widget.Button; import android.widget.EditText; import android.widget.ImageView; import android.widget.Toast; import androidx.annotation.NonNull; import androidx.appcompat.app.AppCompatActivity; import com.google.android.gms.tasks.Continuation; import com.google.android.gms.tasks.Task; import com.google.firebase.database.DatabaseReference; import com.google.firebase.database.FirebaseDatabase; import com.google.firebase.storage.FirebaseStorage; import com.google.firebase.storage.StorageReference; import com.google.firebase.storage.UploadTask; import java.text.SimpleDateFormat; import java.util.Calendar; import java.util.HashMap; import java.util.Objects; public class AdminAddNewProductActivity extends AppCompatActivity { private String CategoryName, Description, Price, saveCurrentDate, saveCurrentTime ,security ,borowingprice; public static String Pname; private Button AddNewProductButton ; private ImageView InputProductImage; private EditText InputProductName, InputProductDescription, InputProductPrice,Securityprice,Borrowingprice; private static final int GalleryPick = 1; private Uri ImageUri; private String productRandomKey, downloadImageUrl; private StorageReference ProductImagesRef; private DatabaseReference ProductsRef; private ProgressDialog loadingBar; String phone; EditText PHONE; @SuppressLint("MissingInflatedId") @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_admin_add_new_product); PHONE=findViewById(R.id.number); CategoryName = getIntent().getExtras().get("category").toString(); ProductImagesRef = FirebaseStorage.getInstance().getReference().child("Product Images"); ProductsRef = FirebaseDatabase.getInstance().getReference().child("Vendors"); AddNewProductButton = findViewById(R.id.add_new_product); InputProductImage = findViewById(R.id.select_product_image); InputProductName = findViewById(R.id.product_name); InputProductDescription = findViewById(R.id.product_description); InputProductPrice = findViewById(R.id.product_price); Securityprice= findViewById(R.id.security_price); Borrowingprice= findViewById(R.id.borrowing_price); loadingBar = new ProgressDialog(this); 
InputProductImage.setOnClickListener(view -> OpenGallery()); AddNewProductButton.setOnClickListener(view -> ValidateProductData()); } private void OpenGallery(){ Intent galleryIntent = new Intent(); galleryIntent.setAction(Intent.ACTION_GET_CONTENT); galleryIntent.setType("image/*"); startActivityForResult(galleryIntent, GalleryPick); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode==GalleryPick && resultCode==RESULT_OK && data!=null) { ImageUri = data.getData(); InputProductImage.setImageURI(ImageUri); } } private void ValidateProductData() { Description = InputProductDescription.getText().toString(); Price = InputProductPrice.getText().toString(); Pname = InputProductName.getText().toString(); security=Securityprice.getText().toString(); borowingprice=Borrowingprice.getText().toString(); phone=PHONE.getText().toString().trim(); if (ImageUri == null) { Toast.makeText(this, "Product image is mandatory...", Toast.LENGTH_SHORT).show(); } else if (TextUtils.isEmpty(Description)) { Toast.makeText(this, "Please write product description...", Toast.LENGTH_SHORT).show(); } else if (TextUtils.isEmpty(Price)) { Toast.makeText(this, "Please write product Price...", Toast.LENGTH_SHORT).show(); } else if (TextUtils.isEmpty(Pname)) { Toast.makeText(this, "Please write product name...", Toast.LENGTH_SHORT).show(); }else if (TextUtils.isEmpty(security)) { Toast.makeText(this, "Please write Security fees...", Toast.LENGTH_SHORT).show(); }else if (TextUtils.isEmpty(borowingprice)) { Toast.makeText(this, "Please write Borrowing price...", Toast.LENGTH_SHORT).show(); } else { StoreProductInformation(); } } private void StoreProductInformation() { loadingBar.setTitle("Add New Product"); loadingBar.setMessage("Dear Admin, please wait while we are adding the new product."); loadingBar.setCanceledOnTouchOutside(false); loadingBar.show(); Calendar calendar = Calendar.getInstance(); SimpleDateFormat currentDate = new SimpleDateFormat("MMM dd, yyyy"); saveCurrentDate = currentDate.format(calendar.getTime()); SimpleDateFormat currentTime = new SimpleDateFormat("HH:mm:ss a"); saveCurrentTime = currentTime.format(calendar.getTime()); productRandomKey = saveCurrentDate + saveCurrentTime; final StorageReference filePath = ProductImagesRef.child(ImageUri.getLastPathSegment() + productRandomKey + ".jpg"); final UploadTask uploadTask = filePath.putFile(ImageUri); uploadTask.addOnFailureListener(e -> { String message = e.toString(); Toast.makeText(AdminAddNewProductActivity.this, "Error: " + message, Toast.LENGTH_SHORT).show(); loadingBar.dismiss(); }).addOnSuccessListener(taskSnapshot -> { Toast.makeText(AdminAddNewProductActivity.this, "Product Image uploaded Successfully...", Toast.LENGTH_SHORT).show(); Task<Uri> urlTask = uploadTask.continueWithTask(new Continuation<UploadTask.TaskSnapshot, Task<Uri>>() { @Override public Task<Uri> then(@NonNull Task<UploadTask.TaskSnapshot> task) throws Exception { if (!task.isSuccessful()) { throw Objects.requireNonNull(task.getException()); } downloadImageUrl = filePath.getDownloadUrl().toString(); return filePath.getDownloadUrl(); } }).addOnCompleteListener(task -> { if (task.isSuccessful()) { downloadImageUrl = task.getResult().toString(); Toast.makeText(AdminAddNewProductActivity.this, "got the Product image Url Successfully...", Toast.LENGTH_SHORT).show(); SaveProductInfoToDatabase(); } }); }); } private void SaveProductInfoToDatabase() { HashMap<String, Object> 
productMap = new HashMap<>(); productMap.put("pid", productRandomKey); productMap.put("date", saveCurrentDate); productMap.put("time", saveCurrentTime); productMap.put("description", Description); productMap.put("image", downloadImageUrl); productMap.put("category", CategoryName); productMap.put("price", Price); productMap.put("Borrowing price", borowingprice); productMap.put("Security Fees", security); productMap.put("pname", Pname); ProductsRef.child(phone).child("Products").child(Pname).updateChildren(productMap) .addOnCompleteListener(task -> { if (task.isSuccessful()) { Intent intent = new Intent(AdminAddNewProductActivity.this, AdminCategoryActivity.class); startActivity(intent); loadingBar.dismiss(); Toast.makeText(AdminAddNewProductActivity.this, "Product is added successfully..", Toast.LENGTH_SHORT).show(); } else { loadingBar.dismiss(); String message = Objects.requireNonNull(task.getException()).toString(); Toast.makeText(AdminAddNewProductActivity.this, "Error: " + message, Toast.LENGTH_SHORT).show(); } }); }}
I am using java and firebase for MULTIVENDER renting app, but its not showing me the products rightly only one product is showing me repeatedly.
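A hedged note on the likely cause, with a sketch (the getter names are assumptions about the Products model class): each pass of the vendor loop builds a new FirebaseRecyclerAdapter and calls recyclerView.setAdapter(...), so only the last adapter survives, and inside onBindViewHolder the code loops over every child of the snapshot and overwrites the same holder, which is why a single product repeats. With FirebaseUI the row's data is already delivered as model, so the binding normally looks roughly like this:

@Override
protected void onBindViewHolder(@NonNull ProductViewHolder holder, int position, @NonNull Products model) {
    // bind only the product that belongs to this row
    holder.txtProductName.setText(model.getPname());
    holder.txtProductDescription.setText(model.getDescription());
    holder.txtProductPrice.setText("Price = " + model.getPrice() + "Rs.");
    Picasso.get().load(model.getImage()).into(holder.imageView);
}

For products nested under several vendors, a single adapter over a combined list (or one adapter per vendor joined with androidx ConcatAdapter) avoids replacing the adapter inside the loop.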
{ "language": "en", "url": "https://stackoverflow.com/questions/75640196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: save each element of the array into redis with lPush but got stuck in the process I have an array

[ {url: '/project/1679786259239684'}, {url: '/project/1751999121621250'}, { url: '/project/2143988207597961'}, { url: '/project/2141232634002056' } ]

and I want to save this array into Redis and pop each element one by one. First I tried:

// save the array
const value = JSON.stringify(array);
this.redisClient
  .multi()
  .rPush(`command:temp`, value)
  .exec();

// retrieve the array
return await this.redisClient
  .multi()
  .lPop(`command:temp`)
  .exec();

It doesn't work, since I pushed the whole array as one element. Then I tried to push each element of the array with redis lPush:

const multi = redisClient.multi();
_.map(messages, function(item){
  const value = JSON.stringify(item);
  multi.rPush(`command:temp`, value);
});
multi.exec();

However, this seems to get stuck in the process. How do I work around it?
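A hedged sketch of a simpler path with node-redis v4 (it assumes this.redisClient is a connected v4 client; rPush accepts an array of values, so no multi is strictly needed, and if you do use a multi its exec() promise has to be awaited):

const values = messages.map((item) => JSON.stringify(item));
await this.redisClient.rPush('command:temp', values);    // one list element per array item

const raw = await this.redisClient.lPop('command:temp');  // pops a single element
const first = raw ? JSON.parse(raw) : null;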
{ "language": "en", "url": "https://stackoverflow.com/questions/75640197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Add $ in from of input text in an OutlinedTextField I'm using an OutlinedTextField for the user to input a dollar amount. It works as desired but I would like to add a $ in front of what they are inputting, whether it's one digit or five. val patternPool: Regex = Regex("^\\d{1,5}\$") OutlinedTextField( value = siteValues.valueExtraMoney.value, onValueChange = { if (it.isEmpty() || it.matches(sitePatterns.patternPool)) siteValues.valueExtraMoney.value = it }, modifier = Modifier .weight(1f) .fillMaxHeight(), textStyle = TextStyle(fontSize = fonts.fontSizeText, textAlign = TextAlign.Center), label = { Text( text ="Extra money taken out of prize pool", modifier = Modifier.fillMaxWidth(), fontSize = fonts.fontSizeEntry, textAlign = TextAlign.Center ) }, keyboardOptions = KeyboardOptions(keyboardType = KeyboardType.Number), shape = RectangleShape, colors = TextFieldDefaults.outlinedTextFieldColors(textColor = Color.Black) ) I've tried valueExtraMoney.value = "$$it". That outputs a $ plus what ever digit is inputed. But now the string doesn't satisfy the regex and I can't alter the input anymore. I tried playing with the regex to allow a $ but it still gets frozen.
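A hedged sketch of one approach that keeps the stored value digits-only, so the existing regex keeps working: render the $ as decoration instead of writing it into state. OutlinedTextField has a leadingIcon slot (Material 3 versions also offer a prefix parameter); only the relevant arguments are shown here:

OutlinedTextField(
    value = siteValues.valueExtraMoney.value,
    onValueChange = {
        if (it.isEmpty() || it.matches(sitePatterns.patternPool)) siteValues.valueExtraMoney.value = it
    },
    leadingIcon = { Text("$") }, // drawn before the text, never part of the value
    keyboardOptions = KeyboardOptions(keyboardType = KeyboardType.Number),
)

If the symbol must sit flush against the digits, a VisualTransformation that prepends "$" for display only is the usual alternative.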
{ "language": "en", "url": "https://stackoverflow.com/questions/75640199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Facial Recognition_1 OpenCV(4.7.0) :-1: error: (-5:Bad argument) in function 'read' Overload resolution failed: * *image is not a numpy array, neither a scalar *Expected Ptr<cv::UMat> for argument 'image' I am getting this error from the code and am wondering on how to fix it. import cv2 import matplotlib.pyplot as plt cap = cv2.VideoCapture(0) res, img = cap.read('/Example/photo.jpg') img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGBA) fig, ax = plt.subplots(figsize=(10, 10)) ax.axis('off') plt.imshow(img) plt.show() I try to fix the read function but has not worked out for me. A: cap.read() method doesn't take a filename as an argument. Instead, it reads the frames from the video capture device (camera). So, you need to remove the filename argument and just call cap.read(). You need to check whether cap.read() was successful in reading the frame from the camera. You can do this by checking the value of res. If it's True, then the frame was successfully read, and you can proceed with the rest of the code. If it's False, then you need to handle the error. Here's the corrected code: import cv2 import matplotlib.pyplot as plt cap = cv2.VideoCapture(0) res, img = cap.read() if not res: print("Error reading frame") else: img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGBA) fig, ax = plt.subplots(figsize=(10, 10)) ax.axis('off') plt.imshow(img) plt.show() This code should capture a frame from your camera and display it using matplotlib. If you want to read an image file instead of capturing a frame from the camera, you can replace cap = cv2.VideoCapture(0) with img = cv2.imread('/Example/photo.jpg') and remove the cap.read() line.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: The request failed: Google returned a response with code 400 with Pytrends Python Library for Google Trends API I have the following code: from pytrends.request import TrendReq keywords = ['example1', 'example2', 'example3'] mapping = {} pytrends = TrendReq() for keyword in keywords[:3]: print(keyword) pytrends.build_payload(keyword,timeframe='today 12-m') trend_data = pytrends.interest_over_time() series = trend_data[keyword[0]] print(series) plt.plot(series) plt.show() mapping[keyword] = series time.sleep(65) For the first keyword in the keywords array, this will work. However, as soon as the for loop iterates to the next keyword, I get "pytrends.exceptions.ResponseError: The request failed: Google returned a response with code 400". Initially, I thought this was because of rate limits but I set time.sleep() in the for loop to over 1 minute in between requests. Any help would be much appreciated. A: Maybe this will help. You need to pass a list into build_paylod() per the documentation. I suspect you want to compare the results in one chart, but if not this should at least get you closer: from pytrends.request import TrendReq import matplotlib.pyplot as plt keywords = ['example1', 'example2', 'example3'] pytrends = TrendReq() pytrends.build_payload(keywords,timeframe='today 12-m') trend_data = pytrends.interest_over_time() series = trend_data[keywords] plt.plot(series) plt.show() This will give you a graph like:
{ "language": "en", "url": "https://stackoverflow.com/questions/75640202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does the altitude of the coffee plantation affect the taste of the coffee? The altitude of the coffee plantation can significantly affect the taste of the coffee due to several factors: Temperature: Higher altitude coffee farms experience cooler temperatures, which causes the coffee cherry to ripen more slowly, resulting in a more complex flavor profile. Cooler temperatures also allow the coffee to retain more acidity, which contributes to a brighter, more vibrant taste. Soil: Altitude can also affect the quality of the soil. In high altitude coffee farms, the soil tends to be richer in nutrients, which leads to healthier and more robust coffee plants. Sunlight: Altitude affects the amount of sunlight the coffee plants receive, which can impact the development of the coffee cherry. Coffee plants grown at higher altitudes receive more direct sunlight, leading to a higher concentration of sugars in the cherry, resulting in a sweeter taste. Pests and diseases: Higher altitude coffee farms are less susceptible to pests and diseases, as the cooler temperatures and richer soil create a more hostile environment for these organisms. This results in a more natural and pure flavor profile. Overall, coffee grown at higher altitudes tends to have a more complex and nuanced flavor profile, with brighter acidity and sweeter notes. However, growing coffee at higher altitudes is more challenging and expensive, which contributes to the higher price of specialty coffees that are often grown at these elevations The thing that i tried was to ask a question in this site and what i expected is to get a result for that question. what actually resulted was i got a better result
{ "language": "en", "url": "https://stackoverflow.com/questions/75640203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does WireMock support mocking GraphQL APIs? We have multiple microservices, most based on GraphQL and a few based on REST, and each microservice calls multiple other microservices (Java, Spring Boot tech stack). Now we want to write integration tests for this kind of orchestration. We thought of using WireMock for stubbing the external microservices, but does it support GraphQL? Since WireMock supports mocking REST microservices, can it also be used for stubbing GraphQL when used with Spring Boot?
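A hedged sketch (standard WireMock Java DSL; the /graphql path, the query text and the response body are assumptions about your services): a GraphQL call is ultimately an HTTP POST with a JSON body, so WireMock can stub it with ordinary request matching, for example on the query field:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

stubFor(post(urlEqualTo("/graphql"))
    .withRequestBody(matchingJsonPath("$.query", containing("orders")))
    .willReturn(okJson("{\"data\":{\"orders\":[]}}")));

There is no GraphQL-aware matching built in (no query parsing or schema validation), so assertions stay at the level of the HTTP body.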
{ "language": "en", "url": "https://stackoverflow.com/questions/75640205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: The `style` property is not suggested with `document.querySelector()` in VS Code When I write 'document.querySelector().' and click 'CTRL + Spacebar' to trigger suggest, the 'style' property does not get suggested, but it seems to work fine with 'document.getElementById()', Why is this happening and how do I fix this? With querySelector With getElementById Help please, Thanks in advance~ Expected the IntelliSense to suggest the 'style' property with the 'document.querySelector().' but it did not.
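A hedged explanation based on how VS Code's built-in TypeScript service types the DOM: getElementById is declared to return HTMLElement, which has style, while querySelector is declared to return Element | null, which does not, so IntelliSense hides the property. Narrowing the type brings the suggestion back, for example in TypeScript:

const box = document.querySelector<HTMLElement>('.box'); // '.box' is a placeholder selector
box?.style.setProperty('color', 'red');

or in plain JavaScript with a JSDoc cast:

const box = /** @type {HTMLElement} */ (document.querySelector('.box'));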
{ "language": "en", "url": "https://stackoverflow.com/questions/75640206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I created a library management system but it just skips the steps and runs all the way down I created a library management system with various switch cases, loops and break statements. Sometimes it works absolutely fine, but sometimes it runs the commands and then follows all the statements after that particular switch case. My code is given here: https://onlinegdb.com/aYdKFJSTy I tried researching on various sites and with my colleagues, but no one was able to find the solution, as we are all new to C and in the early learning phase. A: First, you should stop posting a complete 900 lines of code and just give a snippet of the part which you think is causing the problem. Secondly, in the places where you are facing the problem, you have missed the break statement in several of the switch cases. When you do not put a break statement at the end of a case, that case is executed and then all the cases after it are executed sequentially. You can use break, exit or continue statements depending on your requirement. For example, the switch statement at line 687 of your code does not contain any break statements, so all the statements after the chosen case will be executed and undesirable output will be displayed.
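To make the fall-through behaviour concrete, a minimal sketch in C (a generic example, not taken from the linked code):

#include <stdio.h>

int main(void) {
    int choice = 1;
    switch (choice) {
        case 1: printf("case 1\n");          /* no break: execution falls through */
        case 2: printf("case 2\n"); break;   /* break stops the fall-through here */
        case 3: printf("case 3\n"); break;
        default: printf("default\n");
    }
    return 0; /* prints "case 1" then "case 2" because case 1 has no break */
}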
{ "language": "en", "url": "https://stackoverflow.com/questions/75640207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ReactJS Nested List Issue My nested list in ReactJS opens all sublists when I expand one category Collapsed List Expand List import * as React from 'react'; import ListSubheader from '@mui/material/ListSubheader'; import List from '@mui/material/List'; import ListItemButton from '@mui/material/ListItemButton'; import ListItemText from '@mui/material/ListItemText'; import Collapse from '@mui/material/Collapse'; import ExpandLess from '@mui/icons-material/ExpandLess'; import ExpandMore from '@mui/icons-material/ExpandMore'; export default function SideBar() { const [open, setOpen] = React.useState(false); const handleClick = () => { setOpen(!open); }; return ( <List sx={{ width: '100%', maxWidth: 360, bgcolor: 'background.paper' }} component="nav" aria-labelledby="nested-list-subheader" subheader={ <ListSubheader component="div" id="nested-list-subheader"> courses </ListSubheader> } > <ListItemButton onClick={() => handleClick()}> <ListItemText primary="Course 1" /> {open ? <ExpandLess /> : <ExpandMore />} </ListItemButton> <Collapse in={open} timeout="auto" unmountOnExit> <List component="div" disablePadding> <ListItemButton sx={{ pl: 4 }}> <ListItemText primary="Research Paper" /> </ListItemButton> </List> </Collapse> <ListItemButton> <ListItemText primary="Course 2" /> {open ? <ExpandLess /> : <ExpandMore />} </ListItemButton> <ListItemButton onClick={() => handleClick()}> <ListItemText primary="Course 3" /> {open ? <ExpandLess /> : <ExpandMore />} </ListItemButton> <Collapse in={open} timeout="auto" unmountOnExit> <List component="div" disablePadding> <ListItemButton sx={{ pl: 4 }}> <ListItemText primary="Exams" /> </ListItemButton> </List> </Collapse> </List> ); } I found the problem might be that I am using an binary operator to open all the lists. I found this solution ReactJs nested list collapse for only one list item but was unable to implement it properly. Any ideas on what I could do? A: It is because you are using the same state open for all the items. A simple solution is to have a different state for each one. You could create a different handleClick too or pass the new setStates down into the original handleClick.
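A hedged sketch of the per-item state the answer describes (the course ids are placeholders): track which course is open and have each Collapse check its own id, so expanding one course no longer expands the others:

const [openId, setOpenId] = React.useState(null);
const toggle = (id) => setOpenId((prev) => (prev === id ? null : id));

<ListItemButton onClick={() => toggle('course1')}>
  <ListItemText primary="Course 1" />
  {openId === 'course1' ? <ExpandLess /> : <ExpandMore />}
</ListItemButton>
<Collapse in={openId === 'course1'} timeout="auto" unmountOnExit>
  {/* Course 1 items */}
</Collapse>

Repeating the same pattern with 'course3' for the third course gives each section its own expand/collapse behaviour.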
{ "language": "en", "url": "https://stackoverflow.com/questions/75640210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Export array of DataFrames to csv I am trying to use tabula-py to extract data from a PDf and save it to a csv. The PDF contains a work order. The data in the PDF is not formatted in a usable table - I am required to use Stream mode. Through the Tabula web interface, I have created a template and can use it to extract the data to a csv. My template includes 8 sections. The csv export from Tabula looks like this: (without a header row though...) | job number| | ---- | | full name | | phone1 | | phone2 | | address-line-1 | | address-line-2 | | email | | jobtype1 | | jobtype2 | Using tabula-py, I can run the code df = tabula.read_pdf_with_template("file.pdf","template.tabula-template.json") The output of print(df) is: [Empty DataFrame Columns: [full name] Index: [], Empty DataFrame Columns: [phone1] Index: [], Empty DataFrame Columns: [phone2] Index: [], Empty DataFrame Columns: [email] Index: [], address-line-1 0 address-line-2 Columns: [jobtype] Index: [], Empty DataFrame Columns: [jobtype2] Index: [], Empty DataFrame Columns: [jobnumber] Index: []] Note: I have changed the output text to the field name None of the sections that I have selected have a header. Notice the address section shows the address line 1 as the header and address line 2 as the value. This is due to the address being on 2 lines in the PDF. I have 3 questions: * *How do I get the address to be on one line? *How do I export the output to csv? I know I have to loop through the DataFrame(s) to output one at a time, but python is not my strong suit *I have over 3000 work orders to process. Is it possible to have the extract appended to the same csv file? All code used to produce the above results: import tabula import pandas as pd df = tabula.read_pdf_with_template("file.pdf","template.tabula-template.json") print(df)
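A hedged sketch addressing questions 2 and 3 together, which also merges the two address lines for question 1 (it assumes every work order yields the same set of one-cell DataFrames in the same order, which matches the output shown; the folder, file name and value handling are illustrative): join each small DataFrame's header and row values into one string, then append one row per PDF to a single CSV:

import os
import glob
import pandas as pd
import tabula

def extract_row(pdf_path, template="template.tabula-template.json"):
    dfs = tabula.read_pdf_with_template(pdf_path, template)
    values = []
    for df in dfs:
        # the value may sit in the header, in row 0, or both (as with the address)
        parts = [str(c) for c in df.columns] + [str(v) for v in df.values.ravel()]
        values.append(" ".join(p for p in parts if p and p != "nan"))
    return values

out_csv = "workorders.csv"
for pdf in glob.glob("orders/*.pdf"):
    row = pd.DataFrame([extract_row(pdf)])
    # append to the same CSV, writing a header only the first time
    row.to_csv(out_csv, mode="a", header=not os.path.exists(out_csv), index=False)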
{ "language": "en", "url": "https://stackoverflow.com/questions/75640211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Calculate bar number and total number of bars in the stacked bar chart How to get bar# ( and then the total number of bars) in the stacked bar chart? I tried dense_rank window function, but it didn't work. { "data": { "values": [ {"DATE": "2020-02-02", "Category": "AA", "Value": 50}, {"DATE": "2020-02-02", "Category": "BB", "Value": 50}, {"DATE": "2020-02-03", "Category": "AA", "Value": 70}, {"DATE": "2020-02-03", "Category": "BB", "Value": 100}, {"DATE": "2020-02-04", "Category": "AA", "Value": 110}, {"DATE": "2020-02-04", "Category": "BB", "Value": 140}, {"DATE": "2020-02-05", "Category": "AA", "Value": 150}, {"DATE": "2020-02-05", "Category": "BB", "Value": 190}, {"DATE": "2020-02-06", "Category": "AA", "Value": 200}, {"DATE": "2020-02-06", "Category": "BB", "Value": 250} ] }, "transform": [ {"window": [{"op": "dense_rank", "as": "BarNo"}], "sortby": ["DATE"]} ], "encoding": { "x": {"field": "DATE", "type": "nominal"}, "y": {"field": "Value", "aggregate": "sum", "type": "quantitative"} }, "layer": [ {"mark": "bar", "encoding": {"color": {"field": "Category"}}}, { "mark": {"type": "text", "dy": -5}, "encoding": { "text": {"field": "BarNo", "aggregate": "min", "type": "quantitative"} } } ] } Vega Editor A: Never mind. Here is the solution: "transform": [ {"window": [{ "op":"distinct", "field":"DATE", "as": "BarNo"}], "sortby": ["DATE"], "frame": [null,0] }, { "joinaggregate": [{ "op":"max", "field": "BarNo", "as": "BarCount" }] } ] Vega-lite Editor
{ "language": "en", "url": "https://stackoverflow.com/questions/75640213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to render a function im passing down as a prop to a child component in React? I'm trying to show a list of times available in a form as options, I was able to so it using useState but now im trying it with a reducer and I can't figure out why its not working. I pass the function down as a prop and then call it int he child component but the options element doesn't render and i'm not recieving any errors in the browser. I can add more information if its needed. I tried to condense it. //parent component import BookingForm from '../components/bookingForm' import { useState,useReducer } from 'react' function Reservations(){ const initialState = ["17:00","18:00","19:00","20:00","21:00"] const updateTimes = (availableTimes,action)=>{ return availableTimes } const initializeTimes= () =>{ {initialState.map((times) =>{ return( <option key={initialState}>{times}</option> ) } )} } const [availableTimes, dispatch] = useReducer(updateTimes, initializeTimes); return( <> <Nav/> <header> <div> <h1>Book a table</h1> <h2>Little Lemon</h2> </div> </header> <div> {<BookingForm initializeTimes={initializeTimes} />} </div> {<Footer/>} </> ) } export default Reservations //Child component import { useState } from "react"; function BookingForm(props){ (...form elements) return( <div> <label htmlFor="res-time">Choose time</label> <select id="res-time"> {props.initializeTimes()} </select> </div> ) I've pretty much tried doing most things i could think of as I've googled how to pass functions as props to children but have not been successful. A: change initializeTimes function: const initializeTimes = () => { return initialState.map((times) => { return ( <option key={times}>{times}</option> ); }); }
{ "language": "en", "url": "https://stackoverflow.com/questions/75640216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to avoid repetitive indices in a sparse tensor in TensorFlow in Google Colab? We are trying to convert this NumPy code to TensorFlow.

NumPy version:

sK = ((KE.flatten()[np.newaxis]).T*(Emin+(xPhys)**penal*(Emax-Emin))).flatten(order='F')
K = coo_matrix((sK,(iK,jK)),shape=(ndof,ndof)).tocsc()
K = K[free,:][:,free]

TensorFlow version:

sess1 = tf.compat.v1.Session()
for i in range(0,len(iK)):
    SPindex.append([int(iK[i]),int(jK[i])])

def output_coo(KE,xPhys,SPindex1,freeN,penalty=5.4,Emin = 1e-9,Emax = 1.0):
    m1=tf.reshape(KE,shape=(64,1))
    m2=m1*(Emin+((xPhys)**penalty)*(Emax-Emin))
    m3=tf.transpose(m2)
    sK=tf.reshape(m3,shape=(115200,))
    K = tf.SparseTensor(indices=SPindex1,values=sK,dense_shape=[ndof,ndof])
    K = tf.sparse.reorder(K)
    K = tf.sparse.to_dense(K)
    K = tf.gather(K, freeN, axis=0)
    K = tf.gather(K, freeN, axis=1)
    return K

k1 = output_coo(KE,xPhys,SPindex,freeN)
sess1.run(k1)

Here is the error (posted as a screenshot).
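A hedged sketch of one way around duplicate (iK, jK) pairs (assuming the error is the usual complaint that a SparseTensor needs unique indices): tf.scatter_nd adds up values whose indices coincide, which mirrors what coo_matrix does when duplicates are summed on conversion, so the dense matrix can be assembled without building a SparseTensor at all:

import tensorflow as tf

def assemble_dense(sK, SPindex1, ndof):
    # scatter_nd accumulates updates that land on the same index, so duplicates are summed
    return tf.scatter_nd(indices=tf.constant(SPindex1, dtype=tf.int64),
                         updates=sK,
                         shape=[ndof, ndof])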
{ "language": "en", "url": "https://stackoverflow.com/questions/75640217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I model bird species richness against plant species richness in different farm types (harvested, fallow and cultivated plots) using a GLM? My project is on how birds utilize different hedgerow types (natural and planted) in different farm types such as fallow, harvested, and cultivated plots. So I want to see the effect of plant species richness and abundance on the bird species richness of different hedgerow types in different farm types.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: cv2.VideoCapture(0) read frame from a different camera for each run I ran into a very strange problem that appeared out of nowhere, the same code did not cause this problem before. The following code, the first time I ran it, it read the frame from my connected external camera; the second time I ran it, it read the frame from my MacBook's built-in camera; the third time it read the frame from the external camera. It just keeps switching and I don't know how to fix it, I want cameraCapture = cv2.VideoCapture(0) to always get the frame from the external camera. import cv2 cameraCapture = cv2.VideoCapture(1) # read success, frame = cameraCapture.read() while success and cv2.waitKey(1) == -1: img = frame cv2.imshow("Mine", img) success, frame = cameraCapture.read() Is there any way I can get the data of a certain numbered camera, such as name, resolution, etc.? A: You can use cameraCapture.get(<property id>) to get different properties of the capture device. There's a full list of them here, but the ones you're looking for are: * *cv.CAP_PROP_FRAME_WIDTH for the width *cv.CAP_PROP_FRAME_HEIGHT for the height.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Round Robin assignment with dynamic assignee group (members can leave the group) I would like to distribute tasks among the active members of a group: WorkerId IsActive 001 true 002 true 003 true 004 false 005 false 006 true Each new task is assigned to the next active member in the group. Tasks are created at irregular intervals, so I need some way of keeping track of the last member to be assigned a task (call him the "last assignee"), so that I can assign the next task to the next active member after him. Currently, I'm storing the last assignee in a field called lastAssignee. When a new task comes in for assignment, I retrieve the group members from the database and starting from the index of lastAssignee, I loop through the members to find the nearest next active member: // pseudo-code let nextAssignee; let groupMemberIds; for (let i = 0; i < groupMembers.length(); i++) { groupMemberIds[i] = groupMembers[i].Id; } int indexOfLastAssignee = groupMemberIds.indexOf(RRState.lastAssignee); for (int i = indexOfLastAssignee; i < groupMembers.size(); i++) { if (groupMembers[i].IsActive) { nextAssignee = groupMembers[i].Id; break; } } RRState.lastAssignee = nextAssignee; Problem: Members can leave the group. If lastAssignee leaves the group before the next assignment, then the solution above no longer works, since indexOfLastAssignee will be -1 (since his Id no longer appears in groupMemberIds). The only solution I could think of was saving a copy of the group members in RRState, so that I can use it on the next assignment: // pseudo-code let nextAssignee; let groupMemberIds; let oldGroup = RRState.oldGroup; for (let i = 0; i < oldGroup.length(); i++) { groupMemberIds[i] = groupMembers[i].Id; } int indexOfLastAssignee = groupMemberIds.indexOf(RRState.lastAssignee); for (int i = indexOfLastAssignee; i < groupMembers.size(); i++) { if (groupMembers[i].IsActive) { nextAssignee = groupMembers[i].Id; break; } } RRState.lastAssignee = nextAssignee; RRState.oldGroup = newGroupMembers; But this is clunky and none of the fields of the RRState object support lists. I'm not sure where to go from here, so was wondering if anyone knew of a general round robin implementation that supports dynamic worker groups.
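A hedged sketch of an alternative that does not depend on the last assignee still being in the group (assuming member ids have a stable sort order): store only the last assignee's id and pick the first active member whose id sorts after it, wrapping around to the start; the comparison still works when that id has left the group:

interface Member { id: string; isActive: boolean; }

function nextAssignee(members: Member[], lastAssigneeId: string | null): string | null {
  const active = members.filter(m => m.isActive).sort((a, b) => a.id.localeCompare(b.id));
  if (active.length === 0) return null;
  if (lastAssigneeId === null) return active[0].id;
  const after = active.find(m => m.id.localeCompare(lastAssigneeId) > 0);
  return (after ?? active[0]).id; // wrap around to the first active member
}

RRState then only needs to keep lastAssignee, with no stored copy of the old group.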
{ "language": "en", "url": "https://stackoverflow.com/questions/75640222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to install and use PouchDB in React Native? I am looking for a simple NoSQL database to be used locally (without syncing with a server) in a React Native app, and I decided to give PouchDB a try. I am wondering which of the following two packages I should use: > npm install pouchdb > npm install pouchdb-react-native pouchdb-react-native seems the better choice, but it's not active, not well documented, and has no tutorials. Is it possible to install pouchdb-react-native and use the JS PouchDB documentation and tutorials?
{ "language": "en", "url": "https://stackoverflow.com/questions/75640223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Type narrowing is not working in for loop Type narrowing is not working in for loop. How to fix to work type narrowing correctly in for loop? Below is a simple example(Please run it on TS playground) // string or null is unsure until runtime const htmlElements = [ { textContent: Math.random() < 0.5 ? "test": null }, { textContent: Math.random() < 0.5 ? "test": null }, { textContent: Math.random() < 0.5 ? "test": null }, ]; const contents: { content: string; }[] = []; // type narrowing work if (typeof htmlElements[0].textContent === "string") { contents.push({ content: htmlElements[0].textContent }); } // type narrowing not work for (const i in htmlElements) { if (htmlElements[i].textContent && typeof htmlElements[i].textContent === "string") { contents.push({ content: htmlElements[i].textContent // <-- Error: Type 'null' is not assignable to type 'string'. textContent: string | null }); } }
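A hedged sketch of the usual workaround (this reflects standard TypeScript behaviour: narrowing of an indexed access like htmlElements[i] is only remembered when the index is a literal or a constant, so at the push the compiler re-reads textContent as string | null): copy the value into a local constant first, and the narrowing sticks; a for...of loop also avoids the string keys that for...in produces:

for (const el of htmlElements) {
  const text = el.textContent; // local binding the compiler can narrow
  if (typeof text === "string") {
    contents.push({ content: text });
  }
}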
{ "language": "en", "url": "https://stackoverflow.com/questions/75640224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: karate read request as JSON file having tags for data driven scenario
I am trying to read the JSON request from a file; the request also needs to pull its data from a .csv file. However, the code below does not replace my key values, and they are passed through with the angle brackets.

expected

{
  "Key1": "HAPPY",
  "key2": "Val_12",
  "key3": "Val_13",
  "key4": "Val_14"
}

actual

{
  "Key1": "<key1>",
  "key2": "<key2>",
  "key3": "<key3>",
  "key4": "<key4>"
}

Feature: Validate TCs

Background:
  Given url 'https://'+env_apiHost
  Given path '/abc/pqr-stu/v1/wxy-zzzz'

Scenario Outline: Validate Functional Data
  * def request_string = read('classpath:data/jsnStr.json')
  And request request_string
  When method post
  Then status 200
  * print response

  Examples:
  | read('classpath:data/jsnVals.csv') |

data/jsnVals.csv

| key1   | key2   | key3   | key4   |
| HAPPY  | Val_12 | Val_13 | Val_14 |
| TC_002 | Val_22 | Val_23 | Val_24 |

data/jsnStr.json

{
  "Key1": "<key1>",
  "key2": "<key2>",
  "key3": "<key3>",
  "key4": "<key4>"
}
{ "language": "en", "url": "https://stackoverflow.com/questions/75640227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Averaging temperatures over time on multiple dates
I have a device that measures temperature at 10-second intervals and is worn by a subject for a number of days. I want to average the temperatures every 30 seconds. My data set can be missing data points (anything from a single missing 10-second reading to a block of several hours if the device was accidentally turned off).
The code I have written averages the temperatures over 30 seconds, e.g. 10.30.00 to 10.30.30, but it doesn't separate by date, so I end up with one average for that time block across all dates. I have added an example of the data below (same times over 3 days) and the converted table and output. The code I am using is below.

data
data as table
output

df_sum <- df[, c('Hour', 'Minute', 'Second') := .(data.table::hour(datetime), minute(datetime), second(datetime))
             ][, second_Cut := cut(Second, breaks = c(0,30,60), include.lowest = T)
             ][, .(Avg = mean(CorTemp)), .(Hour, Minute, second_Cut)]

A:
library(data.table)

# generate random sample
tmp = seq(as.POSIXct("2023-03-05 12:00:00"), as.POSIXct("2023-03-05 12:12:00"), by="sec")
df = data.table(datetime=tmp, TEMP=37+rnorm(length(tmp), sd = 2))

# this is what you want
res = df[, .(Hour=hour(datetime),
             Minute=minute(datetime),
             second_Cut=cut(second(df$datetime), c(0,30,60), include.lowest = T),
             TEMP)][, .(Avg=mean(TEMP)), by=.(Hour, Minute, second_Cut)]
res

This is the input data. BTW, you'd better provide the data file (such as a csv file) rather than a screenshot.
This is the output you want.
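Independent of the data.table specifics, the underlying fix is to make the date part of the grouping key instead of grouping on hour/minute/second alone (in the R code above that would mean adding something like as.Date(datetime) to the by= grouping, which has not been verified against the poster's data). A language-agnostic sketch of the same idea in TypeScript, with a hypothetical Reading shape, that floors each full timestamp to its 30-second bucket:

// Sketch only: the Reading shape and field names are hypothetical.
interface Reading {
  timestamp: Date;   // full date-time of the measurement
  temp: number;
}

// Key each reading by its timestamp floored to the nearest 30 s, so the date
// is part of the group and identical clock times on different days land in
// different buckets. Missing readings simply contribute nothing to a bucket.
function averagePer30s(readings: Reading[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const r of readings) {
    const bucketMs = Math.floor(r.timestamp.getTime() / 30_000) * 30_000;
    const key = new Date(bucketMs).toISOString();
    const acc = sums.get(key) ?? { total: 0, n: 0 };
    acc.total += r.temp;
    acc.n += 1;
    sums.set(key, acc);
  }
  const averages = new Map<string, number>();
  for (const [key, { total, n }] of sums) {
    averages.set(key, total / n);
  }
  return averages;
}

Because the bucket key keeps the date, the problem described in the question (one average per time block across all dates) cannot occur.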
{ "language": "en", "url": "https://stackoverflow.com/questions/75640229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I solve this problem? None of these files exist: * ..\src\components\Main.jsx
The full error message is:

None of these files exist:
* ..\src\components\Main.jsx(.native|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx|.android.js|.native.js|.js|.android.jsx|.native.jsx|.jsx|.android.json|.native.json|.json)
* ..\src\components\Main.jsx\index(.native|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx|.android.js|.native.js|.js|.android.jsx|.native.jsx|.jsx|.android.json|.native.json|.json)

I have the file Main.jsx in ./src/components/Main.jsx but it doesn't detect it.

VS Code explorer

The react native version is 0.71.3
The react version is 18.2.0
The node version is 18.14.2

I have tried using a backwards slash but that didn't solve the problem.

App.js

import React from 'react'
import Main from './src/components/Main.jsx'

export default function App() {
  return <Main />
}

Main.jsx

import React from 'react'
import { Text, View } from 'react-native'

const Main = () => {
  return (
    <View>
      <Text>Hola Mundo</Text>
    </View>
  )
}

export default Main

A: If you are a Mac user, check the file name's case sensitivity: e.g. if your file is named main.js and you import Main.js, file detection can sometimes fail on Mac machines. Also check that the path name is correct, and re-run the server after cleaning the npm cache. To clean the cache you can use this command:

> npm cache clean --force
{ "language": "en", "url": "https://stackoverflow.com/questions/75640230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails 7.x and Zeitwerk seems to load / find nothing? What did I forget?
I have an old Rails 3.x application that I am now migrating to 7.x (it is/was already running under 5.0, so it is not so hard). I planned to do this step by step, but in 2023 it is more or less impossible to install the older versions. The application runs nicely and (almost) without any requires under 3.x, so I assume I kept the naming conventions. But Zeitwerk ignores all my constants; not the complicated, tricky cases I read about in other Q&As, just easy things.
Here are two examples that I think Zeitwerk should find, but doesn't.
I have a config/initializers/site_init.rb that I use to configure some of my modules:

Site.setup do |config|
  config.default_do_say=false
  config.default_show_all_exceptions=false
end

Site::Cache.setup do |config|
  …
end

So I would expect Zeitwerk to find it in lib/site/cache.rb, where it is defined like this:

module Site
  class Cache
    include Singleton
    extend Configurable
    configurable_local_name 'config', 'setup'
    configurable_all_to_module
    configurable_global cache_path: "./tmp/cache"
    configurable_global auto_file_size: 1000
    configurable_global auto_persist: 100
    ...

The second one is also easy (config/initializers/bot_check.rb):

BotCheck.setup do |config|
  config.etw_divider=20
  config.etw_min_level=0
end

It is located directly in controllers:

module BotCheck
  extend Configurable
  configurable_local_name 'config', 'setup'
  configurable_all_to_module
  # configurable_global start_time: Time.now
  configurable_global etw_divider: 5
  configurable_global etw_min_level: 0

  def self.check_level
    count=BotResult.where("created_at > ?", Time.now()-1.hour).count()
    Rails.logger.warn "bot check level: #{count}, #{config.etw_min_level}".white.on_red
    [count/config.etw_divider, config.etw_min_level].max
  end
end

rails zeitwerk:check tells me:

/config/environment.rb:5:in -> NameError: uninitialized constant BotCheck

You see, I am struggling early on. That's why I am sure I have overlooked or not understood something very basic about Zeitwerk.
The relevant part of application.rb looks like this:

if Rails.version < "7"
  config.autoload_paths += %W(#{config.root}/app/helpers)
  config.autoload_paths += %W(#{config.root}/app/helpers/fields)
  config.autoload_paths += %W(#{config.root}/app/helpers/tags)
  config.autoload_paths += %W(#{config.root}/lib)
  config.autoload_paths += %W(#{config.root}/lib/geo_lib)
  config.autoload_paths += %W(#{config.root}/lib/tech_draw)
  config.autoload_paths += %W(#{config.root}/lib/site)
  config.autoload_paths += %W(#{config.root}/lib/inplace_trans)
  config.autoload_paths += %W(#{config.root}/lib/flat_form)
  config.autoload_paths += %W(#{config.root}/lib/site_tag_helper)
else
  config.load_defaults 7.0
  config.add_autoload_paths_to_load_path=false
  puts "---------------- 7777777 -----------------"
  config.eager_load = false
  config.autoload_paths << "#{config.root}/lib"
  config.autoload_paths << "#{config.root}/lib/site"
  puts config.eager_load_namespaces
end

# check for typos
config.autoload_paths.each { |p| puts "autoload_paths: #{p}"; puts "#{p} not found or no directory".yellow if !File.directory?(p) }
config.eager_load_paths.each { |p| puts "eager_load_paths: #{p}"; puts "#{p} not found or no directory".yellow if !File.directory?(p) }
{ "language": "en", "url": "https://stackoverflow.com/questions/75640231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Transactions With MetaMask Returns An Internal JSON RPC Error
This is the error MetaMask gives when my app prompts it to open:

{"message": "Error Domain=org.walletconnect Code=-32000 \"Internal JSON-RPC error.\" UserInfo={NSLocalizedDescription=Internal JSON-RPC error.}"}

I tried changing my MetaMask Mumbai Polygon TestNet RPC to the same one used in my app (I am using a third-party provider called particle.network). I am looking for guidance on what might be returning this error message.
{ "language": "en", "url": "https://stackoverflow.com/questions/75640232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }