Everything posted by confused and dazed

  1. I tried that, but it's not working... the table is buried in a script tag.

         $dom = new DOMDocument();
         @$dom->loadHTML($stuff);
         foreach ($dom->getElementsByTagName('script') as $this1) {
             $try1 = $this1->*****dont know what to put here******("I_want_this");
             echo $try1;
         }
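The thread stops before the missing piece is filled in, but one way to finish it: a script tag has no attribute to query; its contents are plain text, so read the tag's textContent, isolate the JSON object, and json_decode it. A minimal sketch, where the $html string stands in for the fetched page source and the regex assumes the object has no nested braces:

```php
<?php
// Sketch: pull the JSON blob out of a <script> tag and decode it.
// $html is a stand-in for the page source fetched with cURL.
$html = '<html><body><script>var data = '
      . '{"status":"X","some_id":"112","getting_close":"ER","I_want_this":"bingo"};'
      . '</script></body></html>';

$dom = new DOMDocument();
@$dom->loadHTML($html);

foreach ($dom->getElementsByTagName('script') as $script) {
    // textContent is the raw text inside the tag; grab the {...} object
    if (preg_match('/\{[^{}]*"I_want_this"[^{}]*\}/', $script->textContent, $m)) {
        $data = json_decode($m[0], true);
        echo $data['I_want_this']; // bingo
    }
}
```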
  2. OK, so once I started digging into the data the user page provided, I realized the content I want is buried in Bootstrap table data. How do I extract "bingo" from this table data?

         {"status":"X","some_id":"112","getting_close":"ER","I_want_this":"bingo"}
  3. So this absolutely worked. I was able to log in and grab the info I needed. What I like about the W3S forum (and the reason it's the only one I go to) is that the answers are not simply given. Most if not all of you try to teach and let folks do the work on their own. THANKS AGAIN!!
  4. Awesome! I have another thread going for this topic, so I will close this one down because the other one starts from the beginning. As usual, thanks for the response!! http://w3schools.invisionzone.com/index.php?showtopic=52997&hl=
  5. "Submit a post request that contains the data from the login form with the correct names, and get the cookies that the server sends back." I guess I have some investigating ahead of me. I understand what that is on the surface but have no idea how to accomplish it. I will be back at some point, either thanking or asking more questions. Until then... may all your code be syntax error free!
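The quoted advice can be sketched as two cURL requests sharing a cookie jar: first POST the form fields and let cURL save the session cookies, then fetch the protected page sending those cookies back. This is a sketch, not a drop-in solution: the URLs and the 'username'/'password' field names are placeholders and must match the real login form's action URL and input names.

```php
<?php
// Sketch of the POST-then-cookies approach. All URLs and field names
// here are placeholders, not the real site's values.
function fetch_behind_login(string $loginUrl, string $pageUrl, array $fields)
{
    $jar = tempnam(sys_get_temp_dir(), 'cookies');

    // Step 1: log in and capture the session cookie(s)
    $ch = curl_init($loginUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
    curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);      // write cookies here
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_exec($ch);
    curl_close($ch);

    // Step 2: fetch the protected page, sending the cookies back
    $ch = curl_init($pageUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);     // read cookies from here
    $out = curl_exec($ch);
    curl_close($ch);

    return $out;
}

// e.g. $html = fetch_behind_login('https://example.com/login',
//                                 'https://example.com/members',
//                                 ['username' => 'usr1', 'password' => 'pswd1']);
```

The key detail is that the same jar file is passed as COOKIEJAR on the login request and COOKIEFILE on the follow-up, so the session survives between the two requests.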
  6. There is no error message. Maybe it has something to do with what I expect to see. From echo $out; I expect to see the logged-in page embedded in the current page, but all I see is the login page embedded instead. I expected the user info page rather than the login page, as well as access to the source of the user page so I can scrape it.
  7. Hello internet, Recently I have been working with cURL sessions and scraping data from webpages. I have been fairly successful until I tried to access data from pages that are username and password protected. I have the username and password, so that's not an issue, but I am not able to get the data on the page. Any thoughts?

         $username = 'usr1';
         $password = 'pswd1';

         $ch = curl_init();
         curl_setopt($ch, CURLOPT_URL, 'somelink');
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
         curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
         curl_setopt($ch, CURLOPT_USERPWD, $username . ":" . $password);
         $out = curl_exec($ch);
         if ($out === false) {
             echo 'Curl error: ' . curl_error($ch);
         }
         echo $out;
         curl_close($ch);
  8. I'm using this, but it is not working... It's displaying the login page, but it's not logging in. What do I do?

         $username = 'usr1';
         $password = 'pasw1';

         $ch = curl_init();
         curl_setopt($ch, CURLOPT_URL, $link);
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
         curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
         curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
         $out = curl_exec($ch);
         if ($out === false) {
             echo 'Curl error: ' . curl_error($ch);
         }
         echo $out;
         curl_close($ch);
  9. Again, thanks for setting me off in the right direction. I was able to resolve that issue as well. SO... now I have a new problem. I was able to send the href links to my database, where I was going to pull them individually into a cURL session (with a while loop), when I realized you need a login and password to get to the page. I have both of those things, but I don't know how to code them in so the cURL session can access the data from that page. Where do I go from here?
  10. That worked. Thanks! Each time I progress I figure out I need more... So here is my next quest: I need to get the text after class="sort", but I need to be able to group the 22 with THIS TEXT1 and the 0 with THIS TEXT2. Basically I am sending the data to MySQL, and THIS TEXT1 and 22 need to be in the same row in the database.

         <td align="left"><a href='some site' target='_blank'>THIS TEXT1</a></td><td align="center" class="sort">22</td>
         <td align="left"><a href='some site' target='_blank'>THIS TEXT2</a></td><td align="center" class="sort">0</td>
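One way to keep each label with its number, assuming cells laid out as in the sample above: walk the td class="sort" cells with DOMXPath and, for each one, read the link in the td immediately before it. The $html string below just reuses the sample cells from the post, wrapped in a table row.

```php
<?php
// Sketch: pair each link's text with the number in the following
// td class="sort" cell, ready to insert as one database row.
$html = '<table><tr>'
      . '<td align="left"><a href="some site">THIS TEXT1</a></td>'
      . '<td align="center" class="sort">22</td>'
      . '<td align="left"><a href="some site">THIS TEXT2</a></td>'
      . '<td align="center" class="sort">0</td>'
      . '</tr></table>';

$dom = new DOMDocument();
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);

$rows = [];
// every td.sort, paired with the <a> in the td right before it
foreach ($xpath->query('//td[@class="sort"]') as $td) {
    $link   = $xpath->query('preceding-sibling::td[1]/a', $td)->item(0);
    $rows[] = [$link->textContent, $td->textContent];
}
// $rows is [['THIS TEXT1', '22'], ['THIS TEXT2', '0']]
```

Each entry in $rows is one (text, number) pair, so each can go into the database as a single row.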
  11. So, I have been successful in pulling all the very specific links that I want from a webpage using the code below; it works well. However, I am now struggling to pull the text between the <a></a> tags. Example: <a href=http...bla bla bla>THIS TEXT</a>. How do I get the text "THIS TEXT"? I won't be able to search for "THIS TEXT", because the text will not actually be "THIS TEXT"; it will be different each time. Any thoughts?

         $ch = curl_init();
         curl_setopt($ch, CURLOPT_URL, "some site");
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
         $out = curl_exec($ch);
         if ($out === false) {
             echo 'Curl error: ' . curl_error($ch);
         } else {
             echo 'Operation completed without any errors';
             echo "<br>";
         }
         curl_close($ch);

         $dom = new DOMDocument();
         @$dom->loadHTML($out);
         foreach ($dom->getElementsByTagName('a') as $links) {
             $try = $links->getAttribute('href');
             if (preg_match('#^some very specific links#i', $try) === 1) {
                 print_r($try);
                 echo "<br>";
             }
         }
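A minimal sketch of the missing piece: alongside getAttribute('href'), the text between <a> and </a> is exposed on the element node as textContent (or nodeValue), whatever that text happens to be, so no pattern matching on the text itself is needed. The URL and text here are made-up stand-ins.

```php
<?php
// Sketch: reading an anchor's text via textContent, not by searching for it.
$dom = new DOMDocument();
@$dom->loadHTML('<p><a href="http://example.com/x">THIS TEXT</a></p>');

foreach ($dom->getElementsByTagName('a') as $links) {
    echo $links->getAttribute('href') . "\n";  // the link target
    echo $links->textContent . "\n";           // the text between the tags
}
```

In the loop from the post above, that means $links->textContent gives the label for whichever href just matched.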
  12. OK, so I decided to go the DOMDocument route, and so far, so good. I have been able to pull all the links from the site. Next up: delimiting and saving to MySQL. More to come!!
  13. This is as far as I have been able to get. I know the code below works, because it grabs the site and displays it on the page. But when I try to get the info I want out of the source code, I end up with nothing. I am trying to grab the href links, but my arrays come up empty with no data.

         $ch = curl_init();
         curl_setopt($ch, CURLOPT_URL, "some site");
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
         $output = curl_exec($ch);
         if ($output === false) {
             echo 'Curl error: ' . curl_error($ch);
         } else {
             echo 'Operation completed without any errors';
         }
         echo $output;
         curl_close($ch);
  14. I am familiar with APIs; I used one when creating a plugin, but there I appended .json to get the HTML data. I don't know how to access an API any other way. How do I do it?
  15. If they did, how would I access the info using the API?
  16. You mean like putting '.json' on the end of the webpage path? I did that, and it did not launch a page with the data. Do you mean a different way?
  17. I would like to pull dozens of href links from a webpage, as well as data within a Bootstrap callout, to export to a database. Basically, there is a webpage I am interested in that has many links to other pages, as well as a lot of data I would like to export to a database.
  18. Yeah, that tutorial kind of stalled out and went nowhere... I looked and looked, and I can't seem to find one that starts from the beginning. Anyone have any suggestions or links to tutorials?
  19. Interestingly enough, I started this video series on cURL; it's getting me started. Thanks.
  20. Hello internet, There is a webpage I would like to start building, but I need to be able to pull data from other webpages. The information is public, so it's nothing sketchy, but I don't know where to start. I began with just doing Google searches, but it appears this is more involved than I originally thought. If you have any recommendations for tutorials or forum discussions, I would be grateful. Thanks.
  21. Hello internet, I have several blocks of code set up for images based in HTML, but I am also using this page to process a previous form, so the file is saved as a ".php". I am using the following code to contain the information:

          <div class="block_1"> <This is where I have an image and two radio buttons> </div>
          <div class="block_2"> <This is where I have an image and two radio buttons> </div>
          ...
          <div class="block_100"> <This is where I have an image and two radio buttons> </div>

      Based on how many images are available for the user to see, I would like to limit how many of these blocks appear. The page's capacity is 100 of these blocks. HTML does not accept if() statements, so how could I set it up? I can $_POST to a variable to get a count of how many images I want displayed, but how do I use that variable to display or hide blocks? Some background: I need to have 100 individual blocks set up so the user can select which ones they want to add to their favorites list. However, I don't want the user to see 20 image blocks with radio buttons AND 80 "x" images where there is no image to load.
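Since the file is already .php, one approach is to generate the blocks in a loop and cap the loop at the posted count, instead of hand-writing 100 divs and hiding some. A sketch under assumptions: the field name 'img_count', the $images array, and the radio names are placeholders for whatever the real form and image list use.

```php
<?php
// Sketch: print only as many image blocks as there are images to show.
// 'img_count' and the choice_N radio names are assumed, not from the site.
function render_blocks(array $images, int $count): string
{
    $count = min($count, count($images), 100);   // page max is 100 blocks
    $html  = '';
    for ($i = 1; $i <= $count; $i++) {
        $src   = htmlspecialchars($images[$i - 1]);
        $html .= '<div class="block_' . $i . '">'
               . '<img src="' . $src . '" alt="">'
               . '<input type="radio" name="choice_' . $i . '" value="yes">'
               . '<input type="radio" name="choice_' . $i . '" value="no">'
               . '</div>';
    }
    return $html;
}

// e.g. echo render_blocks($images, (int)($_POST['img_count'] ?? 0));
```

With 20 images posted, only block_1 through block_20 are ever emitted, so the 80 empty "x" blocks never reach the user at all.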
  22. Hmmmm.... I will go back and look over all the information provided.
  23. Thanks for the links. I took a look at both. However, it seems I am using a different approach to cycle through the images; I am using the code below. I set up individual variables, and the function steps through them. I would like to fade the images in/out using this code I started with. Any recommendations?

         var step = 1;
         function slideit() {
             if (!document.images) return;
             document.images.slide.src = eval("image" + step + ".src");
             if (step < 7) step++;
             else step = 1;
             setTimeout("slideit()", 2500);
         }