
javascript – How can I parse an HTML string in Google Apps Script without using XmlService?


I want to build a scraper using Google Spreadsheets and Google Apps Script. I know it is possible, and I have seen several tutorials and threads about it.

The main idea is to use:

  var html = UrlFetchApp.fetch('http://en.wikipedia.org/wiki/Document_Object_Model').getContentText();
  var doc = XmlService.parse(html);

and then work with the elements. However, the method

XmlService.parse()

does not work for some pages. For example, if I try:

function test(){
    var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
    var parse = XmlService.parse(html);
}

I get the following error:

Error on line 225: The entity name must immediately follow the '&' in the entity reference. (line 3, file "")

I tried using string.replace() to remove the characters that apparently cause the error, but it does not work; other errors keep appearing. Take the following code as an example:

function test(){
    var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
    var regExp = new RegExp("&", "gi");
    html = html.replace(regExp,"");

    var parse = XmlService.parse(html);
}

gives me the following error:

Error on line 358: The content of elements must consist of well-formed character data or markup. (line 6, file "")
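
Stripping every "&" also changes the page content, and a less destructive sketch would be to escape only the bare ampersands that are not already part of an entity, for example:

function testEscaped(){
    var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
    // Replace "&" only when it is not followed by something that looks like an entity (e.g. &amp; or &#160;).
    html = html.replace(/&(?!#?\w+;)/g, "&amp;");

    var parse = XmlService.parse(html);
}

Even with the entities fixed, though, XmlService.parse() usually still fails on pages like this one, because ordinary HTML contains unclosed tags (<br>, <meta>, <img> and so on) that are not well-formed XML, which is what the error above is complaining about.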

I believe this is a limitation of the XmlService.parse() method.
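
XmlService is a strict XML parser, so it only accepts well-formed markup. A minimal illustration: the commented-out call below throws because <br> is never closed, while the self-closed variant parses fine.

function wellFormedDemo(){
    // Fails: <br> is not closed, so this is not well-formed XML.
    // var bad = XmlService.parse('<div>line one<br>line two</div>');

    // Works: every tag is closed.
    var good = XmlService.parse('<div>line one<br/>line two</div>');
    Logger.log(good.getRootElement().getValue());  // "line oneline two"
}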

I have read these threads:

Google App Script parse table from messed html and What is the best way to parse html in google apps script. Both suggest using a deprecated method called Xml.parse(), which accepts a second parameter that allows HTML to be parsed. However, as mentioned, it is deprecated and I cannot find any documentation for it anywhere. Xml.parse() does seem to parse the string, but because of the missing documentation I have trouble working with the elements. It is also not a safe long-term solution, since it could be shut down at any time.
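
For the record, the pattern from those threads looks roughly like the sketch below: Xml.parse() takes a second lenient flag that tolerates HTML, and the result can be re-serialized and, if needed, fed back into XmlService. The calls follow the examples in those answers rather than current documentation, so treat them as assumptions; the service could stop working at any moment.

function testDeprecatedXml(){
    var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
    // Second argument turns on lenient parsing, which tolerates HTML (deprecated Xml service).
    var doc = Xml.parse(html, true);
    // Property-style navigation down to <body>, then re-serialize it.
    var bodyXml = doc.html.body.toXmlString();
    Logger.log(bodyXml);
}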

So, I would like to know: how can I parse this HTML in Google Apps Script?

I have also tried:

function test(){

    var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
    var htmlOutput = HtmlService.createHtmlOutput(html).getContent();

    var parse = XmlService.parse(htmlOutput);
}

but it does not work either, and I get this error:

Malformed HTML content:

I thought about using an open-source library to parse the HTML, but I could not find one.

My final goal is to get some information (such as price, link, product name, etc.) from a set of pages. I have managed to do it with a series of RegEx:

var ss = SpreadsheetApp.getActiveSpreadsheet();
var linksSheet = ss.getSheetByName("Links");
var resultadosSheet = ss.getSheetByName("Resultados");

function scrapyLoco(){

  var links = linksSheet.getRange(1, 1, linksSheet.getLastRow(), 1).getValues();
  var arrayGrandao = [];
  for (var row =  0, len = links.length; row < len; row++){
   var link = links[row];


   var arrayDeResultados = pegarAsCoisas(link[0]);
   Logger.log(arrayDeResultados);
   arrayGrandao.push(arrayDeResultados);
  }   


  resultadosSheet.getRange(2, 1, arrayGrandao.length, arrayGrandao[0].length).setValues(arrayGrandao);

}


function pegarAsCoisas(linkDoProduto) {
  var resultadoArray = [];

  var html = UrlFetchApp.fetch(linkDoProduto).getContentText();
  var regExp = new RegExp("<h1([^]*)h1>", "gi");
  var h1Html = regExp.exec(html);
  var h1Parse = XmlService.parse(h1Html[0]);
  var h1Output = h1Parse.getRootElement().getText();
  h1Output = h1Output.replace(/(\r\n|\n|\r|(^( )*))/gm,"");

  regExp = new RegExp("Ref.: ([^(])*", "gi");
  var codeHtml = regExp.exec(html);
  var codeOutput = codeHtml[0].replace("Ref.: ","").replace(" ","");

  regExp = new RegExp("margin-top: 5px; margin-bottom: 5px; padding: 5px; background-color: #699D15; color: #fff; text-align: center;([^]*)/div>", "gi");
  var descriptionHtml = regExp.exec(html);
  var regExp = new RegExp("<p([^]*)p>", "gi");
  var descriptionHtml = regExp.exec(descriptionHtml);
  var regExp = new RegExp("^[^.]*", "gi");
  var descriptionHtml = regExp.exec(descriptionHtml);
  var descriptionOutput = descriptionHtml[0].replace("<p>","");
  descriptionOutput = descriptionOutput+".";

  regExp = new RegExp("ecom(.+?)Main.png", "gi");
  var imageHtml = regExp.exec(html);
  var comecoDaURL = "https://www.nespresso.com/";
  var imageOutput = comecoDaURL+imageHtml[0];

  var regExp = new RegExp("nes_l-float nes_big-price nes_big-price-with-out([^]*)p>", "gi");
  var precoHtml = regExp.exec(html);
  var regExp = new RegExp("[0-9]*,", "gi");
  precoHtml = regExp.exec(precoHtml);
  var precoOutput = "BRL "+precoHtml[0].replace(",","");

  resultadoArray = [codeOutput,h1Output,descriptionOutput,"Home & Garden > Kitchen & Dining > Kitchen Appliances > Coffee Makers & Espresso Machines",
                    "Máquina",linkDoProduto,imageOutput,"new","in stock",precoOutput,"","","","Nespresso",codeOutput];

  return resultadoArray;
}

But this is very time-consuming to program, hard to change dynamically, and not very reliable.
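
The repeated exec-and-clean boilerplate could at least be collected into a small helper, sketched below with a hypothetical extractFirst() function (the pattern in the usage comment is one of the patterns already used above), although it stays just as brittle:

// Hypothetical helper, not part of the original sheet: run one pattern against the
// page and return the first match, or an empty string when nothing matches.
function extractFirst(html, pattern) {
  var match = new RegExp(pattern, "gi").exec(html);
  return match ? match[0] : "";
}

// Example of how pegarAsCoisas() could call it:
// var codeOutput = extractFirst(html, "Ref\\.: ([^(])*").replace("Ref.: ", "").replace(" ", "");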

I need a way to parse this HTML and access its elements easily.
It is not actually an add-on, just a plain Google Apps Script.

Solution:

I have done this in vanilla JS. It is not real HTML parsing, just an attempt to get some content out of the string (the fetched URL):

function getLKKBTC() {
  var url = 'https://www.lykke.com/exchange';
  var html = UrlFetchApp.fetch(url).getContentText();
  var searchstring = '<td class="ask_BTCLKK">';
  var index = html.search(searchstring);
  if (index >= 0) {
    // Skip past the marker and read the next 6 characters, which hold the rate.
    var pos = index + searchstring.length;
    var rate = html.substring(pos, pos + 6);
    rate = parseFloat(rate);
    rate = 1 / rate;
    return parseFloat(rate);
  }
  throw "Failed to fetch/parse data from " + url;
}
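
The same marker-and-offset idea can be generalized. Below is a sketch under assumptions (the marker string and slice length depend entirely on the page being scraped) of how it might be pointed at one of the product pages from the question; 'nes_big-price' is taken from a regex already used above, but the exact slice would still need tuning.

function extractAfterMarker(url, marker, length) {
  var html = UrlFetchApp.fetch(url).getContentText();
  var index = html.indexOf(marker);
  if (index === -1) {
    throw "Marker not found in " + url;
  }
  // Return the fixed-size slice that follows the marker.
  var start = index + marker.length;
  return html.substring(start, start + length);
}

// Hypothetical usage; inspect the page source to pick the real marker and length.
// var rawPrice = extractAfterMarker(
//     "https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v",
//     "nes_big-price", 40);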

Tags: javascript, parsing, google-sheets, google-apps-script, html-parsing
Source: https://codeday.me/bug/20191004/1853489.html