
Relearning Big Data Together - Hive - Day 52: Common Functions, Complex Functions, Row/Column Conversion, Custom UDFs, Hive Shell


Hive Common Functions


Relational Operators

=, == and <=> all test equality, while <> and != both test inequality. <> is generally preferred for inequality because it is the standard form and works in every SQL dialect, whereas != is rejected as a syntax error by some older ones; likewise = is the portable equality operator, while == and <=> are Hive extensions (<=> is the NULL-safe equals, returning true when both sides are NULL instead of returning NULL).

A RLIKE B tests whether the regular expression B matches anywhere inside A, while A LIKE B tests whether A as a whole matches the SQL wildcard pattern B.
REGEXP is used exactly like RLIKE.
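A minimal sketch of these operators on literal values (no table needed; the comments show the expected results):

select 1 <> 2;                    -- true
select null = null;               -- NULL: plain equality yields NULL when either side is NULL
select null <=> null;             -- true: <=> is the NULL-safe equals
select 'football' rlike 'foo';    -- true: the regex only has to match a substring
select 'football' like 'foo';     -- false: LIKE must match the whole string
select 'football' like 'foo%';    -- true
select 'football' regexp 'foo';   -- true: same behaviour as RLIKE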

Numeric Functions
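A minimal sketch of a few commonly used numeric functions (values chosen purely for illustration):

select round(3.1415926, 2);  -- 3.14
select floor(3.7);           -- 3
select ceil(3.1);            -- 4  (ceiling() is a synonym)
select abs(-10);             -- 10
select pmod(7, 3);           -- 1  (positive modulus)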

Conditional Functions

select if(1>0,1,0); 
select if(1>0,if(-1>0,-1,1),0);
select COALESCE(null,'1','2'); // 1  evaluated left to right until the first non-NULL value
select COALESCE('1',null,'2'); // 1
select  score
        ,case when score>120 then '优秀'
              when score>100 then '良好'
              when score>90 then '及格'
        else '不及格'
        end as pingfen
from score limit 20;


select  name
        ,case name when "施笑槐" then "槐ge"
                  when "吕金鹏" then "鹏ge"
                  when "单乐蕊" then "蕊jie"
        else "算了不叫了"
        end as nickname
from students limit 10;

Note: the WHEN branches are evaluated top to bottom and the first matching one wins, so the order of the conditions matters.
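As a quick illustration of why the order matters, writing the loosest condition first shadows the later branches (a hedged sketch on the same score table):

select  score
        ,case when score>90  then '及格'   -- matches first for every score above 90
              when score>100 then '良好'   -- never reached
              when score>120 then '优秀'   -- never reached
        else '不及格'
        end as pingfen
from score limit 20;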

Date Functions

select from_unixtime(1610611142,'yyyy/MM/dd HH:mm:ss');

select from_unixtime(unix_timestamp(),'yyyy/MM/dd HH:mm:ss');
// the pattern letters follow Java's SimpleDateFormat, so use lowercase yyyy (uppercase YYYY is the week-based year and can be wrong around New Year)
// '2021年09月8日' -> '2021-09-08'
select from_unixtime(unix_timestamp('2021年09月8日','yyyy年MM月dd日'),'yyyy-MM-dd');
// strip the noise out of "09大帅2021宇宙第一8逼" and output "2021/09/08"
select from_unixtime(unix_timestamp("09大帅2021宇宙第一8逼","MM大帅yyyy宇宙第一dd逼"),"yyyy/MM/dd");


String Functions

concat: concatenation

concat('123','456'); // 123456
concat('123','456',null); // NULL

concat_ws: concatenation with a separator

select concat_ws('#','a','b','c'); // a#b#c
select concat_ws('#','a','b','c',NULL); // a#b#c  the separator is configurable, and NULL values are skipped automatically
select concat_ws("|",cast(id as string),name,cast(age as string),gender,clazz) from students limit 10;

substring: take part of a string, starting at a given position and optionally for a given length

select substring("abcdefg",1); // abcdefg HQL中涉及到位置的时候 是从1开始计数
// '2021/09/08' -> '2021-09-08'
select concat_ws("-",substring('2021/09/08',1,4),substring('2021/09/08',6,2),substring('2021/09/08',9,2));

split: splitting

select split("abcde,fgh",","); // ["abcde","fgh"]
select split("a,b,c,d,e,f",",")[2]; // c

select explode(split("abcde,fgh",",")); // abcde
										//  fgh

get_json_object: parse JSON data

// parse JSON-formatted data
select get_json_object('{"name":"zhangsan","age":18,"score":[{"course_name":"math","score":100},{"course_name":"english","score":60}]}',"$.score[0].score"); // 100



WordCount in Hive


create table words(
    words string
)row format delimited fields terminated by '|';

Contents of /usr/local/soft/data/words_data.txt:

hello,java,hello,java,scala,python
hbase,hadoop,hadoop,hdfs,hive,hive
hbase,hadoop,hadoop,hdfs,hive,hive

load data local inpath "/usr/local/soft/data/words_data.txt" into table words;

select word,count(*) from (select explode(split(words,',')) word from words) a group by a.word;


Hive Window Functions


Test data

vim /usr/local/soft/data/new_score_data.txt
111,69,class1,department1
112,80,class1,department1
113,74,class1,department1
114,94,class1,department1
115,93,class1,department1
121,74,class2,department1
122,86,class2,department1
123,78,class2,department1
124,70,class2,department1
211,93,class1,department2
212,83,class1,department2
213,94,class1,department2
214,94,class1,department2
215,82,class1,department2
216,74,class1,department2
221,99,class2,department2
222,78,class2,department2
223,74,class2,department2
224,80,class2,department2
225,85,class2,department2

Create table statement

create table new_score(
    id  int
    ,score int
    ,clazz string
    ,department string
) row format delimited fields terminated by ",";

Load the data:

load data local inpath "/usr/local/soft/data/new_score_data.txt" into table new_score;

row_number: ranking with no ties (every row gets a distinct number)

The first thing that usually appears inside the OVER clause is PARTITION BY. The PARTITION BY clause, also called the query-partition clause, is very similar to GROUP BY: it splits the rows into groups by the partition key, and the function written before OVER is evaluated within each group; once a partition boundary is crossed, the calculation starts over. The difference from GROUP BY is that the rows are not collapsed: every input row is kept and simply has the window value attached.
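A quick sketch of that contrast, using the new_score table defined above:

-- GROUP BY collapses each class to a single row
select clazz, max(score) as max_score from new_score group by clazz;

-- the window function keeps every row and attaches the per-class value to it
select id, score, clazz, max(score) over (partition by clazz) as max_in_clazz from new_score;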

dense_rank: tied rows share a rank, and the following ranks continue consecutively (no gaps)

rank: tied rows share a rank, and the following rank skips ahead (gaps)

PERCENT_RANK: (rank - 1) / (number of rows in the partition - 1)

select  id
        ,score
        ,clazz
        ,department
        ,row_number() over (partition by clazz order by score desc) as row_number_rk
        ,dense_rank() over (partition by clazz order by score desc) as dense_rk
        ,rank() over (partition by clazz order by score desc) as rk
        ,percent_rank() over (partition by clazz order by score desc) as percent_rk
from new_score;

Result:

id  score   clazz   department  row_number_rk dense_rk  rk  percent_rk
114	 94	    class1	department1	    1       	1	    1	    0.0
214	 94	    class1	department2	    2       	1	    1	    0.0
213	 94	    class1	department2	    3       	1	    1	    0.0
211	 93	    class1	department2	    4       	2	    4	    0.3
115	 93	    class1	department1	    5       	2	    4	    0.3
212	 83	    class1	department2	    6       	3	    6	    0.5
215	 82	    class1	department2	    7       	4	    7	    0.6
112	 80	    class1	department1	    8       	5	    8	    0.7
113	 74	    class1	department1	    9       	6	    9	    0.8
216	 74	    class1	department2	    10      	6	    9	    0.8
111	 69	    class1	department1	    11      	7	    11	    1.0
221	 99	    class2	department2	    1       	1	    1	    0.0
122	 86	    class2	department1	    2       	2	    2	    0.125
225	 85	    class2	department2	    3       	3	    3	    0.25
224	 80	    class2	department2	    4       	4	    4	    0.375
123	 78	    class2	department1	    5       	5	    5	    0.5
222	 78	    class2	department2	    6       	5	    5	    0.5
121	 74	    class2	department1	    7       	6	    7	    0.75
223	 74	    class2	department2	    8       	6	    7	    0.75
124	 70	    class2	department1	    9       	7	    9	    1.0

LAG(col,n): the value of col from the row n positions before the current row

LEAD(col,n): the value of col from the row n positions after the current row

FIRST_VALUE: after sorting within the partition, the first value seen up to the current row

LAST_VALUE: after sorting within the partition, the last value seen up to the current row; for tied rows, the last row of the tie is used

NTILE(n): split the rows of each partition into n buckets and label every row with its bucket number

select  id
        ,score
        ,clazz
        ,department
        ,lag(id,2) over (partition by clazz order by score desc) as lag_num
        ,LEAD(id,2) over (partition by clazz order by score desc) as lead_num
        ,FIRST_VALUE(id) over (partition by clazz order by score desc) as first_v_num
        ,LAST_VALUE(id) over (partition by clazz order by score desc) as last_v_num
        ,NTILE(3) over (partition by clazz order by score desc) as ntile_num
from new_score;
Result:

id  score   clazz   department  lag_num lead_num  first_v_num last_v_num  ntile_num
114	 94	    class1	department1	  NULL	   213	    114	          213	      1
214	 94	    class1	department2	  NULL	   211	    114	          213	      1
213	 94	    class1	department2	  114	   115	    114	          213	      1
211	 93	    class1	department2	  214	   212	    114	          115	      1
115	 93	    class1	department1	  213	   215	    114	          115	      2
212	 83	    class1	department2	  211	   112	    114	          212	      2
215	 82	    class1	department2	  115	   113	    114	          215	      2
112	 80	    class1	department1	  212	   216	    114	          112	      2
113	 74	    class1	department1	  215	   111	    114	          216	      3
216	 74	    class1	department2	  112	   NULL	    114	          216	      3
111	 69	    class1	department1	  113	   NULL	    114	          111	      3
221	 99	    class2	department2	  NULL	   225	    221	          221	      1
122	 86	    class2	department1	  NULL	   224	    221	          122	      1
225	 85	    class2	department2	  221	   123	    221	          225	      1
224	 80	    class2	department2	  122	   222	    221	          224	      2
123	 78	    class2	department1	  225	   121	    221	          222	      2
222	 78	    class2	department2	  224	   223	    221	          222	      2
121	 74	    class2	department1	  123	   124	    221	          223	      3
223	 74	    class2	department2	  222	   NULL	    221	          223	      3
124	 70	    class2	department1	  121	   NULL	    221	          124	      3
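In the result above, last_v_num is usually the current row's own id (or the last id among tied scores) because the default window frame only extends to the current row. If what you want is the true last value of the whole partition, widen the frame explicitly; a minimal sketch:

select  id
        ,score
        ,clazz
        ,last_value(id) over (
            partition by clazz
            order by score desc
            rows between unbounded preceding and unbounded following
        ) as last_v_all
from new_score;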



Hive Row-to-Column Conversion (行转列)


lateral view explode

create table testArray2(
    food string,
    dish_name array<string>
)row format delimited 
fields terminated by '#'
COLLECTION ITEMS terminated by ',';

Data

鱼#红烧鱼,清蒸鱼,水煮鱼
鸡#白斩鸡,香酥鸡,黄焖鸡
select food,col1  from testarray2 lateral view explode(dish_name) t1 as col1;

Result

鱼	红烧鱼
鱼	清蒸鱼
鱼	水煮鱼
鸡	白斩鸡
鸡	香酥鸡
鸡	黄焖鸡

select key from (select explode(map('key1',1,'key2',2,'key3',3)) as (key,value)) t;

Result:

key1
key2
key3
select food,col1,col2  from testarray2 lateral view explode(map('key1',1,'key2',2,'key3',3)) t1 as col1,col2;

Result:

鱼	key1	1
鱼	key2	2
鱼	key3	3
鸡	key1	1
鸡	key2	2
鸡	key3	3
select food,pos,col1  from testarray2 lateral view posexplode(dish_name) t1 as pos,col1;

Result:

鱼	0	红烧鱼
鱼	1	清蒸鱼
鱼	2	水煮鱼
鸡	0	白斩鸡
鸡	1	香酥鸡
鸡	2	黄焖鸡




Hive Column-to-Row Conversion (列转行)


// testLieToLine: the same dishes stored one per row, in two columns food and dish_name

create table testLieToLine(
    food string,
    dish_name string
)row format delimited 
fields terminated by '#';

Data (one food#dish_name pair per line):

鱼#红烧鱼
鱼#清蒸鱼
鱼#水煮鱼
鸡#白斩鸡
鸡#香酥鸡
鸡#黄焖鸡


select food,collect_list(dish_name) from testLieToLine group by food;

// Result
鸡	["白斩鸡","香酥鸡","黄焖鸡"]
鱼	["红烧鱼","清蒸鱼","水煮鱼"]

select  t1.food
        ,collect_list(t1.col1) 
from (
    select  food
            ,dish_name 
    from testarray2 
    lateral view explode(dish_name) t1 as col1
) t1 group by t1.food;
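To get a single delimited string per group instead of an array, collect_list is usually wrapped in concat_ws (collect_set would additionally drop duplicates); a minimal sketch on the same table:

select  food
        ,concat_ws(',', collect_list(dish_name)) as dishes
from testLieToLine
group by food;

-- expected output along the lines of:
-- 鸡	白斩鸡,香酥鸡,黄焖鸡
-- 鱼	红烧鱼,清蒸鱼,水煮鱼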

Hive Custom Functions (User Defined Functions)

UDF: one value in, one value out

Maven dependency:

        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>1.2.1</version>
        </dependency>
import org.apache.hadoop.hive.ql.exec.UDF;

public class HiveUDF extends UDF {
    // hadoop => #hadoop$
    public String evaluate(String col1) {
        // prepend # and append $ to the incoming value
        String result = "#" + col1 + "$";
        return result;
    }
}
add jar /usr/local/soft/jars/HiveUDF2-1.0.jar;
create temporary function fxxx1 as 'HiveUDF';
select fxxx1(name) as fxx_name from students limit 10;
#施笑槐$
#吕金鹏$
#单乐蕊$
#葛德曜$
#宣谷芹$
#边昂雄$
#尚孤风$
#符半双$
#沈德昌$
#羿彦昌$

UDTF: one row in, multiple rows out

Given the input string "key1:value1,key2:value2,key3:value3", the goal is to output one row per key-value pair:

key1 value1

key2 value2

key3 value3

Method 1: explode + split

select split(t.col1,":")[0],split(t.col1,":")[1] 
from (select explode(split("key1:value1,key2:value2,key3:value3",",")) as col1) t;

Method 2: a custom UDTF

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

import java.util.ArrayList;

public class HiveUDTF extends GenericUDTF {
    // declare the output column names and types
    @Override
    public StructObjectInspector initialize(StructObjectInspector argOIs) throws UDFArgumentException {
        ArrayList<String> fieldNames = new ArrayList<String>();
        ArrayList<ObjectInspector> fieldObj = new ArrayList<ObjectInspector>();
        fieldNames.add("col1");
        fieldObj.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldNames.add("col2");
        fieldObj.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldObj);
    }

    // processing logic, e.g. my_udtf(col1,col2,col3)
    // "key1:value1,key2:value2,key3:value3"
    // my_udtf("key1:value1,key2:value2,key3:value3")
    public void process(Object[] objects) throws HiveException {
        // objects holds the N columns passed in
        String col = objects[0].toString();
        // key1:value1  key2:value2  key3:value3
        String[] splits = col.split(",");
        for (String str : splits) {
            String[] cols = str.split(":");
            // emit one output row
            forward(cols);
        }

    }

    // called once when the UDTF finishes
    public void close() throws HiveException {

    }
}
// after packaging the class into a jar, adding the jar and registering it as a temporary function my_udtf (the same steps shown below), it can be called like this:
select my_udtf("key1:value1,key2:value2,key3:value3");

Fields: id, col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12 (13 columns in total)

Data:

id,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12
a,1,2,3,4,5,6,7,8,9,10,11,12
b,11,12,13,14,15,16,17,18,19,20,21,22
c,21,22,23,24,25,26,27,28,29,30,31,32

Convert it into 3 columns: id, hours, value. Each of the 12 value columns represents a 2-hour slot, so hours runs 0时, 2时, ..., 22时.

For example, the row
a,1,2,3,4,5,6,7,8,9,10,11,12
becomes:

a,0时,1

a,2时,2

a,4时,3

a,6时,4

...and so on, up to a,22时,12.

create table udtfData(
    id string
    ,col1 string
    ,col2 string
    ,col3 string
    ,col4 string
    ,col5 string
    ,col6 string
    ,col7 string
    ,col8 string
    ,col9 string
    ,col10 string
    ,col11 string
    ,col12 string
)row format delimited fields terminated by ',';

Code:

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

import java.util.ArrayList;

public class HiveUDTF2 extends GenericUDTF {
    @Override
    public StructObjectInspector initialize(StructObjectInspector argOIs) throws UDFArgumentException {
        // declare the two output columns: hours and value
        ArrayList<String> fieldNames = new ArrayList<String>();
        ArrayList<ObjectInspector> fieldObj = new ArrayList<ObjectInspector>();
        fieldNames.add("hours");
        fieldObj.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldNames.add("value");
        fieldObj.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldObj);
    }

    public void process(Object[] objects) throws HiveException {
        // objects holds col1..col12; each column stands for a 2-hour slot: 0时, 2时, ..., 22时
        int hours = 0;
        for (Object obj : objects) {
            String col = obj.toString();
            ArrayList<String> cols = new ArrayList<String>();
            cols.add(hours + "时");
            cols.add(col);
            // emit one (hours, value) row per input column
            forward(cols);
            hours = hours + 2;
        }
    }

    public void close() throws HiveException {

    }
}

Add the jar resource:

add jar /usr/local/soft/HiveUDF2-1.0.jar;

Register the UDTF function:

create temporary function my_udtf as 'HiveUDTF2';

SQL:

select id,hours,value from udtfData lateral view my_udtf(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12) t as hours,value ;

UDAF: multiple rows in, one value out

UDAF (User Defined Aggregate Function) means a user-defined aggregate. The difference between an aggregate function and an ordinary function is that an ordinary function takes one row of input and produces one output, whereas an aggregate function takes a group of rows (usually many) and produces a single output, condensing the group into one value (many-to-one).
Writing one is fairly involved and rarely needed, so it is not covered here; look it up when you actually need it.
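The built-in aggregates (count, sum, avg, max, collect_list and so on) are themselves UDAFs, so before writing a custom one it is worth checking whether they already cover the need. A minimal sketch of the many-rows-in, one-value-out behaviour on the new_score table used above:

select  clazz
        ,count(*)   as cnt
        ,avg(score) as avg_score
        ,max(score) as max_score
from new_score
group by clazz;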

Source: https://blog.csdn.net/tiand7/article/details/120173234