Problems with Ambari and Hadoop encountered while installing Ambari
5. Problems encountered during installation
5.1 Running ambari-server start fails with ERROR: Exiting with exit code -1.
5.1.1 REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information
Solution:
Because this was a reinstall, the error appears when the database is initialized with /etc/init.d/postgresql initdb. You need to:
First uninstall PostgreSQL with yum -y remove postgresql*
Then delete everything under the /var/lib/pgsql/data directory
Then configure the PostgreSQL database again (follow section 1.6)
Then run the installation again (follow section 3)
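Putting the steps together, the cleanup looks roughly like the minimal sketch below, assuming a CentOS 6 host with the stock postgresql-server package (adjust to your environment before running):
# stop and remove the old PostgreSQL install
service postgresql stop
yum -y remove postgresql*
# wipe the stale data directory so the next initdb starts clean
rm -rf /var/lib/pgsql/data/*
# reinstall, then reinitialize (sections 1.6 and 3 cover the full setup)
yum -y install postgresql-server
/etc/init.d/postgresql initdb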
5.1.2 The log contains the following error: ERROR [main] AmbariServer:820 - Failed to run the Ambari Server
com.google.inject.ProvisionException: Guice provision errors:
1) Error injecting method, java.lang.NullPointerException
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)
while locating org.apache.ambari.server.controller.AmbariServer
1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)
Caused by: java.lang.NullPointerException
at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)
at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)
at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:119)
at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy26.create(Unknown Source)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)
5.2 Installing HDFS and HBase fails with /usr/hdp/current/hadoop-client/conf doesn't exist
5.2.1 The /etc/hadoop/conf symlink exists
This happens because /etc/hadoop/conf and /usr/hdp/current/hadoop-client/conf are symlinked to each other, forming a loop, so one of the links has to be repointed:
cd /etc/hadoop
rm -rf conf
ln -s /etc/hadoop/conf.backup /etc/hadoop/conf
HBase runs into the same problem; the fix is the same:
cd /etc/hbase
rm -rf conf
ln -s /etc/hbase/conf.backup /etc/hbase/conf
ZooKeeper runs into the same problem as well; the fix is the same:
cd /etc/zookeeper
rm -rf conf
ln -s /etc/zookeeper/conf.backup /etc/zookeeper/conf
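You can confirm it really is a symlink loop with readlink; if each conf link prints the other path, the two links form a cycle (a quick check, with paths as in this HDP layout):
ls -ld /etc/hadoop/conf /usr/hdp/current/hadoop-client/conf
readlink /etc/hadoop/conf
readlink /usr/hdp/current/hadoop-client/conf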
5.2.2 The /etc/hadoop/conf symlink does not exist
Comparing against a correct installation showed two directories missing, conf.backup and 2.4.0.0-169; copy those folders into /etc/hadoop.
Then recreate the conf link under /etc/hadoop:
cd /etc/hadoop
rm -rf conf
ln -s /usr/hdp/current/hadoop-client/conf conf
Problem solved.
5.3 Host confirmation (Confirm Hosts) fails with Ambari agent machine hostname (localhost) does not match expected ambari server hostname
During the Confirm Hosts step of the Ambari setup, a very strange problem kept recurring. The error was always:
Ambari agent machine hostname (localhost.localdomain) does not match expected ambari server hostname (xxx).
The fix was to modify the /etc/hosts file.
Before:
127.0.0.1 localhost dsj-kj1
::1 localhost dsj-kj1
10.13.39.32 dsj-kj1
10.13.39.33 dsj-kj2
10.13.39.34 dsj-kj3
10.13.39.35 dsj-kj4
10.13.39.36 dsj-kj5
After:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.13.39.32 dsj-kj1
10.13.39.33 dsj-kj2
10.13.39.34 dsj-kj3
10.13.39.35 dsj-kj4
10.13.39.36 dsj-kj5
It felt like the lookup was going through IPv6, which is odd, but after the change it worked. The real point is that the machine's hostname must no longer resolve to a loopback address.
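Before retrying Confirm Hosts, it helps to check what name each host will actually report; the agent relies on Python's FQDN lookup (an assumption for this Ambari version), so both of these should print the expected name, e.g. dsj-kj1:
hostname -f
python -c 'import socket; print(socket.getfqdn())'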
5.4 Reinstalling ambari-server
Remove it using the cleanup script.
Note that after removal the following system components must be installed:
yum -y install ruby*
yum -y install redhat-lsb*
yum -y install snappy*
For the installation itself, refer to section 3.
5.5 Configuring the Ambari MySQL connection
On the master node, copy the MySQL JDBC connector jar into /var/lib/ambari-server/resources and rename it mysql-jdbc-driver.jar:
cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
Then start Hive from the web UI.
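As an alternative to copying the jar by hand, Ambari releases of this era can register the driver through the setup CLI; a sketch (the jar path is whatever your MySQL connector package installed):
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.17.jar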
5.6 Host registration (Confirm Hosts) fails with Failed to start ping port listener of: [Errno 98] Address already in use
The agent's ping port is being held by another process.
Solution:
It turned out a df command had been running forever without completing:
[root@testserver1 ~]# netstat -lanp|grep 8670
tcp        0      0 0.0.0.0:8670        0.0.0.0:*        LISTEN      2587/df
[root@testserver1 ~]# kill -9 2587
After killing it, restart ambari-agent and the problem is solved:
[root@testserver1 ~]# service ambari-agent restart
Verifying Python version compatibility...
Using python /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility...
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent...
Starting ambari-agent
Verifying ambari-agent process status...
Ambari Agent successfully started
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
5.7 Host registration (Confirm Hosts) fails with The following hosts have Transparent Huge Pages (THP) enabled. THP should be disabled to avoid potential Hadoop performance issues
Solution:
Run the following on Linux:
echo never >/sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/defrag
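These echo commands only last until the next reboot. To make the change persistent, one common approach on RHEL/CentOS 6 (an assumption; distributions differ) is to append the same lines to /etc/rc.local:
cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
EOF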
5.8 Starting Hive fails with unicodedecodeerror ambari in position 117
Inspecting the /etc/sysconfig/i18n file showed the following:
LANG="zh_CN.UTF8"
The system character set had been set to Chinese. Change it to the following and the problem is solved:
LANG="en_US.UTF-8"
5.9 Installing Metrics fails with the errors below because the package cannot be found
1. failure: Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.
On the yum repository source server, run:
cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6
mkdir Updates-ambari-2.2.1.0
cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0
Then regenerate the repodata:
cd /var/www/html/ambari
rm -rf repodata
createrepo ./
2. failure: HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.
Delete mnt.repo from the /etc/yum.repos.d directory and clear the yum cache with yum clean all:
cd /etc/yum.repos.d
rm -rf mnt.repo
yum clean all
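After clearing the cache, it is worth confirming that the client can now resolve the package (a quick check; the package name comes from the error above):
yum makecache
yum list available ambari-metrics-monitor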
5.11 How to fix jps reporting process information unavailable
4791 -- process information unavailable
Solution:
Go into the /tmp directory:
cd /tmp
and delete the folders there named hsperfdata_{username}.
Then run jps again and the output is clean.
Script:
cd /tmp
ls -d hsperfdata_* | xargs rm -rf    # remove each hsperfdata_<user> directory by name
ls -d hsperfdata_*                   # verify: should now report No such file or directory
5.12 The namenode fails to start; the log file shows ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode
The log also contains java.net.BindException: Port in use: gmaster:50070
Caused by: java.net.BindException: Address already in use
The diagnosis: port 50070 was not released by the previous run and is still occupied.
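Before touching any kernel parameters, check whether a leftover process is still holding the port; if one is, killing it is the entire fix (same netstat pattern as in section 5.6):
netstat -lanp | grep 50070    # shows the pid/name of whatever holds the port
# kill -9 <pid>               # only if a stale process, rather than TIME_WAIT, is the holder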
Background on TCP connections in the TIME_WAIT state (as seen with netstat):
1. This is the state a connection sits in just before being fully closed;
2. It normally takes about 4 minutes (on Windows Server) for such a connection to close completely;
3. TCP connections in this state hold handles, ports, and other resources, and the server also spends resources maintaining them;
4. The remedy is to let the server recycle and reuse TIME_WAIT resources quickly. On Windows, edit the registry key [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters] and add the DWORD values TcpTimedWaitDelay=30 (30 is also Microsoft's recommended value; the default is 2 minutes) and MaxUserPort=65534 (valid range 5000-65534);
5. Further TCP/IP connection parameter configuration is covered in the Microsoft documentation: %28v=ws.10%29.aspx
6. On Linux:
vi /etc/sysctl.conf
Add the following:
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_keepalive_time=1800
net.ipv4.tcp_max_syn_backlog=8192
Apply the kernel parameters:
[root@web02 ~]# sysctl -p
readme:
net.ipv4.tcp_syncookies=1 enables SYN cookies, which protects a busy server when the SYN queue overflows.
net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1 enable reuse and fast recycling of TIME-WAIT sockets, which is very effective on web servers with large numbers of connections.
net.ipv4.tcp_fin_timeout=30 reduces the time spent in the FIN-WAIT-2 state, so the system can handle more connections.
net.ipv4.tcp_keepalive_time=1800 reduces the TCP keepalive probing interval.
net.ipv4.tcp_max_syn_backlog=8192 increases the TCP SYN queue length, so the system can handle more concurrent connection attempts.
5.13 Startup fails with resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh
The log contains the following:
2016-03-31 13:55:28,090 INFO security.ShellBasedIdMapping (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.
2016-03-31 13:55:28,096 INFO nfs3.WriteManager (WriteManager.java:<init>(92)) - Stream timeout is 600000ms.
2016-03-31 13:55:28,096 INFO nfs3.WriteManager (WriteManager.java:<init>(100)) - Maximum open streams is 256
2016-03-31 13:55:28,096 INFO nfs3.OpenFileCtxCache (OpenFileCtxCache.java:<init>(54)) - Maximum open streams is 256
2016-03-31 13:55:28,259 INFO nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:<init>(205)) - Configured HDFS superuser is
2016-03-31 13:55:28,261 INFO nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory /tmp/.hdfs-nfs
2016-03-31 13:55:28,269 WARN fs.FileUtil (FileUtil.java:deleteImpl(187)) - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
This shows that the hdfs user has no write permission on /tmp.
Grant the permission to the hdfs user:
chown hdfs:hadoop /tmp
Start it again and the problem is solved.
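Note that a stock /tmp is owned by root with mode 1777 (world-writable plus the sticky bit), which already lets hdfs write there; if the permissions on /tmp were tightened at some point, restoring the standard mode is a less invasive alternative to the chown:
chmod 1777 /tmp    # standard /tmp mode: world-writable with sticky bit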
5.14 Installing the Ranger component fails: the rangeradmin user cannot connect to the MySQL database and cannot be granted privileges
In the database, first drop all the rangeradmin users, making sure to use the DROP USER command:
drop user 'rangeradmin'@'%';
drop user 'rangeradmin'@'localhost';
drop user 'rangeradmin'@'gmaster';
drop user 'rangeradmin'@'gslave1';
drop user 'rangeradmin'@'gslave2';
FLUSH PRIVILEGES;
Then recreate the users (note: gmaster is the hostname of the machine where Ranger is installed):
CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%' with grant option;
CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost' with grant option;
CREATE USER 'rangeradmin'@'gmaster' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'gmaster' with grant option;
FLUSH PRIVILEGES;
Then check the users and their privileges:
SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;
select * from mysql.user where user='rangeradmin' \G;
Problem solved.
5.15 Ambari startup fails with: AmbariServer:820 - Failed to run the Ambari Server
This problem bothered me for a long time; I finally found the cause by reading the source code.
The file /var/log/ambari-server/ambari-server.log contains the error:
13 Apr 2016 14:16:01,723 INFO [main] StackDirectory:458 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain an upgrade directory
13 Apr 2016 14:16:01,723 INFO [main] StackDirectory:468 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain config upgrade pack file
13 Apr 2016 14:16:01,744 INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS/role_command_order.json
13 Apr 2016 14:16:01,840 INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.4/role_command_order.json
13 Apr 2016 14:16:01,927 ERROR [main] AmbariServer:820 - Failed to run the Ambari Server
com.google.inject.ProvisionException: Guice provision errors:
1) Error injecting method, java.lang.NullPointerException
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)
while locating org.apache.ambari.server.controller.AmbariServer
1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)
Caused by: java.lang.NullPointerException
at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)
at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)
at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:119)
at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy26.create(Unknown Source)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)
at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke()
at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)
at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)
at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
... 2 more
Solution:
The cause turned out to be the os line in the file /var/lib/ambari-server/resources/stacks/HDP/2.4/repos/repoinfo.xml: its original content was wrong and had to be corrected (the before and after XML snippets were lost from the original post).
With that line fixed, the problem was solved.
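For reference, the os entries in repoinfo.xml wrap the repository definitions for each OS family. The sketch below is illustrative only, not the author's actual change, and the baseurl is a placeholder for a local mirror:
<os family="redhat6">
  <repo>
    <baseurl>http://your-mirror/HDP/centos6/2.x/updates/2.4.0.0</baseurl>
    <repoid>HDP-2.4</repoid>
    <reponame>HDP</reponame>
  </repo>
</os>
The NullPointerException at StackModule.processRepositories in the trace above is thrown while this repository metadata is processed, which is why a malformed os line keeps the server from starting.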
5.16 Starting Hive fails with Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)
On the machine where Hive is installed, the logs under /var/lib/ambari-agent/data report the following error:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
hive(name = 'metastore')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:mysql://a2slave1/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 1.2.1000
Initialization script hive-schema-1.2.1000.mysql.sql
Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
*** schemaTool failed ***
Solution:
From the machine where Hive is installed, copy the script hive-schema-1.2.1000.mysql.sql to the MySQL host:
[root@a2master /]# scp /usr/hdp/2.4.0.0-169/hive/scripts/metastore/upgrade/mysql/hive-schema-1.2.1000.mysql.sql root@a2slave1:/usr/local/mysql
hive-schema-1.2.1000.mysql.sql 100% 34KB 34.4KB/s 00:00
Log in to MySQL as the hive user, switch to the hive database, and run the script:
[root@a2slave1 conf.server]# mysql -uhive -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 505
Server version: 5.6.26-log MySQL Community Server (GPL)
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use hive;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> source /usr/local/mysql/hive-schema-1.2.1000.mysql.sql;
Problem solved.
5.17 Starting Hive fails with Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
The log file hiveserver.log under /var/log/hive records:
2016-04-15 10:45:20,446 INFO [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(405)) - Starting HiveServer2
2016-04-15 10:45:20,573 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called
2016-04-15 10:45:20,585 INFO [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(140)) - Using direct SQL, underlying DB is MYSQL
2016-04-15 10:45:20,585 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(277)) - Initialized ObjectStore
2016-04-15 10:45:20,590 WARN [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException
2016-04-15 10:45:20,591 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000
2016-04-15 10:45:20,600 WARN [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultDB(623)) - Retrying creating default database after error: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
NestedThrowablesStackTrace:
Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2016-04-15 10:45:20,607 WARN [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException
2016-04-15 10:45:20,609 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000
2016-04-15 10:45:20,617 INFO [main]: server.HiveServer2 (HiveServer2.java:stop(371)) - Shutting down HiveServer2
2016-04-15 10:45:20,618 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(442)) - Error starting HiveServer2 on attempt 29, will retry in 60 seconds
java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)
at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:494)
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
... 12 more
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
... 20 more
Caused by: javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
NestedThrowables:
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:625)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 24 more
Caused by: org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
... 39 more
It finally turned out that the MySQL binlog_format parameter was set incorrectly: it had been STATEMENT and needed to be MIXED.
To change it, add binlog_format=MIXED to the /etc/my.cnf file and restart the MySQL database.
Problem solved.
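Concretely, the change looks like the sketch below; the SET GLOBAL form applies it without a restart but only to sessions opened afterwards, so the my.cnf entry is still what makes it stick:
# /etc/my.cnf, in the [mysqld] section:
#   binlog_format=MIXED
# or apply it live from the mysql client (new sessions only):
mysql -uroot -p -e "SET GLOBAL binlog_format = 'MIXED';"
service mysqld restart    # when editing my.cnf; the service may be named mysql on some systems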
5.18 Entering Hive with the hive command fails with Permission denied: user=root, access=WRITE, inode="/user/root":hdfs:hdfs:drwxr-xr-x
Solution:
1. Use the HDFS command-line interface to change the permissions on the directory in question: hadoop fs -chmod 777 /user, where /user is the path being written to. This varies by case: if the file being uploaded is hdfs://namenode/user/xxx.doc, the command works as-is; if it is hdfs://namenode/java/xxx.doc, run hadoop fs -chmod 777 /java (after first creating the /java directory in HDFS), or hadoop fs -chmod 777 / to adjust permissions on the root directory.
Script:
su - hdfs
hadoop fs -chmod 777 /user
2. Add export HADOOP_USER_NAME=hdfs to /etc/profile as a system environment variable (or set it as a Java JVM variable); the Hadoop user Ambari uses is hdfs. The exact value depends on your setup: it is the Linux user name that operations on Hadoop will run as from then on.
export HADOOP_USER_NAME=hdfs
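A narrower alternative, assuming what root actually needs is its own HDFS home directory, is to create /user/root with the right owner instead of opening /user to everyone:
su - hdfs
hdfs dfs -mkdir -p /user/root
hdfs dfs -chown root:root /user/root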
5.19 The Spark Thrift Server component shuts itself down after starting
The Spark Thrift Server component stops right after it starts. The log file spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-a2master.out under /var/log/spark contains:
16/04/18 10:26:10 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/04/18 10:26:10 INFO Client: Requesting a new application from cluster with 3 NodeManagers
16/04/18 10:26:10 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (512 MB per container)
16/04/18 10:26:10 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (512 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:56)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:76)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/04/18 10:26:11 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
Analysis: the memory available to Spark on YARN is too small; the executor requests 1024+384 MB, but the cluster allows at most 512 MB per container.
Solution: in the web UI, raise the memory to 1536 MB (the yarn.scheduler.maximum-allocation-mb / yarn.nodemanager.resource.memory-mb settings named in the error), push the updated configuration to the other machines, and restart the Spark service.
5.20 HBase fails to start with Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461130860883":hdfs:hdfs:drwxr-xr-x
The log contains the following:
2016-04-20 15:42:11,640 INFO [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: Allocating LruBlockCache size=401.60 MB, blockSize=64 KB
2016-04-20 15:42:11,648 INFO [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=433080, freeSize=420675048, maxSize=421108128, heapSize=433080, minSize=400052704, minFactor=0.95, multiSize=200026352, multiFactor=0.5, singleSize=100013176, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-04-20 15:42:11,704 INFO [regionserver/gslave2/192.168.1.253:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-04-20 15:42:11,729 INFO [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: STOPPED: Failed initialization
2016-04-20 15:42:11,729 ERROR [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Failed init
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:488)
at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)
at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)
at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:179)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)
... 15 more
2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: ABORTING region server gslave2,16020,1461138130424: Unhandled: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:488)
at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)
at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)
at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:179)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)
... 15 more
2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2016-04-20 15:42:11,744 INFO [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Dump of metrics as JSON on abort: {
Solution:
su - hdfs
hdfs dfs -chown -R hbase:hbase /apps/hbase
From the ITPUB blog, link: http://blog.itpub.net/7490392/viewspace-2084847/. Please credit the source when reposting; otherwise legal liability will be pursued.