( ) is a term for data sets that are so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, querying, updating, and information privacy.
A.Data market
B.Data warehouse
C.Big data
D.BI
Question 1:
( ) method is the use of a data processing system to represent selected behavioral (67) of a physical or abstract system, for example, the representation of air streams around airfoils at various velocities, temperatures, and air pressures with such a system. The emulation method is slightly different: it uses a data processing system to imitate another data processing system, so that the imitating system accepts the same data, executes the same programs, and achieves the same (68) as the imitated system. Emulation is usually achieved (69) hardware or firmware. In a network, for example, microcomputers might emulate terminals (70) communicate with a mainframe.
A.Assembly
B.Simultaneity
C.Fraud
D.Simulation
Question 2:
Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and (1) it into useful information: information that can be used to increase revenue, (2) costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It (3) users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.
Although data mining is a (4) new term, the technology is not. Companies have used powerful computers to sift through volumes of supermarket scanner data and analyze market research reports for years. However, continuous innovations in computer processing power, disk storage, and statistical software are dramatically increasing the accuracy of analysis while driving (5) the cost.
A.organizing
B.summarizing
C.composing
D.constituting
Question 3:
( ) are datasets that grow so large that they become awkward to work with using on-hand database management tools.
A.Data structures B.Relations C.Big data D.Metadata
Question 4:
Question 5:
Question 6:
Question 7:
( ) is a collection of data sets which is so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
Question 8:
Which two statements are true about Oracle ActiveCache?()
Question 9:
Tab canvas.
Pop up canvas.
Spread table canvas.
Vertical toolbar canvas.
Question 10:
CPU,because the database replication process requires a considerable amount of CPU time
Memory,because the buffer cache requires a large amount of memory to maintain the data in system memory
Disk,because it is important to configure a sufficient number of disks to match the CPU processing power
Network,because the data returned to the client from the server can be a large subset of the total database
Question 11:
it supports very large data-sets such as the result-sets from large search queries to be held in memory
it provides a set of management tools that enables automation of configuration
it provides enhanced visibility across the entire application infrastructure
it significantly increases the performance of Web-based applications with no code change
Question 12:
Voice traffic data flow involves large volumes of large packets.
Because a packet loss involves a small amount of data, voice traffic is less affected by packet losses than traditional data traffic is.
Voice carrier stream utilizes Real-Time Transport Protocol (RTP) to carry the audio/media portion of VoIP communication.
Voice packets are very sensitive to delay and jitter.
Voice packets are encapsulated in TCP segments to allow for proper sequencing during delivery.
Question 13:
( ) is a term for data sets that are so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, querying, updating, and information privacy.
A.Data market
B.Data warehouse
C.Big data
D.BI
Question 14:
( ) is a collection of data sets which is so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
A.Big data
B.Cluster
C.Parallel computing
D.Data warehouse
Question 15:
Traditional ( ) are organized by fields, records, and files.
A.documents B.data tables C.data sets D.databases
Question 16:
Question 17:
Question 18:
Question 19:
Which of the following is the correct configuration for a RAID 5 array?()
Question 20:
When the data is naturally tabular
When the number of nodes is volatile
When the data by nature has sparse attributes
When the data is of low volume and requires a complex star-schema topology
Question 21:
Voice carrier stream utilizes Real-Time Transport Protocol (RTP) to carry the audio/media portion of the VoIP communication.
Voice traffic data flow involves large volumes of large packets.
Because a packet loss involves a small amount of data, voice traffic is less affected by packet losses than traditional data traffic is.
Voice packets are encapsulated in TCP segments to allow for proper sequencing during delivery.
Voice packets are very sensitive to delay and jitter.
Question 22:
Upgrade the hardware/memory to accommodate the data.
Load the data into your database by using the PARALLEL clause.
Give analysts DBA privilege, so that they can query DBA_EXTERNAL_TABLES.
Use an external table so you can have the metadata available in your database, but leave the data in the operating system files.
Question 23:
when performing export and import using Oracle Data Pump
when performing backup and recovery operations using Oracle Recovery Manager
when performing batch processing and bulk loading operations in a data warehouse environment
in an online transaction processing (OLTP) system where a large number of client sessions are idle most of the time