Search results for: tide data
25101 Women Hashtactivism: Civic Engagement in Saudi Arabia
Authors: Mohammed Ibahrine
Abstract:
One of the prominent trends in the Saudi digital space in recent years is the boom in the use of social networking sites such as Facebook, YouTube, and Twitter. As of 2016, Twitter had over six million users in Saudi Arabia. In the wake of the recent political instability in the Arab region, digital platforms have gained importance for both personal and professional purposes. A conspicuously observable tide of social activism has risen, with Twitter playing an increasingly important role. One of the activists' primary goals is to enforce the logic of public visibility, social mobility, and civic participation in Saudi society. Saudi women use Twitter to disseminate specific and relevant information and to promote a social agenda that has remained unrecognized and invisible in the mainstream media and thus in the public sphere. The question is to what extent Twitter empowers Saudi women or reinforces their social immobility and invisibility. This paper focuses on three kinds of empowerment through Twitter in the religiously conservative and socially patriarchal Saudi society. It traces and analyses how Saudi female hashtactivism is increasingly becoming a site of struggle over visibility, mobility, control, and civic participation. The underlying thesis is that Twitter contributes to the development of participatory culture, especially in the lives of women.
Keywords: civic, hashtactivism, Saudi Arabia, Twitterverse
Procedia PDF Downloads 323
25100 The Circularity of Re-Refined Used Motor Oils: Measuring Impacts and Ensuring Responsible Procurement
Authors: Farah Kanani
Abstract:
Blue Tide Environmental is a company focused on developing a network of used motor oil recycling facilities across the U.S. It initiated the redesign of its recycling plant in Texas and aimed to establish an updated carbon footprint of re-refined used motor oils compared with an equivalent product derived from virgin stock that is not re-refined. The aim was to quantify the emissions savings of a circular alternative to the conventional end-of-life combustion of used motor oil (UMO). To do so, the company mandated an ISO-compliant carbon footprint, utilizing complex models requiring geographical and temporal accuracy to accommodate the U.S. refinery market. The quantification of linear and circular flows, proxies for fuel substitution, and system expansion for multi-product outputs were all critical methodological choices and were tested through sensitivity analyses. The re-refined system consists of continuous recycling of UMO; thus, end-of-life is considered non-existent. The unique perspective of this work is a life cycle, i.e., holistic, one: using this example, it demonstrates how a cradle-to-cradle model can be used to quantify a comparative carbon footprint. The intended audience is lubricant manufacturers as consumers, motor oil industry professionals, and other industry members interested in performing cradle-to-cradle modeling.
Keywords: circularity, used motor oil, re-refining, systems expansion
Procedia PDF Downloads 31
25099 Comparison the Energy Consumption with Sustainability in Campus: Case Study of Four American Universities
Authors: Bifeng Zhu, Zhekai Wang, Chaoyang Sun, Bart Dewancker
Abstract:
Under the global tide of promoting sustainable development, American universities have long been committed to sustainable practice and innovation: not only has their sustainable campus construction been at the forefront worldwide, but they have also developed STARS (The Sustainability Tracking, Assessment & Rating System), which is widely used and highly recognized around the world. At the same time, in the process of global sustainable campus construction, the energy problem is often regarded as one of the most important sustainable aspects, sometimes even treated as equivalent to the sustainability of the campus. Therefore, the relationship between campus energy and sustainability is worth discussing. In this study, four American universities with the highest rating level under STARS are selected as examples to compare and analyze campus energy consumption, the use of new energy, GHG emissions, and the overall sustainability of the campus, in order to explore the relationship between campus energy and sustainable construction. It is found that the advantages of sustainable campus construction in the United States are mainly concentrated in the "software" of management, education, activities, etc. Although different energy-saving measures have been taken regarding campus energy, the construction results differ considerably. Moreover, as an important aspect of a sustainable campus, energy cannot fully represent the sustainability of the campus, but because of the various measures it involves, it can greatly promote the sustainable construction of the whole campus. These measures and construction experiences are worthy of summary and promotion, and they have positive reference significance for other universities and even communities around the world.
Keywords: sustainable campus, energy consumption, STARS assessment, GHG emissions
Procedia PDF Downloads 275
25098 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity
Authors: Hoda A. Abdel Hafez
Abstract:
Mining big data represents a big challenge nowadays. Much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold: it represents a big opportunity for maximizing revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them.
Keywords: mining big data, big data, machine learning, telecommunication
Procedia PDF Downloads 409
25097 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach
Authors: Theertha Chandroth
Abstract:
This paper presents a comparative study of how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore various techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
Keywords: XML, JSON, data comparison, integration testing, Python, SQL
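As an illustrative sketch only (not taken from the paper), the following Python snippet shows one simple way a test could check that a JSON payload and an XML payload carry the same fields, using only the standard library. The normalization ignores XML attributes and repeated elements, and the sample payloads are hypothetical.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively convert an XML element into a plain dict for comparison."""
    children = list(element)
    if not children:
        return element.text.strip() if element.text else ""
    return {child.tag: xml_to_dict(child) for child in children}

def payloads_match(json_text, xml_text):
    """Return True when the JSON document and the XML document carry the same fields."""
    json_data = json.loads(json_text)
    xml_root = ET.fromstring(xml_text)
    return json_data == {xml_root.tag: xml_to_dict(xml_root)}

# Example assertion as it might appear in a test case
assert payloads_match('{"user": {"id": "7", "name": "Ada"}}',
                      "<user><id>7</id><name>Ada</name></user>")
```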
Procedia PDF Downloads 140
25096 Using Machine Learning Techniques to Extract Useful Information from Dark Data
Authors: Nigar Hussain
Abstract:
Dark data is a subset of big data: data that an organization collects but fails to use for future decisions. There are many issues in existing work, and powerful tools and sufficient techniques are needed to utilize dark data, techniques that enable users to exploit its strengths in adaptability, speed, reduced time consumption, execution, and accessibility. Another issue is how to utilize dark data to extract helpful information and make better choices. In this paper, we propose strategies to remove the dark side from dark data. Using a supervised model and machine learning techniques, we utilized dark data and achieved an F1 score of 89.48%.
Keywords: big data, dark data, machine learning, heatmap, random forest
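The abstract does not describe the pipeline in detail; the sketch below only illustrates the general pattern of training a random forest classifier and reporting an F1 score with scikit-learn, on synthetic stand-in data rather than the authors' dark-data set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labelled extract of dark data; real features are unspecified.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("F1 score:", f1_score(y_test, model.predict(X_test)))
```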
Procedia PDF Downloads 28
25095 Population Dynamics of Juvenile Dusky Groupers, Epinephelus Marginatus: "Lowe, 1834" From Two Sites in Terceira Island, Azores, Portugal
Authors: Regina Streltsov
Abstract:
The Archipelago of the Azores in the NE Atlantic is a hot spot of marine biodiversity, both pelagic and demersal. Epinephelus marginatus is a solitary species commonly observed in these waters, with distinct territorial/residential behaviors from the post-larval and juvenile stages to the adult phase. Being commercially high-valued species, about 13% of all groupers (Family Epinephelidae) face increasing pressure that has produced known impacts on both the abundance and distribution of this group of fishes. Epinephelus marginatus is currently assessed by the IUCN as a vulnerable species. Dusky groupers inhabit rocky bottoms from shallow waters down to 200 m, and juveniles are usually found in shallow shoreline waters. The population dynamics of juveniles can lead to a better understanding of competition for resources and predation, and of further conservation measures that must be taken for dusky groupers. This study is carried out on rocky reefs in two sheltered bays on the south and north coasts of the island, in two different spots with four sampling sites in total. Using transects, individuals are counted at the peak of high tide, and all abiotic factors are recorded. Our goal is to complete a statistically significant number of observations in order to describe these populations and better understand their dynamics and size.
Keywords: Azores, dusky groupers, Epinephelus marginatus, population dynamics
Procedia PDF Downloads 157
25094 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form and in source because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues that need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata must be referred to and read whenever an application needs to access, manipulate, and display the data, and uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
Procedia PDF Downloads 393
25093 Reviewing Privacy Preserving Distributed Data Mining
Authors: Sajjad Baghernezhad, Saeideh Baghernezhad
Abstract:
Nowadays, given the human involvement in ever-increasing data production, methods such as data mining for extracting knowledge are unavoidable. One issue in data mining is the inherently distributed nature of the data: the parties creating or receiving such data usually belong to corporate or non-corporate persons who do not give their information freely to others, and there is no guarantee that one can mine particular data without intruding on the owner's privacy. Sending data and then gathering it through vertically or horizontally partitioned approaches depends on the type of privacy preservation required, and such approaches are also executed to improve data privacy. In this study, we attempted a comprehensive comparison of privacy-preserving data mining methods, examining general methods such as data randomization and coding, along with the strong and weak points of each.
Keywords: data mining, distributed data mining, privacy protection, privacy preserving
Procedia PDF Downloads 525
25092 The Right to Data Portability and Its Influence on the Development of Digital Services
Authors: Roman Bieda
Abstract:
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018 and will create a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller, in a structured, commonly used and machine-readable format, and to transmit this data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also facilitate changing the provider of services (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.
Keywords: data portability, digital market, GDPR, personal data
Procedia PDF Downloads 473
25091 Recent Advances in Data Warehouse
Authors: Fahad Hanash Alzahrani
Abstract:
This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouses and data mining techniques, which involve software, hardware, data mining algorithms, and visualisation techniques with features common to the specific problems and tasks of their implementation.
Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing
Procedia PDF Downloads 404
25090 How to Use Big Data in Logistics Issues
Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy
Abstract:
Big Data stands for today's cutting-edge technology. As the technology becomes widespread, so does the data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what big data is and how it is used in both military and commercial logistics.
Keywords: big data, logistics, operational efficiency, risk management
Procedia PDF Downloads 641
25089 Effects of UV-B Radiation on the Growth of Ulva Pertusa Kjellman Seedling
Authors: HengJiang Cai, RuiJin Zhang, JinSong Gui
Abstract:
Enhanced UV-B (280-320 nm) radiation resulting from ozone depletion is one of the global environmental problems. The effects of enhanced UV-B radiation on marine macro-algae are expected to be greatest in shallow intertidal environments because the macro-algae are often at or above the water surface during low tide. Ulva pertusa Kjellman belongs to Chlorophyta (phylum), Ulvales (order), Ulvaceae (family); it is widely distributed along the western Pacific coast, and its resources are extremely rich in China. Therefore, the effects of UV-B radiation on the growth of Ulva pertusa seedlings were studied in this research. Ulva pertusa seedling appearances were mainly characterized by rod shapes and tadpole shapes, with the percentage of rod shapes at 90.68%±2.50%. UV-B radiation could inhibit the growth of Ulva pertusa seedlings, and the growth inhibition became more significant with increased doses of UV-B radiation treatment. The relative inhibition rates of Ulva pertusa seedling length were 16.11%, 24.98%, and 39.04%, respectively, on the 30th day at different doses (30.96, 61.92, and 123.84 J m-2 d-1) of UV-B radiation. Ulva pertusa seedlings also exhibited mortality under UV-B radiation, and the death rates increased with increased doses of UV-B radiation treatment. The physiology and biochemistry of Ulva pertusa seedlings were affected by UV-B radiation treatment: SOD (superoxide dismutase) activity increased at a low dose of UV-B radiation (30.96 J m-2 d-1) but decreased at high doses (61.92 and 123.84 J m-2 d-1), while UV-B radiation inhibited CAT (catalase) activity throughout. It is speculated that the growth inhibition and death of Ulva pertusa seedlings were caused by excess ROS (reactive oxygen species) produced by UV-B radiation.
Keywords: growth, physiology and biochemistry, Ulva pertusa Kjellman, UV-B radiation
Procedia PDF Downloads 281
25088 Marine Phytoplankton and Zooplankton from the North-Eastern Bay of Bengal, Bangladesh
Authors: Mahmudur Rahman Khan, Saima Sharif Nilla, Kawser Ahmed, Abdul Aziz
Abstract:
The marine phytoplankton and zooplankton of the extreme north-eastern part of the Bay of Bengal, off the coast of Bangladesh, have been studied. The relative occurrence of phytoplankton and zooplankton, their relationship with the physico-chemical conditions of the water (e.g. temperature, salinity, dissolved oxygen, carbonate, phosphate, and sulphate), and Shannon-Wiener diversity indices were also studied. The phytoplankton communities were represented by 25 genera with 69 species of Bacillariophyceae, 5 genera with 12 species of Dinophyceae, and 6 genera with 16 species of Chlorophyceae. A total of 24 genera with 25 species belonging to Protozoa, Coelenterata, Chaetognatha, Nematoda, Cladocera, Copepoda, and Decapoda were recorded. In addition, phytoplankton made up on average 80% of all collections whereas zooplankton made up 20%, a phytoplankton-to-zooplankton ratio of about 4:1. The total numbers of plankton individuals per litre were generally higher during low tide than during high tide. Shannon-Wiener diversity indices were highest (3.675 for phytoplankton and 3.021 for zooplankton) in the north-eastern part and lowest (1.516 for phytoplankton and 1.302 for zooplankton) in the south-eastern part of the study area. Principal Component Analysis (PCA) showed the relationship between pH and some species of phytoplankton and zooplankton: all diatoms and copepods showed a positive correlation with pH, and dinoflagellates showed a negative correlation.
Keywords: plankton presence, Shannon-Wiener diversity index, principal component analysis, Bay of Bengal
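For reference, the Shannon-Wiener index referred to here is commonly computed as H' = -sum(p_i * ln p_i), where p_i is the proportion of individuals belonging to species i. A minimal Python illustration, with hypothetical counts rather than the study's data:

```python
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over species with non-zero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical counts of individuals per taxon at one sampling station
print(round(shannon_wiener([120, 85, 60, 44, 30, 12, 5]), 3))
```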
Procedia PDF Downloads 660
25087 Implementation of an IoT Sensor Data Collection and Analysis Library
Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee
Abstract:
Due to the development of information technology and wireless Internet technology, various data are being generated in many fields. These data are advantageous in that they provide real-time information to the users themselves. However, when the data are accumulated and analyzed, far more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and sensor data can be collected directly by using database application tools such as MySQL. These directly collected data can be used for various research purposes and are useful as data for data mining. However, there are many difficulties in using such boards to collect data, especially when the user is not a computer programmer or is using them for the first time. Even when data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data
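As a rough sketch of the kind of analysis the keywords point to (k-means and DBSCAN clustering of collected sensor readings), the following scikit-learn example uses hypothetical temperature/humidity samples; it is not the library described in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical readings: each row is (temperature, humidity) from one sensor sample.
readings = np.array([[21.5, 40.1], [21.7, 39.8], [22.0, 41.0],
                     [30.2, 55.3], [30.5, 54.9], [29.8, 56.1],
                     [45.0, 20.0]])  # last row looks like an outlier

scaled = StandardScaler().fit_transform(readings)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
dbscan_labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(scaled)  # -1 marks noise

print("k-means:", kmeans_labels)
print("DBSCAN :", dbscan_labels)
```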
Procedia PDF Downloads 378
25086 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity and come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing across the entire government to deliver (big) data-related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of actors, roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we borrowed ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.
Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review
Procedia PDF Downloads 162
25085 Government Big Data Ecosystem: A Systematic Literature Review
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Data that is high in volume, velocity, and veracity and comes from a variety of sources is generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystems literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data.
Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, egovernment, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review
Procedia PDF Downloads 228
25084 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, only about 1% of generated data is ever actually analyzed for value creation. There is a data gap in which data is not explored, due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm given the purpose of their application.
Keywords: data analytics, industrial engineering, machine learning, value creation
Procedia PDF Downloads 168
25083 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we generate a lot of data and need specific devices to store it all. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing the data, which provides more security to the data. The data can be accessed over the internet from anywhere in the world. We implemented all of this in Java using the NetBeans IDE. Once a user uploads the data, they no longer have any rights to change it. Uploaded files are stored in the cloud with the system time as the file name, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
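The abstract gives no implementation details beyond the use of AES and the 2 MB limit; the following Python sketch (using the third-party cryptography package rather than the authors' Java/NetBeans code) only illustrates one plausible way to enforce the size limit and encrypt an uploaded file with AES in CBC mode. Key management, file naming, and FTP transfer are omitted, and all names here are hypothetical.

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MAX_SIZE = 2 * 1024 * 1024  # 2 MB upload limit mentioned in the abstract

def encrypt_upload(data: bytes, key: bytes) -> bytes:
    """Reject oversized uploads, then return IV + AES-CBC ciphertext of the padded data."""
    if len(data) > MAX_SIZE:
        raise ValueError("file exceeds 2 MB limit")
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(data) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()

key = os.urandom(32)  # 256-bit AES key, held by the cloud service in practice
ciphertext = encrypt_upload(b"example file contents", key)
print(len(ciphertext), "bytes stored")
```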
Procedia PDF Downloads 206
25082 Sea Level Rise and Implications for Low-lying areas: Coastal Evolution and Impact of Future Sea Level Rise Scenarios in Mirabello Gulf - NE Crete
Authors: Maria Kazantzaki, Evangelos Tsakalos, Eleni Filippaki, Yannis Bassiakos
Abstract:
Mediterranean areas are characterized by intense seismic and volcanic activity as well as eustatic changes, which result in the creation of particularly vulnerable coastal zones. The most vulnerable are low-lying coastal areas, whose geomorphological evolution is highly affected by natural processes and anthropogenic interventions. Therefore, assessing the changes that take place along coastal zones is of great importance in order to enable the development of integrated coastal management plans. A characteristic case is the Gulf of Mirabello in NE Crete, where intense coastal erosion, in combination with the tectonic subsidence of the area, threatens a large part of the coastal zone, resulting in direct socio-economic impacts. The present study assesses the temporal geomorphological changes that have taken place in the coastal zone of the Mirabello Gulf to provide a clear picture of the coastal zone's evolution over time, and performs a vulnerability assessment based on the coastal vulnerability index (CVI) methodology of Thieler and Hammar-Klose, considering geological features, coastal slope, relative sea-level change, shoreline erosion/accretion rates, mean significant wave height, and mean tide range in the area. In light of this, an impact assessment based on three different sea level rise scenarios is also performed and presented.
Keywords: coastal vulnerability index, coastal erosion, GIS, sea level rise
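For context, the Thieler and Hammar-Klose formulation is commonly written as CVI = sqrt((a*b*c*d*e*f)/6), where a through f are 1-5 rankings of the six variables listed in the abstract. A minimal sketch, with hypothetical rankings rather than values from this study:

```python
import math

def coastal_vulnerability_index(geomorphology, slope, sea_level_change,
                                erosion_rate, tide_range, wave_height):
    """Each argument is a 1-5 ranking of that variable for one shoreline segment."""
    product = (geomorphology * slope * sea_level_change *
               erosion_rate * tide_range * wave_height)
    return math.sqrt(product / 6.0)

# Hypothetical rankings for one segment of the Mirabello coast
print(round(coastal_vulnerability_index(4, 3, 4, 3, 2, 2), 2))
```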
Procedia PDF Downloads 171
25081 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that exploits data to produce information and knowledge systematically, and it can support the decision-making process. Two methods used in business intelligence are the data warehouse and data mining. A data warehouse can store historical data derived from transactional data; for data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns from the data and gain insight from it. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to customers' usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, using the RFM (Recency, Frequency and Monetary) model as the input variables. For all data mining steps, we use IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
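As an illustrative sketch only (the paper uses IBM SPSS Modeler, not Python), RFM-based segmentation with k-means can be outlined as follows; the customer table and the choice of three clusters are hypothetical.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer RFM table derived from the data warehouse
rfm = pd.DataFrame({
    "recency_days": [5, 40, 200, 3, 90, 15],
    "frequency":    [30, 10, 2, 45, 5, 22],
    "monetary":     [300.0, 120.0, 15.0, 410.0, 60.0, 250.0],
})

scaled = StandardScaler().fit_transform(rfm)  # put R, F, M on comparable scales
rfm["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(rfm)
```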
Procedia PDF Downloads 483
25080 A Spatio-Temporal Analysis and Change Detection of Wetlands in Diamond Harbour, West Bengal, India Using Normalized Difference Water Index
Authors: Lopita Pal, Suresh V. Madha
Abstract:
Wetlands are areas of marsh, fen, peatland or water, whether natural or artificial, permanent or temporary, with water that is static or flowing, fresh, brackish or salt, including areas of marine water the depth of which at low tide does not exceed six metres. The rapidly expanding human population, large-scale changes in land use/land cover, burgeoning development projects, and improper use of watersheds have all caused a substantial decline of wetland resources in the world. Major degradation has resulted from agricultural, industrial, and urban developments, leading to various types of pollution and hydrological perturbations. Regular fishing activities and unsustainable grazing of animals are also degrading the wetlands at a slow pace. The paper focuses on the spatio-temporal change detection of the area of the water body and the main cause of its depletion. The total area under study (22°19'87'' N, 88°20'23'' E) is a wetland region of 213 sq. km in West Bengal. The procedure used is the Normalized Difference Water Index (NDWI), derived from multi-spectral Landsat imagery to detect the presence of surface water, and the datasets for the years 2016, 2006, and 1996 have been compared. The result shows a sharp decline in the area of the water body due to a rapid increase in agricultural practices and growing urbanization.
Keywords: spatio-temporal change, NDWI, urbanization, wetland
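The NDWI referred to here is commonly computed as (Green - NIR) / (Green + NIR). The NumPy sketch below shows the per-date computation for change detection; the 30 m pixel size and the zero water threshold are assumptions, not values stated in the abstract.

```python
import numpy as np

def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR); positive values generally indicate water."""
    green = green.astype("float64")
    nir = nir.astype("float64")
    denom = green + nir
    out = np.zeros_like(denom)
    np.divide(green - nir, denom, out=out, where=denom != 0)
    return out

def water_area_km2(green, nir, pixel_size_m=30, threshold=0.0):
    """Water extent for one date; differencing two dates gives the change in extent."""
    mask = ndwi(green, nir) > threshold
    return mask.sum() * (pixel_size_m ** 2) / 1e6
```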
Procedia PDF Downloads 283
25079 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge facing bioinformaticians, due to the complexity of applying statistical and machine learning techniques. The challenge is doubled if the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing data. One of the most important data analysis processes on a microarray data set is feature selection, which finds the most important genes that affect a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
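The abstract does not name the specific imputation or selection methods; the sketch below only illustrates the general pattern (impute missing expression values, then rank genes) using scikit-learn's KNNImputer and SelectKBest on a tiny hypothetical matrix.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import KNNImputer

# Hypothetical expression matrix: rows are samples, columns are genes, NaN marks missing spots.
X = np.array([[2.1, np.nan, 0.3, 5.5],
              [1.9, 4.0, np.nan, 5.1],
              [0.2, 3.8, 2.9, 0.9],
              [0.4, np.nan, 3.1, 1.1]])
y = np.array([1, 1, 0, 0])  # disease vs. control labels

X_imputed = KNNImputer(n_neighbors=2).fit_transform(X)          # fill missing values
selector = SelectKBest(score_func=f_classif, k=2).fit(X_imputed, y)  # rank genes
print("selected gene indices:", selector.get_support(indices=True))
```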
Procedia PDF Downloads 574
25078 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework
Authors: Lutful Karim, Mohammed S. Al-kahtani
Abstract:
Sensors are used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data, so aggregating and filtering sensor data is significantly important in designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that the PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
Keywords: big data, clustering, tree topology, data aggregation, sensor networks
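The PDDA scheme itself is not specified in the abstract; the toy sketch below only illustrates the general idea of priority-based aggregation at a sensor node, where urgent readings are forwarded immediately while routine readings are buffered and sent as one aggregate to save transmissions. The threshold, batch size, and class name are all hypothetical.

```python
import statistics

class PriorityAggregator:
    """Toy sensor-node aggregator: urgent readings bypass the buffer, routine
    readings are averaged and emitted as a single aggregate packet."""

    def __init__(self, batch_size=5, urgent_threshold=50.0):
        self.batch_size = batch_size
        self.urgent_threshold = urgent_threshold
        self.buffer = []

    def ingest(self, reading):
        if reading >= self.urgent_threshold:      # high priority: forward at once
            return ("urgent", reading)
        self.buffer.append(reading)               # low priority: aggregate later
        if len(self.buffer) == self.batch_size:
            aggregate = statistics.mean(self.buffer)
            self.buffer.clear()
            return ("aggregate", aggregate)
        return None                               # still buffering, nothing sent

node = PriorityAggregator()
for value in [12.0, 13.5, 55.2, 11.8, 12.4, 12.9]:
    packet = node.ingest(value)
    if packet:
        print(packet)
```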
Procedia PDF Downloads 345
25077 Secret Agents in the Azores during the Second World War and the Impact of Espionage on Portuguese-British Relations
Authors: Marisa Galiza Filipe
Abstract:
In 1942, at the height of the Second World War, Roosevelt and Churchill planned to occupy the Azores to establish air and naval bases. The islands' privileged position in the middle of the Atlantic made them a strategic location for both the Axis and the Allies. For the Germans, occupying the islands was also a strategic way to launch an attack on the United States of America; for the British and Americans, the islands were the perfect spot from which to counter the German sinking of British boats and submarines. Salazar avoided the concession of the islands until 1943, claiming, on the one hand, the policy of neutrality, a decision made in agreement with England, and, on the other hand, the reaffirmation of Portuguese sovereignty over the territory. Aware of the constant changes and supported by a network of informers on the islands, the German and British spies played a crucial role in the negotiations between Portugal and the Allies and in Salazar's ceding of the bases, which prevented their forced occupation. The espionage caused several diplomatic tensions, and the large number of German spies denounced by the British, operating on the islands under the watchful eye of the PVDE and Salazar, weakened the Portuguese-British alliance. Using primary source documents in the archives of the Ministério dos Negócios Estrangeiros (MNE), this paper introduces the spies that operated on the islands, their missions and motives, organizations, and modus operandi. As in a chess game, every move required careful thought, and the spies were valuable assets for controlling and using information that could lead to the occupation of the islands and, ultimately, change the tide of the war.
Keywords: espionage, Azores, WWII, neutrality
Procedia PDF Downloads 66
25076 Control the Flow of Big Data
Authors: Shizra Waris, Saleem Akhtar
Abstract:
Big data is a research area receiving attention from academia and IT communities. In the digital world, the amounts of data produced and stored have grown enormously within a short period of time, and this fast-increasing rate of data has created many challenges. In this paper, we use functionalism and structuralism paradigms to analyze the genesis of big data applications and its current trends. The paper presents a complete discussion of state-of-the-art big data technologies based on batch and stream data processing, and the strengths and weaknesses of these technologies are analyzed. This study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.
Keywords: computer, IT community, industry, big data
Procedia PDF Downloads 194
25075 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems that offer high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near-real time. To disseminate warnings to end users, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
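For reference, one common conservative form of the 2D shallow water equations solved by finite-volume schemes is given below; the specific source terms used by the authors' model (friction law, rainfall forcing, tidal boundary treatment) are not stated in the abstract, so the terms R and tau here are placeholders.

\begin{aligned}
&\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = R,\\
&\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2} g h^{2}\right) + \frac{\partial (huv)}{\partial y} = -g h \frac{\partial z_b}{\partial x} - \tau_{bx},\\
&\frac{\partial (hv)}{\partial t} + \frac{\partial (huv)}{\partial x} + \frac{\partial}{\partial y}\!\left(hv^{2} + \tfrac{1}{2} g h^{2}\right) = -g h \frac{\partial z_b}{\partial y} - \tau_{by},
\end{aligned}

where h is the water depth, (u, v) are depth-averaged velocities, g is gravitational acceleration, z_b is the bed elevation, R is a rainfall source term, and tau_{bx}, tau_{by} are bed friction terms.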
Procedia PDF Downloads 89
25074 High Performance Computing and Big Data Analytics
Authors: Branci Sarra, Branci Saadia
Abstract:
Because of the rapid growth of data, many computer science tools have been developed to process and analyze these big data. High-performance computing architectures have been designed to meet the processing needs of big data, from the standpoint of transaction processing as well as strategic and tactical analytics. The purpose of this article is to provide a historical and global perspective on the recent trend of high-performance computing architectures, especially those related to analytics and data mining.
Keywords: high performance computing, HPC, big data, data analysis
Procedia PDF Downloads 520
25073 A Landscape of Research Data Repositories in Re3data.org Registry: A Case Study of Indian Repositories
Authors: Prashant Shrivastava
Abstract:
The purpose of this study is to explore the re3data.org registry in order to identify the workflow for registering research data repositories. A further objective is to depict the present development of research data repositories in India. The study begins with an approach to understanding the re3data.org registry framework and schema design and then proceeds to explore the status of Indian research data repositories in the registry. Research data repositories are gaining wider relevance due to e-research concepts, and the re3data.org registry is now a good tool for users and researchers to identify appropriate research data repositories for their research requirements. In the Indian environment, a compatible national research data policy is needed to boost the management of research data, and a registry for research data repositories is a crucial tool for discovering specific information in a specific domain. Research data repositories in India have not previously been studied; both the re3data.org registry and the status of Indian research data repositories are discussed in this study.
Keywords: research data, research data repositories, research data registry, re3data.org
Procedia PDF Downloads 324
25072 Romantic Theory in Comparative Perspective: Schlegel’s Philosophy of History and the Spanish Question
Authors: Geena Kim
Abstract:
The Romantic movements in Spain and Germany served as turning points in European literary history, advancing cognitive-emotional ideals of the essential unity between literature, life, and the natural world in reaction against the rising tide of mechanization, urban growth, and industrial progress. This paper offers a comparative study of the literary-theoretic underpinnings of the Romantic movements in Spain and Germany, particularly with regard to the reception history of Schlegel's Romantic philosophy of history. By far one of the better-known figures of the period, Schlegel has traditionally been considered one of the principal theorists of German Romanticism and one of the first to embrace and acknowledge the more radical changes that the movement brought forth. His well-studied contributions to German Romanticism were certainly significant domestically, but their impact on comparatively less industrialized Spain has been largely neglected, a puzzling oversight in light of Schlegel's extensive efforts in advocating for the dissemination of Spanish literature under the guise of a kind of pan-European Romanticism. Indeed, Schlegel's somewhat problematically exoticizing view of Spain as the quintessential embodiment of the spirit of Romanticism was itself enormously influential on the genesis and growth of Spanish Romantic theory. This was especially significant considering earlier, pre-Romantic tropes of the 'black legend,' by which Spain was demonized with even cruder essentializing, nationalistic language. By comparing Schlegel's theorizing about Spain with contributions to Romantic theory by Hispanophone writers, this paper sheds light on questions of linguistic identity and national influence from two infrequently compared contexts of European Romanticism.
Keywords: Schlegel, Spanish romantic theory, German romanticism, romantic philosophy
Procedia PDF Downloads 190