Search results for: data source
26408 The Face Sync-Smart Attendance
Authors: Bekkem Chakradhar Reddy, Y. Soni Priya, Mathivanan G., L. K. Joshila Grace, N. Srinivasan, Asha P.
Abstract:
Currently, there are many problems related to marking attendance in schools, offices, and other places, and organizations tasked with collecting daily attendance data have numerous concerns. There are different ways to mark attendance; the most commonly used method is collecting data manually by calling each student, which is slow and problematic. Many new technologies now help to mark attendance automatically, reducing work and recording the data. We propose to implement attendance marking using these technologies. We have implemented a system based on face identification and face analysis. The project was developed by gathering face images and analyzing the data, using deep learning algorithms to recognize faces effectively. The attendance record is then forwarded to the host by email. The project was implemented in Python; the Python libraries used are CV2, Face Recognition, and Smtplib.
Keywords: python, deep learning, face recognition, CV2, smtplib, Dlib
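As a minimal sketch of what the Face Recognition library does internally: it reduces each face to a 128-dimensional encoding and declares a match when the Euclidean distance to a stored encoding falls below a tolerance (0.6 by default in that library). The plain-Python sketch below reproduces only that matching step; the names and 3-d toy encodings are illustrative, not the project's actual data:

```python
import math

TOLERANCE = 0.6  # default match threshold used by the face_recognition library

def euclidean(a, b):
    """Euclidean distance between two face encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mark_attendance(known, probe, tolerance=TOLERANCE):
    """Return the names whose stored encoding is within `tolerance` of the probe."""
    return [name for name, enc in known.items()
            if euclidean(enc, probe) <= tolerance]

# Toy 3-d encodings stand in for the real 128-d vectors.
known = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
probe = [0.12, 0.21, 0.33]          # encoding of the face seen by the camera
present = mark_attendance(known, probe)
```

In the full pipeline, the resulting `present` list would be emailed to the host via smtplib.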
Procedia PDF Downloads 58
26407 Prediction of Boundary Shear Stress with Flood Plains Enlargements
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less, on the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. During floods, part of the flow is carried by the main channel and the rest by the flood plains. For such compound asymmetric channels, the flow structure becomes complicated due to momentum exchange between the main channel and the adjoining flood plains. The distribution of boundary shear in the subsections provides the concept of momentum transfer across the interface of the main channel and the flood plains. Obtaining accurate experimental data is very difficult because of the complexity of the problem. Hence, the CES software has been used to determine the shear stresses at different sections of an open channel having asymmetric flood plains on both sides of the main channel, and the results are compared with symmetric flood plains for various geometrical shapes and flow conditions. Error analysis is also performed to assess the accuracy of the model.
Keywords: depth-averaged velocity, non-prismatic compound channel, relative flow depth, velocity distribution
Procedia PDF Downloads 177
26406 Geographical Data Visualization Using Video Games Technologies
Authors: Nizar Karim Uribe-Orihuela, Fernando Brambila-Paz, Ivette Caldelas, Rodrigo Montufar-Chaveznava
Abstract:
In this paper, we present advances in the implementation of a strategy to visualize geographical data using a Software Development Kit (SDK) for video games. We use multispectral images from the Landsat 7 platform and Laser Imaging Detection and Ranging (LIDAR) data from the National Institute of Statistics and Geography (INEGI) of Mexico. We select a place of interest from the Landsat imagery and apply some processing to the image (rotation, atmospheric correction, and enhancement). The resulting image is our grayscale color map, to be fused with the LIDAR data, which was selected using the same coordinates as the Landsat image. The LIDAR data is translated to 8-bit raw data. Both images are fused in software developed using Unity (an SDK employed for video games). The resulting image is then displayed and can be explored by moving around. The idea is that the software could be used by students of geology and geophysics at the Engineering School of the National University of Mexico. They would download the software and the images corresponding to a geological place of interest to a smartphone and could virtually visit and explore the site with a virtual reality visor such as Google Cardboard.
Keywords: virtual reality, interactive technologies, geographical data visualization, video games technologies, educational material
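The "translated to 8-bit raw data" and grayscale-fusion steps described above can be sketched as a simple rescale-and-blend. The elevation values, pixel values, and the 50/50 blend weight below are illustrative assumptions, not INEGI or Landsat data:

```python
def to_8bit(values):
    """Linearly rescale raw LIDAR elevations into the 0-255 range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on perfectly flat terrain
    return [round(255 * (v - lo) / span) for v in values]

def fuse(gray, height, alpha=0.5):
    """Blend a grayscale Landsat pixel with the co-registered 8-bit LIDAR value."""
    return [round(alpha * g + (1 - alpha) * h) for g, h in zip(gray, height)]

heights = [1200.0, 1350.0, 1500.0]   # metres, illustrative LIDAR samples
gray = [100, 150, 200]               # illustrative Landsat grayscale pixels
fused = fuse(gray, to_8bit(heights))
```

In the Unity application, the fused values would drive the height and shading of the explorable terrain mesh.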
Procedia PDF Downloads 246
26405 Authority and Responsibility of Turkish Physical Education Teachers
Authors: Mufide Cotuk, Muslim Bakir
Abstract:
National education in Turkey aims to provide superior education opportunities to students in order to develop their intellectual abilities in accordance with contemporary pedagogy. Physical education (PE) plays an important role in this context. Various factors affect the quality and efficiency of the PE process. Factors related to governance are crucially important, especially those of authority and responsibility. For educational institutions at the high school level, the factors affecting authority and responsibility have not been clearly delineated. Therefore, the aim of this study was to examine the authority and responsibility of PE teachers and the balance between them. The study sample consisted of 60 PE teachers (19 women, 41 men) at 57 high schools in Istanbul (65% state and 35% private institutions). All PE teachers completed the study questionnaire, which collected demographic and institutional data as well as knowledge and attitudes regarding authority and responsibility issues. The determination of authority and responsibility of PE teachers is grounded in the law for government officials, course-passing regulations, and school sports regulations. The PE teachers declared as the primary source of their authority and responsibility the 'school sports regulations' (56.7% of PE teachers), the 'course-passing regulations' (36.7% of PE teachers), and the 'law for government officials' (30.0% of PE teachers). The PE teachers mentioned that the school administration burdened them with additional responsibilities (58.3% of PE teachers). Such 'additional' responsibilities were primarily related to 'disciplinary regulations' (21.7% of PE teachers) and 'maintenance of school order' (16.0% of PE teachers). In conclusion, the authority and responsibility of PE teachers were not well balanced. As authority issues were not clearly stated, 'compulsory' responsibilities increased, causing this imbalance.
Keywords: authority, PE teacher, responsibility, sport management
Procedia PDF Downloads 345
26404 The Routes of Human Suffering: How Point-Source and Destination-Source Mapping Can Help Victim Services Providers and Law Enforcement Agencies Effectively Combat Human Trafficking
Authors: Benjamin Thomas Greer, Grace Cotulla, Mandy Johnson
Abstract:
Human trafficking is one of the fastest growing international crimes and human rights violations in the world. The United States Department of State (State Department) estimates that some 800,000 to 900,000 people are trafficked across sovereign borders annually, with approximately 14,000 to 17,500 of these people coming into the United States. Today's slavery is conducted by unscrupulous individuals who are often connected to organized criminal enterprises and transnational gangs, extracting huge monetary sums. According to the International Labour Organization (ILO), human traffickers collect approximately $32 billion worldwide annually. Surpassed only by narcotics dealing, trafficking of humans is tied with illegal arms sales as the second largest criminal industry in the world and is the fastest growing field in the 21st century. Perpetrators of this heinous crime abound. They are not limited to single or 'sole practitioners' of human trafficking, but rather often include Transnational Criminal Organizations (TCOs), domestic street gangs, labor contractors, and otherwise seemingly ordinary citizens. Monetary gain is being elevated over territorial disputes, and street gangs are increasingly operating in collaboration with TCOs to further disguise their criminal activity and to utilize their vast networks in an attempt to avoid detection. Traffickers rely on a network of clandestine routes to sell their commodities with impunity. As law enforcement agencies seek to retard the expansion of transnational criminal organizations' entry into human trafficking, it is imperative that they develop reliable mapping of known exploitative trafficking routes.
In a recent report given to the Mexican Congress, the Procuraduría General de la República (PGR) disclosed that from 2008 to 2010 it had identified at least 47 unique criminal networking routes used to traffic victims, and that Mexico's estimated domestic victims number between 800,000 adults and 20,000 children annually. Designing a reliable mapping system is a crucial step toward an effective law enforcement response and a successful victim support system. Creating this mapping analytic is exceedingly difficult. Traffickers are constantly changing the way they traffic and exploit their victims. They swiftly adapt to local environmental factors and react remarkably well to market demands, exploiting limitations in the prevailing laws. This article will highlight how human trafficking has become one of the fastest growing and most high-profile human rights violations in the world today; compile current efforts to map and illustrate trafficking routes; and demonstrate how the proprietary analytical mapping of point-source and destination-source data can help local law enforcement, governmental agencies, and victim services providers respond effectively to the type and nature of trafficking in their specific geographical locale. Trafficking transcends state and international borders. It demands effective and consistent cooperation between local, state, and federal authorities. Each region of the world has different impact factors, which create distinct challenges for law enforcement and victim services. Our mapping system lays the groundwork for a targeted anti-trafficking response.
Keywords: human trafficking, mapping, routes, law enforcement intelligence
Procedia PDF Downloads 381
26403 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information they must rely on. It is, therefore, urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study addresses the particular case of the aquifers of the transboundary Taoudéni sedimentary basin in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first described the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behaviour of these units was studied through the analysis of spatio-temporal variations in piezometry. The data consist of 692 static level measurement points and 8 observation wells distributed across the area, capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometry, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainy season: the peaks of the piezometric level generally occur between September and October in reaction to the rainfall peaks between July and August, and low groundwater levels are observed between May and July.
This relatively slow reaction of the aquifer is observed in all wells; from it, the influence of the geological nature of the layers, through their structure and hydrodynamic properties, was deduced. The spatial analysis reveals that piezometric levels vary between 166 and 633 m, with a trend indicating flow generally from southwest to northeast and recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with one component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the sources of Nasso. In the source area (Kou basin), the maximum average stock variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
Keywords: hydrodynamic behaviour, taoudeni basin, piezometry, water table fluctuation
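The Water Table Fluctuation method referenced above estimates the storage change as the specific yield times the head rise, ΔS = Sy × Δh. A minimal sketch follows; the specific yield and head rise are assumed values chosen only to reproduce the order of magnitude reported for the Kou basin, not the study's measured parameters:

```python
def wtf_storage_change(head_rise_m, specific_yield):
    """Water Table Fluctuation estimate: ΔS = Sy × Δh, returned in millimetres."""
    return specific_yield * head_rise_m * 1000

# Illustrative: a 0.974 m seasonal head rise with an assumed Sy = 0.05
# gives a stock variation of ~48.7 mm/year, the upper bound reported above.
delta_s = wtf_storage_change(0.974, 0.05)
```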
Procedia PDF Downloads 65
26402 Human Connection over Technology: Evidence, Pitfalls, and Promise of Collaboration Technologies in Promoting Full Spectrum Participation of the Virtual Workforce
Authors: Michelle Marquard
Abstract:
The evidence for collaboration technologies (CTs) as a source of business productivity has never been stronger and grows each day. At the same time, paradoxically, there is increasingly greater concern about the challenge CTs present to the unity and well-being of the virtual workforce, but nowhere in the literature has an empirical understanding of these linkages been set out. This study attempted to address this gap by using virtual distance as a measure of the efficacy of CTs in reducing the psychological distance among people. Data from 350 managers and 101 individual contributors across twelve functions in six major industries showed that business value is related to collaboration (r = .84, p < .01), which, in turn, is associated with full spectrum participation (r = .60, p < .01), a summative function of inclusion, integration, and we-intention. Further, virtual distance is negatively related to both collaboration (r = -.54, p < .01) and full spectrum participation (r = -.26, p < .01). Additionally, the CIO-CDO relationship is a factor in the degree to which virtual distance is managed in the organization (r = -.26, p < .01). Overall, the results support the positive relationship between business value and collaboration. They also suggest that the extent to which collaboration can be fostered may depend on the degree of full spectrum participation, or the level of inclusion, integration, and we-intention among members. Finally, the results indicate that CTs, when managed wisely to lower virtual distance, are a compelling concomitant to collaboration and full spectrum participation. A strategic outcome of this study is an instrumental blueprint of CTs and virtual distance in relation to full spectrum participation that should serve as a shared dashboard for CIOs, CHROs, and CDOs.
Keywords: business value, collaboration, inclusion, integration, we-intention, full spectrum participation, collaboration technologies, virtual distance
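The r values reported above are Pearson correlation coefficients. As a sketch of how such coefficients are computed from paired observations, the following uses small illustrative score vectors; the study's raw survey data are not reproduced here:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-respondent scores only.
collaboration    = [3.1, 4.2, 2.5, 4.8, 3.9]
business_value   = [2.9, 4.4, 2.2, 4.9, 3.7]
virtual_distance = [4.0, 2.1, 4.6, 1.5, 2.8]

r_positive = pearson_r(business_value, collaboration)    # expected > 0
r_negative = pearson_r(virtual_distance, collaboration)  # expected < 0
```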
Procedia PDF Downloads 346
26401 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
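A minimal sketch of the function class considered, a fully connected feedforward network with ReLU activations and a linear output. The toy weights and the width-2, depth-2 shape are illustrative only; in the paper, width and depth grow with sample size and the weights are unbounded:

```python
def relu(x):
    """Rectified linear unit applied elementwise."""
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """One fully connected layer: y = Wx + b."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    """Feedforward pass: ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Width-2, depth-2 toy network with hand-picked weights.
layers = [([[1.0, -1.0], [2.0, 0.5]], [0.0, -1.0]),   # hidden layer
          ([[1.0, 1.0]], [0.5])]                      # linear output
y = forward([1.0, 2.0], layers)
```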
Procedia PDF Downloads 41
26400 Scenario Analysis to Assess the Competitiveness of Hydrogen in Securing the Italian Energy System
Authors: Gianvito Colucci, Valeria Di Cosmo, Matteo Nicoli, Orsola Maria Robasto, Laura Savoldi
Abstract:
The deployment of the hydrogen value chain is likely to be boosted in the near term by the energy security measures planned by European countries to face the recent energy crisis. In this context, some countries are recognized to have a crucial role in the geopolitics of hydrogen as importers, consumers, and exporters. According to the European Hydrogen Backbone initiative, Italy would be part of one of the five corridors that will shape the European hydrogen market. However, the set targets are very ambitious and require large investments to rapidly develop effective hydrogen policies: in this regard, scenario analysis is becoming increasingly important to support energy planning, and energy system optimization models appear to be suitable tools to quantitatively carry out that kind of analysis. This work aims to assess the competitiveness of hydrogen in contributing to Italian energy security in the coming years, under different price and import conditions, using the energy system model TEMOA-Italy. A wide spectrum of hydrogen technologies is included in the analysis, covering the production, storage, delivery, and end-use stages. National production from fossil fuels with and without CCS, as well as electrolysis and import of low-carbon hydrogen from North Africa, are the supply solutions that would compete with others, such as the natural gas, biomethane, and electricity value chains, to satisfy sectoral energy needs (transport, industry, buildings, agriculture). Scenario analysis is then used to study this competition under different price and import conditions.
The use of TEMOA-Italy allows the work to capture the interaction between economic and technological detail, which is much needed in energy policy assessment, while the transparency of the analysis and of the results is ensured by the full accessibility of the TEMOA open-source modeling framework.
Keywords: energy security, energy system optimization models, hydrogen, natural gas, open-source modeling, scenario analysis, TEMOA
Procedia PDF Downloads 116
26399 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground
Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee
Abstract:
To assess the risk of underground structures and the surrounding ground, we collect basic data by engineering methods of measurement, exploration, and surveys, and derive the risk through appropriate analysis and assessment for urban railroad underground structures and the surrounding ground, including station inflow. Basic data are obtained by fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground penetrating radar, and a lightweight deflectometer, and are evaluated to determine whether they exceed threshold values. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground, and we develop a risk management system to manage these data efficiently and to support a convenient interface environment for data input/output.
Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk
Procedia PDF Downloads 336
26398 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focusses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them to an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, feature-matching objects may require expert input. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage one, the object characterization from stage two, and the radiation map from stage three and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to produce the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
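The final stage, classification in Bq/kg, can be sketched as computing specific activity (activity over mass) from the fused evidence and binning it against category thresholds. The thresholds, category labels, and measurement values below are illustrative assumptions for the sketch, not regulatory limits:

```python
def specific_activity(activity_bq, mass_kg):
    """Specific activity in Bq/kg from total activity and estimated mass."""
    return activity_bq / mass_kg

def classify(bq_per_kg, vllw=4e4, llw=4e6):
    """Bin specific activity into waste categories (illustrative thresholds)."""
    if bq_per_kg <= vllw:
        return "VLLW"
    if bq_per_kg <= llw:
        return "LLW"
    return "ILW/HLW"

# Fused-evidence output for one detected pipe section (illustrative numbers):
# total activity from the radiological map, mass estimated in stage two.
label = classify(specific_activity(2.0e8, 150.0))
```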
Procedia PDF Downloads 200
26397 Integration of Big Data to Predict Transportation for Smart Cities
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
An intelligent transportation system is essential to build smarter cities, and machine-learning-based transportation prediction could be a highly promising approach, making invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that the existing headway model cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the result. The prototype model is built on real-time bus data. The data are gathered through public data portals and a real-time Application Programming Interface (API) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations against traffic environment factors (road speeds, station conditions, weather, and information on buses operating in real time). The prototype model is designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headways by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
Keywords: big data, machine learning, smart city, social cost, transportation network
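A minimal sketch of the pattern-finding idea: condition observed delays on traffic-environment factors and predict by conditional averages. This stands in for the RapidMiner model only as an illustration of the approach; the records, feature names, and values are all assumptions:

```python
from collections import defaultdict

def fit_delay_patterns(records):
    """Average the observed delay (minutes) per (weather, congestion) condition."""
    sums = defaultdict(lambda: [0.0, 0])
    for weather, congestion, delay in records:
        cell = sums[(weather, congestion)]
        cell[0] += delay
        cell[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

def predict(patterns, weather, congestion, default=0.0):
    """Predicted delay for a condition; fall back to `default` if unseen."""
    return patterns.get((weather, congestion), default)

# Illustrative historical records: (weather, road congestion, delay in minutes).
history = [("rain", "high", 7.0), ("rain", "high", 9.0),
           ("clear", "low", 1.0), ("clear", "low", 3.0)]
patterns = fit_delay_patterns(history)
```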
Procedia PDF Downloads 260
26396 Impact of Fermentation Time and Microbial Source on Physicochemical Properties, Total Phenols and Antioxidant Activity of Finger Millet Malt Beverage
Authors: Henry O. Udeh, Kwaku G. Duodu, Afam I. O. Jideani
Abstract:
Finger millet (FM) [Eleusine coracana] is considered a potential 'super grain' by the United States National Academies and one of the most nutritious of all the major cereals. The regular consumption of FM-based diets has been associated with a reduced risk of diabetes, cataract, and gastrointestinal tract disorders. Hypoglycaemic, hypocholesterolaemic, anticataractogenic, and other health-promoting properties have been reported. This study examined the effect of fermentation time and microbial source on the physicochemical properties, phenolic compounds, and antioxidant activity of two finger millet (FM) malt flours. Sorghum was used as an external reference. The grains were malted, mashed, and fermented using the grain microflora and Lactobacillus fermentum. The phenolic compounds of the resulting beverage were identified and quantified using ultra-performance liquid chromatography (UPLC) and a mass spectrometry (MS) system. A fermentation-time-dependent decrease in the pH and viscosity of the beverages, with a corresponding increase in sugar content, was noted. The phenolic compounds found in the FM beverages were protocatechuic acid, catechin, and epicatechin. A decrease in the total phenolics of the beverages was observed with increased fermentation time. The beverages exhibited 2,2-diphenyl-1-picrylhydrazyl and 2,2′-azinobis-3-ethylbenzthiazoline-6-sulfonic acid radical scavenging and iron-reducing activities, which were significantly (p < 0.05) reduced at 96 h of fermentation for both microbial sources. The 24 h fermented beverages retained a higher amount of total phenolics and had higher antioxidant activity compared to the other fermentation periods. The study demonstrates that FM could be utilised as a functional grain in the production of a non-alcoholic beverage with important phenolic compounds for health promotion and wellness.
Keywords: antioxidant activity, eleusine coracana, fermentation, phenolic compounds
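Radical scavenging activity in DPPH-type assays is conventionally reported as the percentage inhibition of absorbance, 100 × (A_control − A_sample) / A_control. A small sketch with illustrative absorbance readings; the paper's raw assay values are not reproduced here:

```python
def scavenging_pct(a_control, a_sample):
    """Radical scavenging activity as % inhibition of control absorbance."""
    return 100.0 * (a_control - a_sample) / a_control

# Illustrative absorbance readings for a DPPH assay (control vs. beverage).
activity = scavenging_pct(0.80, 0.32)
```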
Procedia PDF Downloads 107
26395 Integrated Model for Enhancing Data Security Performance in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing is an important and promising field in the recent decade, as it allows the sharing of resources, services, and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud, and data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used; the password is protected with SHA-2 as a one-way function. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish
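Blowfish is not in the Python standard library (third-party packages such as pycryptodome provide it), but the SHA-2 side of the model can be sketched with the stdlib hashlib. The file contents, the salt, and the helper names below are illustrative assumptions, not the authors' implementation:

```python
import hashlib

def sha2_digest(data: bytes) -> str:
    """SHA-256 (a SHA-2 variant) hex digest, used here as an integrity tag."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected: str) -> bool:
    """Recompute the tag on download and compare with the stored one."""
    return sha2_digest(data) == expected

def hash_password(password: str, salt: str = "") -> str:
    """One-way password storage, as the model describes (salt is an assumption)."""
    return sha2_digest((salt + password).encode())

tag = sha2_digest(b"uploaded file contents")
ok = verify_integrity(b"uploaded file contents", tag)
tampered = verify_integrity(b"tampered contents", tag)
```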
Procedia PDF Downloads 477
26394 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as which members play in a game and the strategy of the game, based on analysis of accumulated sports data is widely attempted. In fact, in the NBA, the basketball league where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a task that is difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions are whether the lineup should be changed and whether a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and this scoring data can be treated as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a model that predicts the score. This model can identify the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season.
We then apply the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
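A minimal sketch of the recurrent idea: a scalar RNN cell run over a per-play scoring sequence, followed by a linear readout whose output can be compared across candidate lineups. The hand-picked weights and scoring sequences are illustrative assumptions, not parameters trained on NBA data:

```python
import math

def rnn_step(h, x, w_h, w_x, b):
    """One recurrent step on a scalar state: h' = tanh(w_h*h + w_x*x + b)."""
    return math.tanh(w_h * h + w_x * x + b)

def score_sequence(xs, w_h=0.5, w_x=1.0, b=0.0, w_out=2.0):
    """Run the scalar RNN over a scoring sequence, then apply a linear readout."""
    h = 0.0
    for x in xs:
        h = rnn_step(h, x, w_h, w_x, b)
    return w_out * h

# Illustrative per-play scoring contributions for two candidate lineups.
current = score_sequence([1.0, 0.0, 1.0])
small_ball = score_sequence([1.0, 1.0, 1.0])
```

In the full model, the readout would additionally condition on an NN encoding of the game situation before comparing lineups.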
Procedia PDF Downloads 133
26393 Location and Group Specific Differences in Human-Macaque Interactions in Singapore: Implications for Conflict Management
Authors: Srikantan L. Jayasri, James Gan
Abstract:
The changes in Singapore's land use, the natural preference of long-tailed macaques (Macaca fascicularis) for forest edges, and their adaptability have led to an interface between humans and macaques. Studies have shown that two-thirds of human-macaque interactions in Singapore were related to human food. We aimed to assess differences among macaque groups in their dependence on human food and their interaction with humans as indicators of the level of interface. Field observations using instantaneous scan sampling and all-occurrence ad libitum sampling were carried out for 23 macaque groups over 28 days, recording 71.5 hours of observations. Data on macaque behaviour, demography, and the frequency and nature of human-macaque interactions were collected. None of the groups was found to rely completely on human food sources. Of the 23 groups, 40% were directly or indirectly provisioned by humans, and one-third of the groups observed engaged in some form of interaction with humans. The three groups that were directly fed by humans contributed 83% of the total human-macaque interactions observed during the study. Our study indicated that interactions between humans and macaques exist in specific groups and in those fed by humans regularly. Although feeding monkeys is illegal in Singapore, such incidents seem to persist in specific locations. We emphasize the importance of group- and location-specific assessment of existing human-wildlife interactions. Conflict management strategies should be location-specific to address the causes of interactions.
Keywords: primates, Southeast Asia, wildlife management, Singapore
Procedia PDF Downloads 47926392 Challenges in Multi-Cloud Storage Systems for Mobile Devices
Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta
Abstract:
The demand for cloud storage is increasing because users want continuous access to their data. Cloud storage has revolutionized the way users access their data. Many cloud storage service providers are available, such as DropBox and G Drive, offering limited free storage; for extra storage, users have to pay, which acts as a burden on them. To avoid the issue of limited free storage, the concept of multi-cloud storage was introduced. In this paper, we discuss the limitations of existing multi-cloud storage systems for mobile devices. Keywords: cloud storage, data privacy, data security, multi cloud storage, mobile devices
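To make the multi-cloud idea concrete, here is a minimal sketch of striping a blob across several storage back ends so that no single provider holds the complete file. The in-memory dicts and the provider names are invented stand-ins for real cloud APIs; a production system would also handle encryption, authentication, and provider failure.

```python
import math

# Hypothetical providers; each dict stands in for one cloud's object store.
providers = {"cloud_a": {}, "cloud_b": {}, "cloud_c": {}}

def put(key: str, data: bytes, chunk_size: int = 4) -> None:
    """Split the blob into chunks and distribute them round-robin."""
    names = list(providers)
    n_chunks = math.ceil(len(data) / chunk_size)
    for i in range(n_chunks):
        chunk = data[i * chunk_size:(i + 1) * chunk_size]
        providers[names[i % len(names)]][(key, i)] = chunk

def get(key: str) -> bytes:
    """Reassemble the blob by collecting chunks from all providers in index order."""
    chunks = {}
    for store in providers.values():
        for (k, i), chunk in store.items():
            if k == key:
                chunks[i] = chunk
    return b"".join(chunks[i] for i in sorted(chunks))

put("report", b"confidential-data")
print(get("report"))  # b'confidential-data'
```

Striping like this also addresses the free-quota limit mentioned above, since each provider stores only a fraction of the total data.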
Procedia PDF Downloads 69926391 The Combined Effect of Different Levels of Fe(III) in Diet and Cr(III) Supplementation on the Ca Status in Wistar
Authors: Staniek Halina
Abstract:
An inappropriate supply of trace elements such as iron(III) and chromium(III) may be a risk factor for many metabolic disorders (e.g., anemia and diabetes) as well as a cause of toxic effects. However, little is known about their mutual interactions and their impact on these disturbances. The effects of Cr(III) supplementation under a deficit or excess supply of Fe(III) in vivo are not yet known. The objective of the study was to investigate the combined effect of different Fe(III) levels in the diet and simultaneous Cr(III) supplementation on Ca distribution in the organs of healthy rats. The assessment was based on a two-factor (2x3) experiment carried out on 54 female Wistar rats (Rattus norvegicus). The animals were randomly divided into 9 groups and for 6 weeks were fed semi-purified AIN-93 diets with three different Fe(III) levels as factor A [control (C) 45 mg/kg (100% of the Recommended Daily Allowance for rodents), deficient (D) 5 mg/kg (10% RDA), and oversupply (H) 180 mg/kg (400% RDA)]. The second factor (B) was simultaneous dietary supplementation with Cr(III) at doses of 1, 50, and 500 mg/kg of the diet. Iron(III) citrate was the source of Fe(III). The complex of Cr(III) with propionic acid, also called Cr₃ or chromium(III) propionate (CrProp), was used as the source of Cr(III) in the diet. The Ca content of the analysed samples (liver, kidneys, spleen, heart, and femur) was determined with the Atomic Absorption Spectrometry (AAS) method. It was found that the dietary Fe(III) supply and Cr(III) supplementation, independently and in combination, influenced Ca metabolism in healthy rats. Regardless of the Cr(III) supplementation, the oversupply of Fe(III) (180 mg/kg) decreased the Ca content in the liver and kidneys, while it increased the Ca saturation of bone tissue. High Cr(III) doses lowered the hepatic Ca content.
Moreover, it tended to decrease the Ca content in the kidneys and heart, but this effect was not statistically significant. A combined effect of the experimental factors on the Ca content in the liver and the femur was observed. With the increase in Fe(III) content in the diet, there was a decrease in the Ca level in the liver and an increase in bone saturation, and the additional Cr(III) supplementation intensified those effects. The study proved that the Fe(III) content of the diet, independently and in combination with Cr(III) supplementation, affected the Ca distribution in the organisms of healthy rats. Keywords: calcium, chromium(III), iron(III), rats, supplementation
Procedia PDF Downloads 19826390 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches
Authors: Wuttigrai Ngamsirijit
Abstract:
Talent management in today’s modern organizations has become data-driven, owing to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced several obstacles in exploiting data and information to reach effective talent management decisions. These include process-based data and records; insufficient human capital-related measures and metrics; a lack of capability for strategic data modeling; and the time consumed in aggregating numbers and making decisions. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics for strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies. Keywords: decision making, human capital analytics, talent management, talent value chain
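One way a predictive-then-prescriptive pipeline of the kind described above might look is sketched below: a logistic attrition-risk score feeding a ranking that directs retention effort. The feature names, weights, and employees are entirely hypothetical; a real model would be fit to the organization's historical HR records.

```python
import math

# Hypothetical weights for an attrition-risk score (illustrative, not fitted).
weights = {"tenure_years": -0.4, "overtime_hours": 0.05, "pay_ratio_vs_market": -2.0}
bias = 1.0

def attrition_risk(employee: dict) -> float:
    """Logistic link mapping a linear score to a probability in (0, 1)."""
    z = bias + sum(weights[k] * employee[k] for k in weights)
    return 1 / (1 + math.exp(-z))

staff = [
    {"tenure_years": 1, "overtime_hours": 30, "pay_ratio_vs_market": 0.9},
    {"tenure_years": 8, "overtime_hours": 5, "pay_ratio_vs_market": 1.1},
]

# Prescriptive step: rank talent by risk so retention effort goes where needed.
ranked = sorted(staff, key=attrition_risk, reverse=True)
print(round(attrition_risk(ranked[0]), 2))  # 0.57
```

The prescriptive layer here is deliberately trivial (a sort); richer models would optimize interventions against cost and capacity constraints.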
Procedia PDF Downloads 18726389 Case Study of High-Resolution Marine Seismic Survey in Shallow Water, Arabian Gulf, Saudi Arabia
Authors: Almalki M., Alajmi M., Qadrouh Y., Alzahrani E., Sulaiman A., Aleid M., Albaiji A., Alfaifi H., Alhadadi A., Almotairy H., Alrasheed R., Alhafedh Y.
Abstract:
High-resolution marine seismic surveying is a well-established technique commonly used to characterize near-surface sediments and geological structures in shallow water. We conducted a single-channel seismic survey to provide high-quality seismic images of near-surface sediments up to 100 m depth in the Jubal coastal area, Arabian Gulf. An eight-hydrophone streamer was used to collect stacked seismic traces along a 5 km seismic line. To reach the required depth, we used a spark system that discharges energies above 5000 J, with an expected frequency output spanning the range from 200 to 2000 Hz. A suitable processing flow was implemented to enhance the signal-to-noise ratio of the seismic profile. We found that the shallow sedimentary layers at the study site have a complex pattern of reflectivity, which decays significantly due to the amount of source energy used as well as the multiples associated with the seafloor. In fact, the results reveal that single-channel marine seismic surveying in shallow water is a cost-effective technique that can be easily repeated to observe any possible changes in the physical properties of the near-surface layers. Keywords: shallow marine single-channel data, high resolution, frequency filtering, shallow water
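A typical step in the processing flow mentioned above is band-pass filtering to the source's 200-2000 Hz output band. The sketch below applies a zero-phase Butterworth band-pass to a synthetic trace; the sampling rate, trace, and noise are invented for illustration and are not the survey data.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 8000.0  # assumed sampling rate in Hz, comfortably above the 2000 Hz band edge
t = np.arange(0, 0.5, 1 / fs)

# Synthetic trace: a 600 Hz "reflection" buried in low-frequency swell noise.
trace = np.sin(2 * np.pi * 600 * t) + 3 * np.sin(2 * np.pi * 20 * t)

# Band-pass matching the sparker's 200-2000 Hz output to suppress out-of-band noise.
sos = butter(4, [200, 2000], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trace)  # forward-backward filtering: zero phase shift

print(filtered.shape == trace.shape)  # True
```

Zero-phase filtering matters here because phase distortion would shift reflector arrival times and bias depth estimates.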
Procedia PDF Downloads 7226388 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (fcm) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In fcm clustering, the membership degree is constrained by the condition that the sum of a data object’s memberships in all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has been applied to the fuzzy c-means clustering technique; it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where our optimization problem aims to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy. Keywords: clustering, fuzzy c-means, regularization, relative entropy
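The standard (unregularized) fcm alternation that this work builds on can be sketched as follows: update the centers from the fuzzified memberships, then update the memberships from the closed-form distance formula. The data set, fuzzifier m, and iteration count are synthetic choices for illustration, not the paper's experiments.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: alternate the center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to one per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # closed-form membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # two well-separated blobs
               rng.normal(5.0, 0.1, (20, 2))])
U, centers = fcm(X)
print(U.shape, centers.shape)  # (40, 2) (2, 2)
```

The relative entropy regularization studied here would add an entropy term to the cost, which changes the membership update; the sketch shows only the baseline it modifies.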
Procedia PDF Downloads 25926387 Long Term Changes of Aerosols and Their Radiative Forcing over the Tropical Urban Station Pune, India
Authors: M. P. Raju, P. D. Safai, P. S. P. Rao, P. C. S. Devara, C. V. Naidu
Abstract:
In order to study the physical and chemical characteristics of aerosols, samples of Total Suspended Particulates (TSP) were collected using a high-volume sampler at Pune, a semi-urban location in SW India, during March 2009 to February 2010. TSP samples were analyzed for water-soluble components (F, Cl, NO3, SO4, NH4, Na, K, Ca, and Mg) and acid-soluble components (Al, Zn, Fe, and Cu) using an Ion Chromatograph and an Atomic Absorption Spectrometer. Analysis of the data revealed that the monthly mean TSP concentrations varied between 471.3 µg/m3 and 30.5 µg/m3, with an annual mean value of 159.8 µg/m3. TSP concentrations were found to be lower during post-monsoon and winter (October through February) than in summer and monsoon (March through September). Anthropogenic activities like vehicular emissions, together with dust particles originating from urban activities, were the major sources of TSP. TSP showed good correlation with all the major ionic components, especially with SO4 (R = 0.62) and NO3 (R = 0.67), indicating the impact of anthropogenic sources on the aerosols at Pune. However, the overall aerosol nature was alkaline (average pH = 6.17), mainly due to the neutralizing effects of Ca and NH4. SO4 contributed more (58.8%) to the total acidity than NO3 (41.1%), whereas Ca contributed more (66.5%) to the total alkalinity than NH4 (33.5%). The seasonality of the acid-soluble components Al, Fe, and Cu showed a remarkable increase, indicating the dominance of the soil source over man-made activities. Overall, the study indicated that aerosols at Pune were mainly affected by local sources. Keywords: chemical composition, acidic and neutralization potential, radiative forcing, urban station
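The acidity and alkalinity percentages quoted above come from simple equivalents bookkeeping: each ion's share of the summed equivalent concentrations of the acid (or base) forming ions. The sketch below illustrates the arithmetic with made-up equivalent concentrations chosen only to mirror the reported proportions; they are not the measured Pune values.

```python
# Illustrative equivalent concentrations (arbitrary µeq units); NOT measured data.
acids = {"SO4": 120.0, "NO3": 84.0}
bases = {"Ca": 140.0, "NH4": 70.0}

total_acidity = sum(acids.values())
total_alkalinity = sum(bases.values())

# Each ion's percentage share of the total acidity or alkalinity.
acid_share = {ion: 100 * eq / total_acidity for ion, eq in acids.items()}
base_share = {ion: 100 * eq / total_alkalinity for ion, eq in bases.items()}
print(round(acid_share["SO4"], 1), round(base_share["Ca"], 1))  # 58.8 66.7
```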
Procedia PDF Downloads 24426386 Efficiency of Membrane Distillation to Produce Fresh Water
Authors: Sabri Mrayed, David Maccioni, Greg Leslie
Abstract:
Seawater desalination has been accepted as one of the most effective solutions to the growing problem of a diminishing clean drinking water supply. Currently, two desalination technologies dominate the market: the thermally driven multi-stage flash distillation (MSF) and the membrane-based reverse osmosis (RO). In recent years, however, membrane distillation (MD) has emerged as a potential alternative to the established means of desalination. This research project intended to determine the viability of MD as an alternative to MSF and RO for seawater desalination. Specifically, the project involved a thermodynamic analysis of the process based on the second law of thermodynamics to determine the efficiency of MD. Data were obtained from experiments carried out on a laboratory rig. In order to determine the exergy values required for the exergy analysis, two separate models were built in Engineering Equation Solver: the ‘Minimum Separation Work Model’ and the ‘Stream Exergy Model’. The efficiency of the MD process was found to be 17.3%, and the energy consumption was determined to be 4.5 kWh to produce one cubic meter of fresh water. The results indicate that MD has potential as a technique for seawater desalination compared to RO and MSF. However, this was shown to be the case only if an alternative energy source, such as green or waste energy, was available to provide the thermal energy input to the process. If the process was required to power itself, it was shown to be highly inefficient and in no way thermodynamically viable as a commercial desalination process. Keywords: desalination, exergy, membrane distillation, second law efficiency
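A second-law efficiency of this kind is the ratio of the minimum (reversible) separation work to the actual energy input. The back-of-envelope below uses an assumed minimum separation work of about 0.78 kWh/m³ for seawater, which is the value implied by the reported 17.3% at 4.5 kWh/m³; the exact figure depends on feed salinity and recovery ratio.

```python
energy_used = 4.5   # kWh per m^3 of product water (reported input)
w_min = 0.78        # kWh per m^3, assumed minimum separation work for seawater

# Second-law (exergy) efficiency: reversible work over actual work input.
eta_second_law = w_min / energy_used
print(f"{eta_second_law:.1%}")  # 17.3%
```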
Procedia PDF Downloads 36426385 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule
Authors: Yuanxiaoyue Yang
Abstract:
Emissions trading, or ‘cap-and-trade’, has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to the unintended consequences it brings. One important consequence is that cap-and-trade could introduce the risk of creating high-level emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the environmental policy literature by linking trading activity with environmental injustice concerns and, for the first time, empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) design with an instrumental variable to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from the US EPA air market databases. While polluting facilities under CAIR are the treatment group in our DID identification, we use non-CAIR facilities from the Acid Rain Program (another NOx control program without a trading scheme) as the control group. To isolate the causal effects of trading behaviors on emissions reduction, we use eligibility for CAIR participation as the instrumental variable.
The DID results indicate that the CAIR program reduced NOx emissions from affected facilities by about 10% more than at facilities that did not participate in the CAIR program. Therefore, CAIR achieves excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still significantly decreases a CAIR-participating facility’s emissions level. This result implies that even buyers under the cap-and-trade program have achieved substantial emissions reductions. Therefore, we find little evidence of environmental injustice from the CAIR program. Keywords: air pollution, cap-and-trade, emissions trading, environmental justice
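The DID estimator described above can be sketched as a regression on treatment, period, and their interaction, whose coefficient is the treatment effect. The sketch below simulates facility-year data with a built-in -0.10 (about 10%) effect on log emissions and recovers it by ordinary least squares; the data, sample size, and noise level are invented, and it omits the paper's IV step and fixed effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n)   # 1 = CAIR facility, 0 = Acid Rain Program control
post = rng.integers(0, 2, n)      # 1 = observation after the policy took effect

# Simulated log NOx emissions with a true -0.10 (about 10%) treatment effect.
log_nox = (5.0 + 0.02 * treated - 0.01 * post
           - 0.10 * treated * post + rng.normal(0, 0.05, n))

# DID regression: the coefficient on the interaction is the causal estimate.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, log_nox, rcond=None)
print(round(beta[3], 2))
```

The IV step in the study would replace the treatment indicator with its fitted value from a first-stage regression on CAIR eligibility.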
Procedia PDF Downloads 15126384 Sampled-Data Model Predictive Tracking Control for Mobile Robot
Authors: Wookyong Kwon, Sangmoon Lee
Abstract:
In this paper, a sampled-data model predictive tracking control method is presented for mobile robots, which are modeled as constrained continuous-time linear parameter varying (LPV) systems. The presented sampled-data predictive controller is designed via a linear matrix inequality (LMI) approach. Based on the input delay approach, a controller design condition is derived by constructing a new Lyapunov function. Finally, a numerical example is given to demonstrate the effectiveness of the presented method. Keywords: model predictive control, sampled-data control, linear parameter varying systems, LPV
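The basic feasibility fact behind Lyapunov-based LMI design conditions of this kind is that a stable closed loop admits a positive-definite matrix P solving the Lyapunov equation. The sketch below checks this for a small hypothetical closed-loop matrix; it is a simplified illustration of the certificate, not the paper's sampled-data LPV condition.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable closed-loop matrix from a linearized robot model.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; a positive-definite P certifies Lyapunov stability,
# the scalar analogue of the LMI feasibility used in controller synthesis.
P = solve_continuous_lyapunov(A.T, -Q)
print(np.all(np.linalg.eigvalsh(P) > 0))  # True
```

In synthesis, the same inequality is posed with the controller gain as an unknown and solved as an LMI with a semidefinite programming solver.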
Procedia PDF Downloads 30926383 Development of Typical Meteorological Year for Passive Cooling Applications Using World Weather Data
Authors: Nasser A. Al-Azri
Abstract:
The effectiveness of passive cooling techniques is assessed using bioclimatic charts, which require the typical meteorological year (TMY) for a specified location for their development. However, TMYs are not always available, mainly due to the scarcity of solar radiation records, an essential component in developing common TMYs intended for general use. Since solar radiation is not required in the development of the bioclimatic chart, this work suggests developing TMYs based solely on the relevant parameters. This approach improves the accuracy of the developed TMY, since only the relevant parameters are considered, and it also makes the development of the TMY more accessible, since solar radiation data are not used. The paper also discusses the development of the TMY from the raw data available in the NOAA-NCDC archive of world weather data and the construction of bioclimatic charts for some randomly selected locations around the world. Keywords: bioclimatic charts, passive cooling, TMY, weather data
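A widely used criterion for picking each "typical" month in TMY construction is the Finkelstein-Schafer statistic: the mean absolute gap between a candidate month's empirical CDF and the long-term CDF of the same parameter. The sketch below illustrates it on synthetic dry-bulb temperatures; the data and the single-parameter selection are simplifications, not the paper's exact procedure.

```python
import numpy as np

def fs_statistic(month_vals, long_term_vals):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's empirical CDF and the long-term CDF."""
    grid = np.sort(long_term_vals)
    cdf_month = np.searchsorted(np.sort(month_vals), grid, side="right") / len(month_vals)
    cdf_long = np.arange(1, len(grid) + 1) / len(grid)
    return np.mean(np.abs(cdf_month - cdf_long))

rng = np.random.default_rng(0)
long_term = rng.normal(30, 5, 30 * 20)   # e.g. 20 years of daily temperatures
typical = rng.normal(30, 5, 30)          # candidate month close to the norm
atypical = rng.normal(36, 5, 30)         # unusually hot candidate month

# The month with the smaller FS value is the more "typical" one.
print(fs_statistic(typical, long_term) < fs_statistic(atypical, long_term))  # True
```

For a bioclimatic-chart TMY, the FS values of the relevant parameters (e.g. temperature and humidity) would be combined with weights, with solar radiation simply left out.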
Procedia PDF Downloads 24026382 Co-Seismic Deformation Using InSAR Sentinel-1A: Case Study of the 6.5 Mw Pidie Jaya, Aceh, Earthquake
Authors: Jefriza, Habibah Lateh, Saumi Syahreza
Abstract:
The 2016 Mw 6.5 Pidie Jaya earthquake is one of the biggest disasters to have occurred in Aceh within the last five years. This earthquake caused severe damage to many infrastructures, such as schools, hospitals, mosques, and houses, in the district of Pidie Jaya and surrounding areas. Earthquakes commonly occur in Aceh Province because Aceh-Sumatra lies on the convergent boundary where the Indo-Australian Plate is subducted beneath the Sunda Plate. This convergence is responsible for the intensification of seismicity in this region. The plates converge at a rate of 63 mm per year, and the right-lateral component is accommodated by strike-slip faulting within Sumatra, mainly along the Great Sumatran Fault. This paper presents the preliminary findings of an InSAR study aimed at investigating the co-seismic surface deformation pattern in Pidie Jaya, Aceh, Indonesia. Co-seismic surface deformation is the rapid displacement that occurs at the time of an earthquake. Co-seismic displacement mapping is required to study the behavior of seismic faults. InSAR is a powerful tool for measuring Earth surface deformation to a precision of a few centimetres. In this study, two radar images of the same area acquired at two different times are required to detect changes in the Earth’s surface. The ascending and descending Sentinel-1A (S1A) synthetic aperture radar (SAR) data and the Sentinel Application Platform (SNAP) toolbox were used to generate the SAR interferogram image. In order to visualize the InSAR interferogram, the S1A master (26 Nov 2016) and slave (26 Dec 2016) data sets were utilized as the main data source for mapping the co-seismic surface deformation. The results show that fringes of phase difference appeared in the border region as a result of the movement detected with the interferometric technique.
On the other hand, a dominant fringe pattern also appears near the coastal area; this is consistent with the field investigations carried out two days after the earthquake. However, the study also has limitations due to resolution and atmospheric artefacts in the SAR interferograms. The atmospheric artefacts are caused by changes in the atmospheric refractive index of the medium, which limits the ability to produce a coherent image. Low coherence, in turn, affects the generation of fringes, from which movement is detected. The spatial resolution of the Sentinel satellite has not been sufficient for studying land surface deformation in this area. Further studies will therefore investigate both ALOS and TerraSAR-X, which offer improved spatial resolution over the Sentinel SAR satellite. Keywords: earthquake, InSAR, interferometric, Sentinel-1A
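The core interferogram operation behind tools like SNAP is simple: multiply the master image by the complex conjugate of the co-registered slave, and the angle of the product is the wrapped phase difference whose 2π cycles appear as fringes. The sketch below demonstrates this on a synthetic pair; the scene size, amplitudes, and deformation ramp are invented, and real processing adds co-registration, flattening, and filtering.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Synthetic single-look complex images: the slave carries an extra phase ramp
# standing in for line-of-sight deformation between the two acquisitions.
amplitude = rng.rayleigh(1.0, shape)
base_phase = rng.uniform(-np.pi, np.pi, shape)
defo_phase = np.linspace(0, 4 * np.pi, shape[1])[None, :] * np.ones(shape)

master = amplitude * np.exp(1j * base_phase)
slave = amplitude * np.exp(1j * (base_phase + defo_phase))

# Interferogram: master times conjugate of slave; its angle is the phase difference.
interferogram = master * np.conj(slave)
fringes = np.angle(interferogram)  # wrapped to (-pi, pi]; each 2π cycle is one fringe
print(fringes.shape)  # (64, 64)
```

Because the common scattering phase cancels in the product, only the deformation ramp survives, which is why two fringes (two 2π cycles) cross the synthetic scene here.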
Procedia PDF Downloads 19626381 Investigation of Enterotoxigenic Staphylococcus aureus in Kitchen of Catering
Authors: Çiğdem Sezer, Aksem Aksoy, Leyla Vatansever
Abstract:
This study was carried out to evaluate the public health risk and to identify enterotoxigenic Staphylococcus aureus in a catering kitchen. In the catering kitchen, samples were taken by swab from the surfaces of equipment in the salad section, meat section, and bakery section. The samples were investigated for Staphylococcus aureus with classical cultural methods. A 10x10 cm area was identified on each surface (salad cutting and chopping surfaces, knives, meat grinder, meat chopping surface), and samples were taken from this area with sterile swabs with the help of FTS. In total, 50 samples were obtained. Under aseptic conditions, the surface of Baird-Parker agar (with egg yolk tellurite) was seeded with the swabs. After 24-48 hours of incubation at 37°C, black colonies of 1-1.5 mm diameter surrounded by a zone indicating lecithinase activity were identified as S. aureus after applying Gram staining, catalase, coagulase, glucose and mannitol fermentation, and thermonuclease tests. Genotypic characterization (Staphylococcus genus- and S. aureus species-specific) of the isolates was performed by PCR. An ELISA test was applied to the isolates for the identification of staphylococcal enterotoxins (SET) A, B, C, D, and E in bacterial cultures. Measurements were taken at 450 nm in an ELISA reader using a Ridascreen-Total set ELISA test kit (r-biopharm R4105-Enterotoxin A, B, C, D, E). The results were calculated according to the manufacturer’s instructions. From the 50 samples, 97 S. aureus isolates were obtained. Of these, 60 were confirmed by PCR analysis. According to the ELISA test, only 1 of the 60 isolates was found to be enterotoxigenic. The enterotoxigenic strain was identified from the salad chopping and cutting surface. The identification of S. aureus in the catering kitchen indicates a significant source of contamination, which is especially important during the preparation of salads that are consumed raw.
Such foods can be a potential source of food-borne poisoning and pose a significant risk to consumers. Keywords: Staphylococcus aureus, enterotoxin, catering, kitchen, health
Procedia PDF Downloads 40226380 Board Composition and Performance of Listed Deposit Money Banks in Nigeria
Authors: Mary David, Denis Basila
Abstract:
This study assessed the impact of board composition on the performance of listed deposit money banks in Nigeria. A sample of ten (10) deposit money banks was used for the study. Board size, gender diversity, and board independence were used as the independent variables, with firm size as a control variable, while bank performance was proxied by Tobin’s Q (TQ) as the dependent variable. Secondary data were collected from the annual reports and accounts of the banks and analyzed with the support of STATA version 14. Descriptive statistics, a correlation matrix, and an OLS multiple regression model were adopted for the study. The Breusch-Pagan Lagrangian multiplier test for random effects was conducted. The findings of the study reveal that board size has a positive and significant impact on Tobin’s Q, and gender diversity has a positive and significant impact on Tobin’s Q, while board independence had a negative and nonsignificant influence on Tobin’s Q. Similarly, firm size was found to have a negative and nonsignificant impact on the Tobin’s Q of the studied banks. This study recommends that policy makers, stakeholders, and corporate managers of deposit money banks in Nigeria and related industries adopt board sizes and gender diversity that impact positively on bank performance. Keywords: board composition, performance, deposit money banks, nigeria
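The OLS specification described above regresses Tobin's Q on the three board variables plus the firm-size control. The sketch below simulates a bank panel whose coefficient signs match the reported findings and recovers them by least squares; the sample, variable ranges, and coefficients are illustrative inventions, not the study's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10 * 6  # ten banks, six hypothetical years of observations

board_size = rng.integers(6, 16, n).astype(float)
gender_div = rng.uniform(0.0, 0.5, n)      # share of women on the board
board_indep = rng.uniform(0.2, 0.8, n)     # share of independent directors
firm_size = rng.normal(8.0, 1.0, n)        # log total assets (control variable)

# Simulated Tobin's Q consistent with the reported signs (illustrative only).
tq = (0.5 + 0.03 * board_size + 0.8 * gender_div
      - 0.1 * board_indep - 0.02 * firm_size + rng.normal(0, 0.1, n))

# OLS: regress Tobin's Q on an intercept and the four regressors.
X = np.column_stack([np.ones(n), board_size, gender_div, board_indep, firm_size])
beta, *_ = np.linalg.lstsq(X, tq, rcond=None)
print(np.round(beta[1:3], 2))  # board size and gender diversity coefficients
```

A random-effects panel estimator, as suggested by the Breusch-Pagan test in the study, would additionally model the bank-level error component that plain OLS ignores.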
Procedia PDF Downloads 7326379 Development of Management System of the Experience of Defensive Modeling and Simulation by Data Mining Approach
Authors: D. Nam Kim, D. Jin Kim, Jeonghwan Jeon
Abstract:
Defensive Modeling and Simulation (M&S) is a system that enables otherwise impracticable training by reducing the constraints of time, space, and financial resources. The necessity of defensive M&S has been increasing not only for education and training but also for virtual combat. Soldiers who use defensive M&S for education and training obtain empirical knowledge and know-how. However, the knowledge obtained by individual soldiers has not yet been managed and utilized, owing to the nature of military organizations: confidentiality and frequent change of members. Therefore, this study aims to develop a management system for the experience of defensive M&S based on a data mining approach. Since the individual empirical knowledge gained through using defensive M&S is both quantitative and qualitative, a data mining approach is appropriate for dealing with it. This research is expected to be helpful for soldiers and military policy makers. Keywords: data mining, defensive m&s, management system, knowledge management
Procedia PDF Downloads 254