Search results for: data comparison
26786 Optimizing Electric Vehicle Charging with Charging Data Analytics
Authors: Tayyibah Khanam, Mohammad Saad Alam, Sanchari Deb, Yasser Rafat
Abstract:
Electric vehicles are considered viable replacements for gasoline cars since they help reduce harmful emissions and stimulate power generation from renewable energy sources, thereby contributing to sustainability. However, one of the significant obstacles to the mass deployment of electric vehicles is charging-time anxiety among users and the consequent long waiting times for available chargers at charging stations. Data analytics, on the other hand, has revolutionized the decision-making tasks of management and operating systems since its arrival. In this paper, we attempt to optimize the choice of EV charging station for users in their vicinity by minimizing the time taken to reach the charging station and the waiting time for an available charger. Travel time to the charging station is calculated with the Google Maps API, and waiting times are predicted by polynomial regression on the stored historical data. The proposed framework utilizes real-time and historical data from all operating charging stations in the city, assists the user in finding the charging station best suited to their current situation, and can be implemented as a mobile phone application. The algorithm successfully predicts the optimal choice of charging station and the minimum required time for various sample data sets.
Keywords: charging data, electric vehicles, machine learning, waiting times
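The station-selection step this abstract describes (predicted wait from a polynomial fit to historical data, plus travel time) can be sketched as follows. The station names, historical records, travel times, and the degree-2 fit are illustrative assumptions, not the paper's actual data:

```python
# Hypothetical sketch: fit a polynomial to each station's historical
# (hour-of-day, waiting-minutes) records, then pick the station that
# minimizes travel time + predicted waiting time.

def polyfit(xs, ys, degree=2):
    """Least-squares polynomial fit via the normal equations (pure Python)."""
    n = degree + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with back-substitution.
    for col in range(n):
        pivot = ata[col][col]
        for r in range(col + 1, n):
            f = ata[r][col] / pivot
            ata[r] = [a - f * b for a, b in zip(ata[r], ata[col])]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (aty[r] - sum(ata[r][c] * coeffs[c]
                                  for c in range(r + 1, n))) / ata[r][r]
    return coeffs  # c0 + c1*x + c2*x^2

def predicted_wait(coeffs, hour):
    return sum(c * hour ** i for i, c in enumerate(coeffs))

# Made-up historical (hour, waiting-minutes) records per station.
history = {
    "station_A": ([8, 12, 18, 20], [5.0, 20.0, 35.0, 30.0]),
    "station_B": ([8, 12, 18, 20], [15.0, 10.0, 12.0, 18.0]),
}
travel_minutes = {"station_A": 7.0, "station_B": 15.0}

def best_station(hour):
    scores = {}
    for name, (hs, ws) in history.items():
        scores[name] = travel_minutes[name] + predicted_wait(polyfit(hs, ws), hour)
    return min(scores, key=scores.get)
```

In a deployed app, `travel_minutes` would come from the Google Maps API rather than a hard-coded table.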
Procedia PDF Downloads 196
26785 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data
Authors: R. Shamsi, F. Sharifi
Abstract:
In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using the multi-objective programming (MOP) structure. In some problems the inputs may be stochastic while the outputs are deterministic, and vice versa. For such cases we propose a multi-objective DEA-R model, because in some situations (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) a decision-making unit (DMU) that is actually efficient is classified as inefficient by the BCC model, whereas the same DMU is considered efficient by the DEA-R model. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). We therefore provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model under constant returns to scale can be replaced by the MOP-DEA model without explicit outputs under variable returns to scale, and vice versa. Solving the proposed model with interactive methods yields a projection corresponding to the viewpoints of the decision maker and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis
Procedia PDF Downloads 106
26784 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially for the Spanish language. In such low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches thus become necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. The semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five; a refinement of the weight matrices plus bias terms and Stochastic Gradient Descent (SGD) training were also performed, with the cross-entropy criterion as the objective function.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was measured by the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out to select the best ASR results, and a comparison between a GMM-based model without retraining and the proposed DNN system was made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases; the best result was an improvement of 6% relative WER. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
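Step (c), selecting decoded utterances whose lattice-based confidence passes a threshold before retraining, can be sketched as follows. The mapping from cost to confidence, the weights, the threshold, and the example utterances are illustrative assumptions, not the paper's actual metrics:

```python
# Hedged sketch of confidence-based selection for semi-supervised retraining.
# Lattice costs are "lower is better"; we map a weighted combination of the
# graph cost and the acoustic cost to a (0, 1] confidence.

def combined_confidence(graph_cost, acoustic_cost, alpha=0.5):
    """Turn a weighted lattice cost into a confidence in (0, 1]."""
    cost = alpha * graph_cost + (1.0 - alpha) * acoustic_cost
    return 1.0 / (1.0 + cost)

def select_for_retraining(hypotheses, threshold=0.2):
    """Keep (utterance_id, transcript) pairs whose confidence passes the threshold."""
    selected = []
    for utt_id, transcript, g_cost, a_cost in hypotheses:
        if combined_confidence(g_cost, a_cost) >= threshold:
            selected.append((utt_id, transcript))
    return selected

# Made-up decoding output: (id, hypothesis, graph cost, acoustic cost).
decoded = [
    ("utt1", "hola buenos dias", 1.2, 0.8),  # confident -> kept
    ("utt2", "no se entiende", 8.0, 6.0),    # low confidence -> discarded
]
```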
Procedia PDF Downloads 340
26783 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing is an important and promising field of the recent decade. Cloud computing allows sharing resources, services, and information among people all over the world. Although the advantages of using clouds are great, there are many risks in a cloud, and data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For user authentication, a simple username and password scheme is used; the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish
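The SHA-2 side of such a model can be illustrated with the standard library alone: a file-integrity digest and one-way password storage. The Blowfish exchange step would require a third-party library (e.g., PyCryptodome) and is omitted here; the salting shown is our addition, not stated in the abstract:

```python
# Hedged sketch of the SHA-2 parts of the model: SHA-256 for integrity
# checking of uploaded/downloaded files and for one-way password hashing.
import hashlib
import os

def integrity_digest(data):
    """SHA-256 digest stored alongside the uploaded file."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data, digest):
    """Recompute the digest on download and compare."""
    return hashlib.sha256(data).hexdigest() == digest

def hash_password(password, salt=None):
    """One-way salted SHA-256 password hash (salting is our assumption)."""
    salt = salt if salt is not None else os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()
```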
Procedia PDF Downloads 359
26782 Using Machine Learning as an Alternative for Predicting Exchange Rates
Authors: Pedro Paulo Galindo Francisco, Eli Dhadad Junior
Abstract:
This study addresses the Meese-Rogoff puzzle by introducing the latest machine learning techniques as alternatives for predicting exchange rates. Using RMSE as the comparison metric, Meese and Rogoff found that economic models are unable to outperform the random walk model as short-term exchange rate predictors. In the decades since that study, no statistical prediction technique has proven effective in overcoming this obstacle; although there have been positive results, they did not apply to all currencies and sample periods. Recent advancements in artificial intelligence have paved the way for a new approach to exchange rate prediction. Leveraging this technology, we applied five machine learning techniques in an attempt to overcome the Meese-Rogoff puzzle. We considered daily data for the real, yen, British pound, euro, and Chinese yuan against the US dollar over the period 2010 to 2023. Our results showed that none of the presented techniques produced an RMSE lower than the random walk model. However, some models, particularly LSTM and N-BEATS, were able to outperform the ARIMA model. The results also suggest that machine learning models have untapped potential and could represent an effective long-term possibility for overcoming the Meese-Rogoff puzzle.
Keywords: exchange rate, prediction, machine learning, deep learning
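The evaluation protocol in this line of work reduces to comparing one-step-ahead RMSEs against the random-walk benchmark, which predicts tomorrow's rate as today's rate. A minimal sketch on a synthetic series (the rates and the "model" forecast are made up):

```python
# RMSE comparison of a candidate forecast against the random-walk benchmark.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

rates = [5.10, 5.12, 5.08, 5.15, 5.20, 5.18]  # made-up daily FX rates

# One-step-ahead targets; the random walk simply repeats the previous value.
targets = rates[1:]
random_walk = rates[:-1]

# A hypothetical model forecast for the same days.
model_forecast = [5.13, 5.05, 5.18, 5.22, 5.15]

rw_rmse = rmse(targets, random_walk)
model_rmse = rmse(targets, model_forecast)
beats_random_walk = model_rmse < rw_rmse
```

The Meese-Rogoff finding is precisely that, for real exchange rates and genuine out-of-sample forecasts, `beats_random_walk` almost never ends up true.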
Procedia PDF Downloads 32
26781 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network
Authors: Li Qingjian, Li Ke, He Chun, Huang Yong
Abstract:
In this paper, a method combining a deep belief network with a self-organizing neural network is proposed to classify targets. The method is mainly aimed at the high nonlinearity of hyperspectral images, the high sample dimensionality, and the difficulty of designing a classifier. The main features of the original data are extracted by the deep belief network; during feature extraction, known labeled samples are added to fine-tune the network, enriching the main characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method effectively reduces the spectral dimensionality of the data while preserving most of the information in the raw data, addresses the long training times of traditional clustering and of deep learning algorithms when labeled samples are scarce, and improves classification accuracy and robustness. Data simulations show that the proposed network structure achieves higher classification precision when only a small number of labeled samples is available.
Keywords: DBN, SOM, pattern classification, hyperspectral, data compression
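The SOM stage of such a pipeline can be illustrated with a minimal one-dimensional self-organizing map in pure Python. The learning rate, neighborhood scheme, and two-dimensional "feature vectors" below are illustrative assumptions; a real implementation would consume the DBN-extracted features:

```python
# Toy 1-D self-organizing map: each feature vector is assigned to the
# best-matching unit (BMU), and units are trained online with a decaying
# learning rate and a simple nearest-neighbor neighborhood.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_som(samples, n_units=3, epochs=20, lr0=0.5):
    # Initialize unit weights from the first samples.
    units = [list(samples[i % len(samples)]) for i in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        for s in samples:
            bmu = min(range(n_units), key=lambda u: dist(units[u], s))
            for u in range(n_units):
                # BMU fully updated, direct neighbors updated at half strength.
                h = 1.0 if u == bmu else (0.5 if abs(u - bmu) == 1 else 0.0)
                units[u] = [w + lr * h * (x - w) for w, x in zip(units[u], s)]
    return units

def classify(units, sample):
    """Cluster label = index of the nearest unit."""
    return min(range(len(units)), key=lambda u: dist(units[u], sample))
```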
Procedia PDF Downloads 341
26780 A Novel Algorithm for Production Scheduling
Authors: Ali Mohammadi Bolban Abad, Fariborz Ahmadi
Abstract:
Optimization in manufacturing is a method of using limited resources to obtain the best performance and reduce waste. In this paper, a new algorithm based on the eurygaster life cycle is introduced to obtain a plan in which the task order and completion time of resources are defined. Evaluation results show that our approach yields a smaller makespan than a genetic algorithm when the resources are allocated to the same products.
Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, NP-hard problems, production scheduling
Procedia PDF Downloads 378
26779 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep data acquisition costs low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, the impact of data augmentation on model performance must be evaluated. In this paper, we evaluated the most common currently available 2D data augmentation techniques on a standard convolutional network trained to recognize humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combinations. The results showed that the augmented models perform 1-3% better than the base network. However, since the augmented images contain only the human parts already visible in the original images, a new data augmentation approach is needed to include the occluded parts of the human body. We therefore suggest a new method that employs simulated 3D human models to generate new data for training the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
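Two of the listed augmentations, horizontal flipping and 90-degree rotation, are sketched below on a nested-list grayscale "image". Real pipelines operate on arrays or tensors with library routines, so this is purely illustrative:

```python
# Minimal 2D augmentations on a list-of-lists image.

def hflip(img):
    """Mirror each row left-to-right (horizontal flip)."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Original image plus flipped, 90-degree, and 180-degree variants."""
    return [img, hflip(img), rot90(img), rot90(rot90(img))]
```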
Procedia PDF Downloads 96
26778 Correlation Between Hydrogen Charging and Charpy Impact of 4340 Steel
Authors: J. Alcisto, M. Papakyriakou, J. Guerra, A. Dominguez, M. Miller, J. Foyos, E. Jones, N. Ula, M. Hahn, L. Zeng, Y. Li, O. S. Es-Said
Abstract:
Current methods of testing for hydrogen charging are slow and time-consuming. The objective of this paper was to determine whether hydrogen charging can be detected quantitatively through Charpy impact (CI) testing, which is a much faster and simpler process than current methods. Steel plates were electro-discharge machined (EDM) into ninety-six 4340 steel CI samples and forty-eight tensile bars. All samples were heat treated at 900°C to form austenite and then rapidly quenched in water to form martensite. The samples were tempered to eight target strengths (145, 160, 170, 180, 190, 205, 220, and 250 ksi, thousands of pounds per square inch) at the corresponding tempering temperatures (1100, 1013, 956, 898, 840, 754, 667, and 494 degrees Fahrenheit). After a tedious process of grinding and machining V-notches into the Charpy samples, they were divided into four groups. One group was kept as an uncharged baseline for comparison, while the other three groups were sent to Alcoa (Fasteners) Inc. in Torrance to be cadmium coated at three thicknesses (2, 3, and 5 mils); that is, the samples were charged with ascending hydrogen levels. The samples were CI tested and tensile tested, and the data were tabulated and compared to the baseline group of uncharged samples of the same material. The results of this study indicated that CI testing was able to detect hydrogen charging quantitatively.
Keywords: Charpy impact toughness, hydrogen charging, 4340 steel, electro-discharge machining (EDM)
Procedia PDF Downloads 298
26777 Comparison of Rainfall Trends in the Western Ghats and Coastal Region of Karnataka, India
Authors: Vinay C. Doranalu, Amba Shetty
Abstract:
In recent times, due to climate change, there is large variation in the spatial distribution of daily rainfall even within a small region. Rainfall is one of the main climatic variables that affect spatio-temporal patterns of water availability. The real task posed by climate change is the identification, estimation, and understanding of the uncertainty of rainfall. This study analyzes the spatial variation and temporal trends of daily precipitation using high-resolution (0.25° x 0.25°) gridded data from the Indian Meteorological Department (IMD). For the study, 38 grid points were selected in the study area, and the daily precipitation time series (113 years, 1901-2013) was analyzed at each. Grid points were divided into two zones based on elevation and location: Low Land (exposed to the sea, low elevation; the coastal region) and High Land (inland, high elevation; the Western Ghats). The time series at each grid point was examined with the non-parametric Mann-Kendall test and the Theil-Sen estimator to determine the nature of the trend and the magnitude of its slope. The Pettitt-Mann-Whitney test was applied to detect the most probable change point in the time period. The results reveal remarkable monotonic trends in daily precipitation at each grid point. Regional cluster analysis shows an increasing precipitation trend in the shoreline region and a decreasing trend in the Western Ghats in recent years. The spatial distribution of rainfall can be partly explained by heterogeneity in the temporal trends revealed by change-point analysis. The Mann-Kendall test shows significant variation, with weaker rainfall over the eastern parts of the Western Ghats region of Karnataka.
Keywords: change point analysis, coastal region India, gridded rainfall data, non-parametric
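The two trend statistics used here are standard and compact enough to sketch in pure Python for a single grid point's series (the version below omits the tie correction in the Mann-Kendall variance, so it assumes no tied values):

```python
# Mann-Kendall trend test and Theil-Sen slope for a univariate series.
import math

def mann_kendall(x):
    """Return the S statistic and the normal-approximation Z score
    (no tie correction; valid for a series without tied values)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

def theil_sen_slope(x):
    """Median of all pairwise slopes: trend magnitude per time step."""
    slopes = sorted((x[j] - x[i]) / (j - i)
                    for i in range(len(x) - 1) for j in range(i + 1, len(x)))
    m = len(slopes)
    mid = m // 2
    return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])
```

A |Z| above 1.96 indicates a trend significant at the 5% level; the sign of S (and of the Sen slope) gives its direction.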
Procedia PDF Downloads 295
26776 Comparison of Anthropometric Measurements Between Handball and Basketball Female Players
Authors: Jasmina Pluncevic Gligoroska, Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska
Abstract:
Introduction: Anthropometric measurements are an integral part of the regular medical examination of athletes. In addition to quantifying the size of the body, these measurements indicate the quality of physical status because of their association with sports performance. The purpose of this study was to examine whether there are differences in anthropometric parameters and body mass components between female athletes who participate in two different sports. Methods: A total of 27 athletes, 15 handball players and 12 basketball players, with an average age of 22.7 years (range 17 to 30 years), entered the study. The anthropometric method of Matiegka was used to determine body components. Sixteen anthropometric measures were taken: height, weight, four joint diameters, four limb circumferences, and six skinfolds. Results: Handball players were 169.6±6.7 cm tall and weighed 63.75±7.5 kg. Their average relative muscle mass (absolute mass in kg) was 51% (32.5 kg), the bone component was 16.8% (10.7 kg), and the fat component was 14.3% (7.74 kg). Basketball players were 177.4±8.2 cm tall and weighed 70.37±12.1 kg. Their average relative muscle mass (absolute mass in kg) was 51.9% (36.6 kg), the bone component was 16.37% (11.5 kg), and the fat component was 15.36% (9.4 kg). The comparison of anthropometric values showed that basketball players were significantly taller and heavier than handball players (p<0.05). A statistically significant difference (p<0.05) was also observed in the upper leg circumference and the forearm skinfold (both higher in the basketball players). Conclusion: Handball and basketball players differed significantly in the basic anthropometric measures (height and weight), but the body components had almost identical values. Despite the different physical demands of the two games, the remaining anthropometric measurements did not differ significantly between handball and basketball female players.
Keywords: anthropometry, body components, basketball, handball female players
Procedia PDF Downloads 463
26775 Emotional Artificial Intelligence and the Right to Privacy
Authors: Emine Akar
Abstract:
The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store "big data", not only require recalibration of these traditional understandings but may require re-thinking of entire categories of privacy law. Against the background of various emerging technologies under the umbrella term "emotional artificial intelligence", the presentation explains why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts traditionally regarded as the underlying foundation of privacy protection, such as dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysis of such data but, concerningly, to exponential growth in its collection. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.
Keywords: AI, privacy law, data protection, big data
Procedia PDF Downloads 88
26774 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform
Authors: Reza Mohammadzadeh
Abstract:
The major challenges in geotechnical engineering in underground spaces arise from uncertainties and different probabilities. The collection, collation, and collaboration of existing data, so that they can be incorporated into analysis and design for a given prospect evaluation, is a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence in statistical science that applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming) to data in order to learn and improve from them automatically, without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining, based on a cloud system architecture, has been designed. A new risk assessment approach using a three-dimensional risk matrix supported by the level of knowledge (LoK) is proposed within this model, and the stages of the model's workflow methodology are described. To train the data and deploy the LoK models, an ML platform has been implemented: IBM Watson Studio, a leading data science tool and data-driven cloud-integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results are outlined accordingly.
Keywords: data model, geotechnical risks, machine learning, underground coal mining
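One plausible reading of a "three-dimensional risk matrix supported by LoK" is a conventional likelihood-consequence score extended by a knowledge axis that inflates risk where knowledge about the hazard is poor. The scales, multipliers, and class thresholds below are entirely our illustrative assumptions, not the paper's calibration:

```python
# Hypothetical 3-D risk scoring: likelihood x consequence, scaled by a
# level-of-knowledge (LoK) factor; poorer knowledge raises the score.

LOK_FACTOR = {"high": 1.0, "medium": 1.25, "low": 1.5}

def risk_score(likelihood, consequence, lok):
    """likelihood and consequence on 1-5 scales; lok is a LOK_FACTOR key."""
    return likelihood * consequence * LOK_FACTOR[lok]

def risk_class(score):
    """Bucket the score into the usual traffic-light classes."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a roof-fall hazard rated 3 x 4 with poor knowledge of the rock mass.
score = risk_score(3, 4, "low")  # 3 * 4 * 1.5 = 18.0
```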
Procedia PDF Downloads 274
26773 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method
Authors: Anung Style Bukhori, Ani Dijah Rahajoe
Abstract:
Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address the issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying the data using RapidMiner, a powerful data analysis platform. The analysis process involves splitting the data to train and test the classification model. First, we collected and prepared a poverty dataset that includes various factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide predictions regarding the risk of poverty, and the use of RapidMiner offers flexibility and efficiency in evaluating the model's performance. The classification produces several values that serve as the standard for classifying poverty data in Indonesia using Naïve Bayes: the overall accuracy is 40.26%, with recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class; precision is 58.97% for the moderate class, 17.39% for the high class, and 58.70% for the low class.
Keywords: poverty, classification, naïve bayes, Indonesia
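The per-class recall and precision figures reported above are computed from a confusion matrix in the standard way; the matrix below is a small made-up example, not the paper's actual counts for the low/moderate/high classes:

```python
# Per-class recall and precision from a multi-class confusion matrix.

def per_class_metrics(confusion, labels):
    """confusion[i][j] = count of true class i predicted as class j."""
    metrics = {}
    for i, label in enumerate(labels):
        tp = confusion[i][i]
        actual = sum(confusion[i])                   # row total (true count)
        predicted = sum(row[i] for row in confusion)  # column total
        metrics[label] = {
            "recall": tp / actual if actual else 0.0,
            "precision": tp / predicted if predicted else 0.0,
        }
    return metrics

labels = ["low", "moderate", "high"]
confusion = [
    [30, 15, 5],   # true low
    [20, 25, 15],  # true moderate
    [5, 10, 10],   # true high
]
metrics = per_class_metrics(confusion, labels)
accuracy = sum(confusion[i][i] for i in range(3)) / sum(map(sum, confusion))
```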
Procedia PDF Downloads 57
26772 Sorghum Resilience and Sustainability under Limiting and Non-limiting Conditions of Water and Nitrogen
Authors: Muhammad Tanveer Altaf, Mehmet Bedir, Waqas Liaqat, Gönül Cömertpay, Volkan Çatalkaya, Celaluddin Barutçular, Nergiz Çoban, Ibrahim Cerit, Muhammad Azhar Nadeem, Tolga Karaköy, Faheem Shehzad Baloch
Abstract:
Food production needs to almost double by 2050 in order to feed around 9 billion people around the globe. Plant production relies mostly on fertilizers, which also play a major role in environmental pollution. In addition, climatic conditions are unpredictable, and the earth is expected to face severe drought in the future. Therefore, water and fertilizers, especially nitrogen, are considered the main constraints on future food security. To face these challenges, developing integrative approaches for germplasm characterization and selecting resilient genotypes that perform under limiting conditions is crucial for effective breeding to meet food requirements under climate change scenarios. This study is part of a European Research Area Network (ERANET) project for the characterization of a diversity panel of 172 sorghum accessions and six hybrids as control cultivars under limiting (+N/-H2O, -N/+H2O) and non-limiting (+N/+H2O) conditions. The study was planned to characterize sorghum diversity in relation to resource use efficiency (RUE), with special attention to harnessing the genotype-by-environment (GxE) interaction from physiological and agronomic perspectives. Experiments were conducted at Adana, under a Mediterranean climate, with an augmented design, and data on various agronomic and physiological parameters were recorded. Plentiful diversity was observed in the panel, and significant variation was seen between the limiting water and nitrogen conditions and the control experiment. The genotypes performing best under limiting conditions were identified. Whole-genome resequencing was performed on the entire germplasm under investigation for diversity analysis. GWAS analysis will be performed using the genotypic and phenotypic data, and linked markers will be identified.
The results of this study will inform the adaptation and improvement of sorghum under climate change conditions for future food security.
Keywords: germplasm, sorghum, drought, nitrogen, resource use efficiency, sequencing
Procedia PDF Downloads 77
26771 Web Search Engine Based Naming Procedure for Independent Topic
Authors: Takahiro Nishigaki, Takashi Onoda
Abstract:
In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We proposed Independent Topic Analysis (ITA) to extract mutually independent topics from large document data such as newspaper archives. ITA extracts independent topics from the document data using Independent Component Analysis. A topic extracted by ITA is represented by a set of words. However, such a set of words can be quite different from the topic the user imagines. For example, the top five words with the highest independence for one topic might be Topic1 = {"scor", "game", "lead", "quarter", "rebound"}. Topic 1 presumably represents the topic "SPORTS", but this topic name has to be attached by the user: ITA cannot name topics. Therefore, in this research, we propose a method that uses a web search engine to obtain topic names that are easy for people to understand from the word sets produced by Independent Topic Analysis. In particular, we search for the set of topical words and take the title of the top-ranked page in the search results as the topic name. We also apply the proposed method to real data and verify its effectiveness.
Keywords: independent topic analysis, topic extraction, topic naming, web search engine
Procedia PDF Downloads 119
26770 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne laser scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered difficult and challenging because the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data, and the presented filter uses a weight function to allocate a weight to each point. Furthermore, unlike most methods, the presented filtering algorithm is designed to be fully automatic. Three forested areas in the United Kingdom were used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the method performs stably across all the heavily forested samples, with an average root mean square error (RMSE) of 0.35 m.
Keywords: airborne laser scanning, digital terrain models, filtering, forested areas
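The iterative weight-function idea can be sketched in one dimension with a much-simplified stand-in: fit a smooth surface to the point heights, down-weight points far above it (likely vegetation returns), refit, and finally keep points whose residual is small. A weighted moving average plays the role of the smoothing spline here, and the thresholds are illustrative assumptions:

```python
# Simplified 1-D ground filter: iterate (smooth -> down-weight high
# residuals -> re-smooth), then flag points close to the final surface
# as ground. Real ALS filters work on 2-D point clouds with splines.

def moving_average(zs, ws, window=3):
    """Weighted moving average as a crude smooth 'terrain' estimate."""
    n = len(zs)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        wsum = sum(ws[lo:hi])
        out.append(sum(w * z for w, z in zip(ws[lo:hi], zs[lo:hi])) / wsum)
    return out

def filter_ground(zs, iterations=5, threshold=1.0):
    """Return a per-point ground flag (True = ground)."""
    ws = [1.0] * len(zs)
    for _ in range(iterations):
        surface = moving_average(zs, ws)
        # Weight function: points well above the surface get near-zero weight.
        ws = [0.01 if z - s > threshold else 1.0 for z, s in zip(zs, surface)]
    surface = moving_average(zs, ws)
    return [z - s <= threshold for z, s in zip(zs, surface)]

# Flat terrain at about 0 m with two canopy returns at about 8 m.
profile = [0.1, 0.0, 8.0, 0.2, 8.2, 0.1, 0.0]
```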
Procedia PDF Downloads 139
26769 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis
Authors: Saleem Z. Ramadan
Abstract:
In this paper, a model is proposed to determine the life-distribution parameters of the useful-life region of PV systems, using a combination of non-parametric and linear regression analysis of the failure data of these systems. The results show that this method is dependable for analyzing failure-time data of such highly reliable systems when data are scarce.
Keywords: masking, bathtub model, reliability, non-parametric analysis, useful life
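One common way to combine non-parametric and linear-regression analysis for Weibull data is median-rank regression: median ranks give a non-parametric estimate of F(t_i), and regressing ln(-ln(1-F)) on ln(t) yields the shape and scale parameters. Whether this matches the paper's exact procedure is an assumption:

```python
# Median-rank regression for the two-parameter Weibull distribution.
import math

def weibull_mrr(failure_times):
    """Return (shape beta, scale eta) by median-rank regression."""
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    # Bernard's approximation to the median rank: F_i = (i - 0.3)/(n + 0.4).
    ys = [math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]
    # Ordinary least squares on y = beta*x - beta*ln(eta).
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    eta = math.exp(xbar - ybar / beta)
    return beta, eta
```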
Procedia PDF Downloads 562
26768 Preliminary Design of Maritime Energy Management System: Naval Architectural Approach to Resolve Recent Limitations
Authors: Seyong Jeong, Jinmo Park, Jinhyoun Park, Boram Kim, Kyoungsoo Ahn
Abstract:
Energy management in the maritime industry is being driven by economics and by new legislative actions taken by the International Maritime Organization (IMO) and the European Union (EU). In response, various performance monitoring methodologies and data collection practices have been examined by different stakeholders. While many operational and technological advancements are applicable, their adoption in the shipping industry remains limited. This slow uptake can be attributed to barriers such as data analysis problems, misreported data, and feedback problems. This study presents a conceptual design of an energy management system (EMS) and proposes a methodology to resolve the current limitations (e.g., data normalization using naval architectural evaluation, management of misreported data, and feedback from shore to ship through management of performance analysis history). We expect this system to enable even short-term charterers to assess ship performance properly and to implement sustainable fleet control.
Keywords: data normalization, energy management system, naval architectural evaluation, ship performance analysis
Procedia PDF Downloads 449
26767 Comparison of Volume of Fluid Model: Experimental and Empirical Results for Flows over Stacked Drop Manholes
Authors: Ramin Mansouri
Abstract:
A manhole is a structure installed where the flow direction or pipe diameter of sewer lines changes, as well as in steeply sloped areas, to reduce the flow velocity. In this study, the flow characteristics within a manhole structure have been investigated with a numerical model. Coarse, medium, and fine computational grids were used for the simulation. To simulate the flow, the k-ε turbulence models (standard, RNG, Realizable) and the k-ω models (standard, SST) were used. In addition, to find the best wall treatment, both standard and non-equilibrium wall functions were investigated. Of all the models, the k-ε turbulence model showed the highest correlation with the experimental results. For the boundary conditions, a constant velocity was set at the flow inlet, an outlet pressure was set at the boundaries in contact with air, and the standard wall function was used for the wall treatment. In the numerical model, the depth at the outlet of the second manhole is estimated to be less than that measured in the laboratory, as is the jet issuing from the span. In the second regime, the jet flow collides with the manhole wall and divides into two parts, so the hydraulic characteristics are the same as those of a large vertical shaft. In this situation, the turbulence is in a high range, and greater energy loss can be seen. According to the results, the energy loss in the numerical model is estimated at 9.359%, which is more than in the experimental data.
Keywords: manhole, energy, depreciation, turbulence model, wall function, flow
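The relative energy loss quoted above (9.359%) is, in open-channel practice, computed from the specific energy upstream and downstream of the structure. The abstract does not state its exact definition, but a standard form is:

```latex
\Delta E = E_1 - E_2
         = \left( y_1 + \frac{V_1^2}{2g} \right)
         - \left( y_2 + \frac{V_2^2}{2g} \right),
\qquad
\text{relative loss} = \frac{\Delta E}{E_1} \times 100\%
```

where $y$ is the flow depth, $V$ the mean velocity, and $g$ the gravitational acceleration at the upstream (1) and downstream (2) sections.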
Procedia PDF Downloads 82
26766 The Rational Mode of Affordable Housing Based on the Special Residence Space Form of City Village in Xiamen
Authors: Pingrong Liao
Abstract:
Currently, as China is in a stage of rapid urbanization, a large rural population has flowed into the cities, and solving the housing problem is urgent. Xiamen is a typical Chinese city characterized by high housing prices and low incomes. Because the government has failed to provide adequate public low-cost housing, a large number of immigrants dwell in the informal rental housing represented by the "city village". Comfortable housing is a prerequisite for the harmony and stability of the city. Therefore, taking the "city village" and affordable housing as the main objects of study, this paper analyzes the housing status, personnel distribution, and mobility of the "city villages" of Xiamen, and also carries out primary research on the residential form and on basic facilities such as commercial and property management services, in combination with the existing status of affordable housing in Xiamen. Finally, a summary and comparison are made in an attempt to provide references and experience for the construction and improvement of government-subsidized housing, so as to improve the residential quality of the urban poor. In this paper, the data and results are collated and quantified objectively on the basis of the relevant literature, the latest market data, and practical investigation, using the research methods of comparative study and case analysis. The informal rental housing, informal economy, and informal management of the "city village" as social-housing units fit the housing needs of the floating population in many ways, providing convenient and efficient conditions for the flow of people.
However, the existing urban housing in Xiamen has some drawbacks: the housing is unevenly distributed, the spatial form is monotonous, the allocation standard of public service facilities is not targeted to the subsidized population, and the property management system is imperfect and too costly. Therefore, this paper draws lessons from the informal model of the "city village" and finally puts forward some improvement strategies.
Keywords: urban problem, urban village, affordable housing, living mode, Xiamen constructing
Procedia PDF Downloads 245
26765 Geospatial Data Complexity in Electronic Airport Layout Plan
Authors: Shyam Parhi
Abstract:
The Airports GIS program collects airport data, validates and verifies it, and stores it in a specific database. Airports GIS allows authorized users to submit changes to airport data. The verified data are used to develop several engineering applications. One of these applications is the electronic Airport Layout Plan (eALP), whose primary aim is to move from the paper to the digital form of the ALP. The first phase of development of the eALP was completed recently, and it was tested for a few pilot-program airports across different regions. We conducted a gap analysis and noticed that a lot of development work is needed to fine-tune at least six mandatory sheets of the eALP. It is important to note that a significant amount of programming is needed to move from out-of-the-box ArcGIS to a heavily customized ArcGIS, which will be discussed. The capability of the ArcGIS viewer to display essential features, such as a runway or taxiway and the perpendicular distance between them, will be discussed. An enterprise-level workflow that incorporates the coordination process among different lines of business will be highlighted.
Keywords: geospatial data, geology, geographic information systems, aviation
Procedia PDF Downloads 416
26764 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising
Authors: Jianwei Ma, Diriba Gemechu
Abstract:
In seismic data processing, the attenuation of random noise is the basic step to improve data quality for the further application of seismic data in exploration and development in the gas and oil industries. The signal-to-noise ratio of the data also largely determines the quality of seismic data; this factor affects the reliability as well as the accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving the important features and information of the seismic signal, we introduce an anisotropic total fractional-order denoising algorithm. The anisotropic total fractional-order variation model, defined in the fractional-order bounded variation space, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional-order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.
Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm
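The split Bregman machinery referred to above can be sketched on the simplest related problem: 1-D anisotropic TV denoising, min_u (μ/2)||u-f||² + ||Du||₁. The fractional-order operator of the paper is replaced here by a plain first-order difference, so this is an illustration of the solver, not the proposed model:

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=2.0, lam=1.0, n_iter=100):
    """1-D anisotropic TV denoising via split Bregman:
    min_u (mu/2)*||u - f||^2 + ||D u||_1, with D a forward difference."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference operator
    A = mu * np.eye(n) + lam * D.T @ D        # normal equations for the u-step
    d = np.zeros(n - 1)                       # splitting variable d ~ D u
    b = np.zeros(n - 1)                       # Bregman variable
    u = f.copy()
    for _ in range(n_iter):
        # u-step: quadratic subproblem
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        # d-step: soft shrinkage with threshold 1/lam
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0)
        # Bregman update
        b = b + Du - d

    return u

# Deterministic demo: a step signal with oscillatory "noise".
clean = np.concatenate([np.zeros(20), np.ones(20)])
f = clean + 0.2 * np.cos(np.pi * np.arange(40))   # alternating +/-0.2
u = tv_denoise_split_bregman(f)
```

The shrinkage step is what makes the method fast: the non-smooth ||Du||₁ term is handled in closed form, while the u-step reduces to a linear solve.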
Procedia PDF Downloads 207
26763 Noise of Aircraft Flyovers Affects Reading Saccades
Authors: Svea Missfeldt, Rainer Höger
Abstract:
A number of studies show that aircraft noise around airports negatively affects the reading comprehension of children attending schools in the neighbourhood, yet little is known about the underlying mechanisms. Explanatory approaches discuss the attention-capturing effect of noise sources, which occupies mental capacity. Research suggests that attentional capacities are especially demanded when different modalities are involved at the same time. To explore whether aircraft noise affects reading processes in specific ways, students read texts under variable sound conditions while their eye movements were recorded. Besides the noise caused by aircraft flyovers, which represent moving sound sources, saccades were also recorded under white noise, a natural sound setting, and silence for comparison. The data showed an increase in regressive saccades when the sound of moving sources was presented. Interestingly, this effect was significantly stronger when the aircraft moved opposite to the reading direction. The latter result in particular is not compatible with the hypothesis of a general impairment of cognitive processes by noise, under which the direction of movement should have no influence. Reading is assumed to be based on two different attentional mechanisms, overt and covert attention, the latter supporting the control and pre-planning of eye movements during reading. We believe that covert attention is affected by moving sound sources, resulting in an enhanced number of backward-directed saccades.
Keywords: aircraft noise, attentional processes, cognition, eye movements, reading saccades
Procedia PDF Downloads 329
26762 A 20 Year Comparison of Australian Childhood Bicycle Injuries – Have We Made a Difference?
Authors: Bronwyn Griffin, Caroline Acton, Tona Gillen, Roy Kimble
Abstract:
Background: Bicycle riding is a common recreational activity enjoyed by many children throughout Australia and is associated with the usual benefits related to exercise and recreation. Given that Australia was the first country in the world to introduce cyclist helmet laws, in 1991, very few publications have reviewed paediatric cycling injuries (fatal or non-fatal) since. Objectives: To identify trends in children (0-16 years) who required admission for greater than 24 hours following a bicycle-related injury (fatal and non-fatal) in Queensland; further, to discuss changes in paediatric cycling injury trends in Queensland since a prominent local study published in 1995. This paper aims to establish evidence to inform interventions promoting safer riding to parents, children, and communities. Methods: Data on paediatric (0-16 years) cycling injuries in Queensland resulting in hospital admission for more than 24 hours across three tertiary paediatric hospitals in Brisbane between November 2008 and June 2015 were compiled from the Paediatric Trauma Data Registry for non-fatal injuries. The Child Death Review Team at the Queensland Family and Child Commission provided data on fatalities in children <17 years from June 2004 to June 2015. Trends were compared to the local study published in 1995. Results: Between 2008 and 2015, 197 patients were admitted for greater than 24 hours following a cycling injury. The median age was 11 years, and males (n=139, 87%) were more frequently involved than females. The mean length of stay was three days, 47 (28%) children were admitted to the PICU, and the location of injury was most often the street (n=63, 37%).
Between 2004 and 2015 there were 15 fatalities (incidence rate 0.25/100,000); all were male, 14/15 occurred on the street, eight were stated not to have been wearing a helmet, and 11/15 children came from the least advantaged socio-economic group (SEIFA). By comparison, the local 1995 publication reported 94 fatalities between 1981 and 1992. Conclusions: There has been a notable decrease in the incidence of fatalities between the two periods, with incidence rates dropping from 1.75 to 0.25 per 100,000. Further analysis is needed to ascertain whether this is a true reduction or perhaps reflects a decrease in children riding bicycles. Injuries occurring on the street and involving contact with a car remain of serious concern. The purpose of this paper is not to discourage bicycle riding among child and adolescent populations but rather to inform parents and the wider community about the risks associated with cycling in order to reduce injuries associated with this sport, whilst promoting safe cycling.
Keywords: paediatric, cycling, trauma, prevention, emergency
Procedia PDF Downloads 249
26761 Prevalence and Comparison for Detection Methods of Candida Species in Vaginal Specimens from Pregnant and Non-Pregnant Saudi Women
Authors: Yazeed Al-Sheikh
Abstract:
Pregnancy represents a risk factor for the occurrence of vulvovaginal candidiasis. To investigate the prevalence of vaginal carriage of Candida species in Saudi pregnant and non-pregnant women, 707 high vaginal swab (HVS) specimens were examined by direct microscopy (10% KOH and Giemsa staining) and cultured in parallel on Sabouraud Dextrose Agar (SDA) as well as on "CHROMagar Candida" medium. As expected, Candida-positive cultures were more frequently observed in the pregnant group (24%) than in the non-pregnant group (17%). The frequency of positive cultures was correlated with pregnancy (P=0.047), parity (P=0.001), use of contraceptives (P=0.146) or antibiotics (P=0.128), and diabetes (P < 0.0001). Of the 707 HVS specimens examined, 157 were yeast-culture positive (22%) on Sabouraud Dextrose Agar or "CHROMagar Candida". In comparison, the sensitivities of the direct 10% KOH and Giemsa-stain microscopic examination methods were 84% (132/157) and 95% (149/157) respectively, both with 100% specificity. As to the identity of the 157 recovered yeast isolates, based on API 20C carbohydrate-assimilation biotype, germ tube, and chlamydospore formation, C. albicans and C. glabrata constituted 80.3% and 12.7% respectively. The rates of C. tropicalis, C. kefyr, and C. famata or C. utilis were 2.6%, 1.3%, and 0.6% respectively. Saccharomyces cerevisiae and Rhodotorula mucilaginosa yeasts were also encountered, at frequencies of 1.3% and 0.6% respectively. Finally, among all 157 recovered yeast isolates, no strains resistant to ketoconazole were detected, whereas 5% of the C. albicans and as many as 55% of the non-albicans yeast isolates (the majority C. glabrata) showed resistance to fluconazole.
Our findings may prove helpful for the continuous determination of the causative species of vaginal candidiasis during pregnancy, its laboratory diagnosis and/or control, and possible measures to minimize the incidence of disease-associated pre-term delivery.
Keywords: vaginal candidiasis, Candida spp., pregnancy, risk factors, API 20C yeast biotypes, Giemsa stain, antifungal agents
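The sensitivity and specificity figures above follow directly from the reported counts, taking culture as the reference standard (the abstract implies this but does not state it). A quick check of the arithmetic:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of reference-positive specimens also detected by the test."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of reference-negative specimens correctly called negative."""
    return true_neg / (true_neg + false_pos)

# Figures from the abstract: 157 culture-positive specimens out of 707.
koh_sens = sensitivity(132, 157 - 132)     # 10% KOH direct microscopy
giemsa_sens = sensitivity(149, 157 - 149)  # Giemsa staining
spec = specificity(707 - 157, 0)           # 100% specificity => no false positives

print(round(koh_sens * 100), round(giemsa_sens * 100), round(spec * 100))
# -> 84 95 100
```

This reproduces the abstract's 84% (132/157) and 95% (149/157) sensitivities and the 100% specificity.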
Procedia PDF Downloads 241
26760 NSBS: Design of a Network Storage Backup System
Authors: Xinyan Zhang, Zhipeng Tan, Shan Fan
Abstract:
The first layer of defense against data loss is backup data. This paper implements an agent-based network backup system built on a tripartite construction of backup agent, storage server, and backup server, and realizes snapshots and a hierarchical index in the NSBS. It separates the control commands from the data flow and balances the system load, thereby improving the efficiency of system backup and recovery. The test results show that the agent-based network backup system can effectively improve task-based concurrency and reasonably allocate network bandwidth, that its backup performance overhead is smaller, and that it improves data recovery efficiency by 20%.
Keywords: agent, network backup system, three-architecture model, NSBS
Procedia PDF Downloads 459
26759 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The Italian experiences of the post-earthquake period, positive and negative, are conditioned by long timescales and structural bureaucratic constraints, motivated in part by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays that are incompatible with the need for a rapid recovery of the territories in crisis. In fact, intervening in areas affected by seismic events means associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process that was already underway before the earthquakes. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of the reconstruction of activities, sociality, services, and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology followed is a synthetic comparison between the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results can be summarized in technical comparison sheets on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, one certainly not related only to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial SUM (Minimal Urban Structure), an instrument recently applied in Italy, and the strategic monitoring process become dynamic tools for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspect of 'inverting' priorities in the reconstruction process, favouring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and of the rehabilitation and development of the most fragile places in Italy and abroad.
Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions as accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy
Procedia PDF Downloads 127
26758 A t-SNE and UMAP Based Neural Network Image Classification Algorithm
Authors: Shelby Simpson, William Stanley, Namir Naba, Xiaodi Wang
Abstract:
Both t-SNE and UMAP are state-of-the-art tools that predominantly preserve local structure, that is, they group neighboring data points together, which provides a very informative visualization of the heterogeneity in our data. In this research, we develop a t-SNE- and UMAP-based neural network image classification algorithm that embeds the original dataset into a corresponding low-dimensional dataset as a preprocessing step and then uses this embedded dataset as input to a specially designed neural network classifier for image classification. In our experiments we use the Fashion-MNIST data set, a labeled data set of images of clothing objects. t-SNE and UMAP are used for dimensionality reduction of the data set and thus produce low-dimensional embeddings. We then feed the embeddings from t-SNE and UMAP into two neural networks. The accuracy of the models from the two neural networks is compared to that of a dense neural network that does not use an embedding as input, to show which model classifies the images of clothing objects more accurately.
Keywords: t-SNE, UMAP, fashion MNIST, neural networks
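The embed-then-classify pipeline described above can be sketched as follows. Since the abstract gives neither its network architecture nor its embedding settings, this is a minimal stand-in: PCA (a linear embedding) takes the place of t-SNE/UMAP, a nearest-centroid rule takes the place of the neural classifier, and a tiny synthetic dataset substitutes for Fashion-MNIST.

```python
import numpy as np

def embed_pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal
    components -- a linear stand-in for the t-SNE/UMAP embedding step."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_centroid_fit(Z, y):
    """Per-class mean of the embedded points (stand-in for the classifier)."""
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, Z):
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in dists.argmin(axis=0)])

# Tiny synthetic "images": two well-separated classes in 16 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 16)), rng.normal(1, 0.1, (20, 16))])
y = np.array([0] * 20 + [1] * 20)

Z = embed_pca(X, 2)                  # dimensionality-reduction preprocessing
model = nearest_centroid_fit(Z, y)   # train on the low-dimensional embedding
acc = (nearest_centroid_predict(model, Z) == y).mean()
print(acc)
```

The comparison in the abstract amounts to swapping `embed_pca` for a t-SNE or UMAP embedding (or the identity, for the no-embedding baseline) while keeping the downstream classifier fixed.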
Procedia PDF Downloads 198
26757 The Increasing of Perception of Consumers’ Awareness about Sustainability Brands during Pandemic: A Multi Mediation Model
Authors: Silvia Platania, Martina Morando, Giuseppe Santisi
Abstract:
Introduction: For the last thirty years there has been constant talk of sustainable consumption and of a "transition" of consumer lifestyles towards greater awareness in consumer choices (United Nations, 1992). The COVID-19 pandemic, which has hit the world population since 2020, has had significant consequences in all areas of people's lives; individuals have been forced to change their behaviors, to redefine their own goals, priorities, practices, and lifestyles, and to rebuild themselves in the new situation dictated by the pandemic. Method (participants and procedure): The data were collected through an online survey, using convenience sampling from the general population. The participants were 669 Italian consumers (female = 514, 76.8%; male = 155, 23.2%) who choose sustainability brands, aged between 18 and 65 years (mean age = 35.45; standard deviation, SD = 9.51). (Measures): The following measures were used: the Muncy-Vitell Consumer Ethics Scale; the Attitude Toward Business Scale; the Perceived Consumer Effectiveness Scale; and Consumers' Perception of Sustainable Brand Attitudes. Results: Preliminary analyses were conducted to test our model. Pearson's bivariate correlations show that all variables of our model correlate significantly and positively, PCE with CPSBA (r = .56, p < .001). Furthermore, a CFA, according to Harman's single-factor test, was used to diagnose the extent to which common-method variance was a problem. A comparison between the hypothesised model and a model with one factor (with all items loading on a unique factor) revealed that the former provided a better fit for the data on all the CFA fit measures [χ² (6, n = 669) = 7.228, p = 0.024, χ²/df = 1.20, RMSEA = 0.07 (CI = 0.051-0.067), CFI = 0.95, GFI = 0.95, SRMR = 0.04, AIC = 66.501, BIC = 132.150]. Next, a multiple mediation analysis was conducted to test our hypotheses.
The results show that there is a direct effect of PCE on ethical consumption behavior (β = .38) and on ATB (β = .23); furthermore, there is a direct effect on the CPSBA outcome (β = .34). In addition, there is a mediating effect of ATB (C.I. = .022-.119, 95% confidence interval) and of CES (C.I. = .136-.328, 95% confidence interval). Conclusion: The spread of the COVID-19 pandemic has affected consumer consumption styles and has led to an increase in online shopping and in purchases of sustainable products. Several theoretical and practical considerations emerge from the results of the study.
Keywords: decision making, sustainability, pandemic, multi-mediation model
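The mediation logic tested above (PCE → ATB/CES → CPSBA) follows the classic product-of-coefficients approach: the indirect effect of a predictor through a mediator is the product of the predictor→mediator path (a) and the mediator→outcome path controlling for the predictor (b). A minimal sketch with ordinary least squares on synthetic data (the variable names are illustrative; this is not the study's data or model):

```python
import numpy as np

def ols(y, X):
    """OLS slope coefficients for y ~ 1 + X (intercept dropped from output)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

# Synthetic data with known paths: pce -> atb -> cpsba.
rng = np.random.default_rng(1)
pce = rng.normal(size=200)
atb = 0.5 * pce + rng.normal(scale=0.1, size=200)  # a-path = 0.5 (plus noise)
cpsba = 0.7 * atb + 0.2 * pce                      # b-path = 0.7, direct = 0.2

a = ols(atb, pce[:, None])[0]                        # predictor -> mediator
b, direct = ols(cpsba, np.column_stack([atb, pce]))  # outcome on mediator + predictor
indirect = a * b                                     # product-of-coefficients estimate
print(a, b, direct, indirect)  # approximately 0.5, 0.7, 0.2, 0.35
```

The confidence intervals reported in the abstract for the indirect effects would, in practice, come from bootstrapping this product, which is the standard approach in multiple mediation analysis.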
Procedia PDF Downloads 110