Search results for: STS benchmark dataset
450 Landslide Vulnerability Assessment in Context with Indian Himalayan
Authors: Neha Gupta
Abstract:
Landslide vulnerability is considered a crucial parameter for the assessment of landslide risk. The term vulnerability is defined as the degree of damage to elements at risk across different dimensions, i.e., physical, social, economic, and environmental. The Himalaya region is highly prone to multiple hazards such as floods, forest fires, earthquakes, and landslides. The increase in fatality rates and the loss of infrastructure and economy due to landslides in the Himalaya region motivate the assessment of vulnerability. This study presents a methodology to measure a combination of vulnerability dimensions, i.e., social vulnerability, physical vulnerability, and environmental vulnerability, in one framework. A combined assessment of these vulnerabilities has rarely been carried out, and no such approach had previously been applied in the Indian scenario. The methodology was applied to an area of east Sikkim Himalaya, India. The physical vulnerability comprises a building footprint layer extracted from remote sensing data and Google Earth imagery. The social vulnerability was assessed using population density based on land use. The land use map was derived from a high-resolution satellite image, and for the environmental vulnerability assessment, NDVI, forest, agricultural land, and distance from the river were assessed from remote sensing data and a DEM. The classes of social, physical, and environmental vulnerability were normalized on a scale of 0 (no loss) to 1 (loss) to obtain a homogeneous dataset. Multi-Criteria Analysis (MCA) was then used to assign individual weights to each dimension and integrate them into one framework. The final vulnerability was further classified into four classes from very low to very high.
Keywords: landslide, multi-criteria analysis, MCA, physical vulnerability, social vulnerability
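As a rough illustration of the weighted-overlay step described above, the sketch below combines normalized vulnerability layers with MCA-style weights. The layer values, the weights, and the class breaks are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

# Hypothetical normalized vulnerability rasters (values in [0, 1]),
# resampled to a common grid: social, physical, environmental.
social = np.random.rand(100, 100)
physical = np.random.rand(100, 100)
environmental = np.random.rand(100, 100)

# MCA-style weights for each dimension (assumed values; sum to 1).
weights = {"social": 0.4, "physical": 0.35, "environmental": 0.25}

combined = (weights["social"] * social
            + weights["physical"] * physical
            + weights["environmental"] * environmental)

# Classify the combined index into four classes, very low .. very high.
# The break points are illustrative quantiles, not the paper's thresholds.
breaks = np.quantile(combined, [0.25, 0.5, 0.75])
classes = np.digitize(combined, breaks)  # 0 = very low .. 3 = very high
```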
Procedia PDF Downloads 301
449 Rural Water Supply Services in India: Developing a Composite Summary Score
Authors: Mimi Roy, Sriroop Chaudhuri
Abstract:
Sustainable water supply is among the basic needs for human development, especially in the rural areas of developing nations, where safe water supply and basic sanitation infrastructure are direly needed. In light of the above, we propose a simple methodology to develop a composite water sustainability index (WSI) to assess the collective performance of the existing rural water supply services (RWSS) in India over time. The WSI will be computed by summarizing the details of all the different varieties of water supply schemes presently available in India, comprising 40 liters per capita per day (lpcd), 55 lpcd, and piped water supply (PWS) per household. The WSI will be computed annually, between 2010 and 2016, to elucidate changes in holistic RWSS performance. Results will be integrated within a robust geospatial framework to identify the 'hotspots' (states/districts) which have persistent issues over adequate RWSS coverage and warrant spatially-optimized policy reforms in the future to address sustainable human development. The dataset will be obtained from the National Rural Drinking Water Program (NRDWP), operating under the aegis of the Ministry of Drinking Water and Sanitation (MoDWS), at state/district/block levels to offer the authorities a cross-sectional view of RWSS at different levels of the administrative hierarchy. Due to its simple design, complemented by spatio-temporal cartograms, similar approaches can also be adopted in other parts of the world where RWSS need a thorough appraisal.
Keywords: rural water supply services, piped water supply, sustainability, composite index, spatial, drinking water
Procedia PDF Downloads 301
448 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially with the high demands of customers. Since mobile network operators are considered a source of big data, traditional techniques are not effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from long term evolution (LTE) to 5G. So, there is an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and the recurrent neural network (RNN) are employed for a data-driven application for mobile network operators. The main framework of these models includes identification of each model's parameters, estimation, prediction, and a final data-driven application of this prediction to business and network performance applications. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, assessed using specific well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
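For illustration, a minimal ARIMA forecasting sketch in the spirit of the pipeline described (identify the orders, fit, predict). The ARIMA order and the synthetic traffic series are assumptions, not values reported by the authors.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly traffic volume aggregated from CDRs.
rng = pd.date_range("2014-01-01", periods=500, freq="h")
traffic = pd.Series(100 + np.random.randn(500).cumsum(), index=rng)

# Identification: choose (p, d, q); here an assumed order for the sketch.
model = ARIMA(traffic, order=(2, 1, 2))
fitted = model.fit()

# Prediction: forecast the next 24 hours of demand.
forecast = fitted.forecast(steps=24)
print(forecast.head())
```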
Procedia PDF Downloads 140
447 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration
Authors: Smaran Manchala
Abstract:
Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through the implementation of vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially in smaller, less computationally demanding operations. To assess performance, two neural network models, an MLPN and a CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization yielded a 2.63X efficiency increase overall, with a 1.83X performance increase for x86-64 over the ARM architecture. Overall, these results suggest that the application of vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS, even while accounting for vector parallelization overhead, with potential impact on future zero-trust operations.
Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization
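The outlier-removal step described above can be sketched as follows; the runtime distribution and the |z| < 3 cutoff are assumptions for illustration, not the study's actual measurements.

```python
import numpy as np

# Hypothetical runtimes (ms) from 1,000 repeated encrypted-matrix tests.
runtimes = np.random.lognormal(mean=3.0, sigma=0.1, size=1000)

# Z-score filtering: drop runs more than 3 standard deviations from the mean.
z = (runtimes - runtimes.mean()) / runtimes.std()
kept = runtimes[np.abs(z) < 3]

print(f"kept {kept.size}/{runtimes.size} runs, mean {kept.mean():.2f} ms")
```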
Procedia PDF Downloads 27
446 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset, which comes from a different domain, causing limited accuracy in measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly small amount of data: clustered data, in which each cluster contains a set of visually similar images. In order to measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores, which are defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluation for finding exact items. The proposed method achieves an accuracy of 86.5%, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrieved images are likely to be similar products. Therefore, the proposed method can greatly reduce the amount of training data, by an order of magnitude, while providing a reliable similarity metric.
Keywords: visual search, deep learning, convolutional neural network, machine learning
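A minimal sketch of the distance computation described (features taken from an intermediate CNN layer, pooled, and compared). The backbone, the chosen layer, the pooling, and the file names are assumptions, not the authors' exact network.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumed backbone: a ResNet truncated before its classifier head, so the
# output is the globally average-pooled intermediate feature map.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_net = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_net.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def embed(path: str) -> torch.Tensor:
    """Extract a pooled intermediate-layer feature for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_net(x).flatten(1)  # shape (1, 512)

# Similarity between two catalogue images via cosine similarity.
a, b = embed("item_a.jpg"), embed("item_b.jpg")
score = torch.nn.functional.cosine_similarity(a, b).item()
print(f"similarity: {score:.3f}")
```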
Procedia PDF Downloads 216
445 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume of fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equation solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from the effect of wave nonlinearities, arising from the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory. Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, simulations of solitary wave propagation at an angle to the x-axis and of the interaction of solitary waves with each other are conducted to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
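For reference, the classical first-order solitary wave commonly used to initialise such benchmark tests (a textbook result quoted here for context, not reproduced from the paper) takes the form, for still-water depth $h$, wave amplitude $a$, and gravitational acceleration $g$:

$$\eta(x,t) = a\,\operatorname{sech}^{2}\!\left(\sqrt{\frac{3a}{4h^{3}}}\,(x - ct)\right), \qquad c = \sqrt{g(h + a)},$$

where $\eta$ is the free-surface elevation and $c$ the wave celerity; the amplitude-dependent celerity is what makes the propagation test sensitive to nonlinearity.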
Procedia PDF Downloads 287
444 The Research on Diesel Bus Emissions in Ulaanbaatar City: Mongolia
Authors: Tsetsegmaa A., Bayarsuren B., Altantsetseg Ts.
Abstract:
To make the best decisions on reducing harmful emissions from buses, we need a clear understanding of the current state of their actual emissions. The emissions from city buses running on high-sulfur fuel, particularly particulate matter (PM) and nitrogen oxides (NOx) in the exhaust gases of conventional diesel engines, have been studied and measured with and without a diesel particulate filter (DPF) in Ulaanbaatar city. The study was conducted using a PEMS (Portable Emissions Measurement System) and the gravimetric method in real traffic conditions. The obtained data were used to determine the actual emission rates and to evaluate the effectiveness of the selected particulate filters. Actual road and daily PM emissions from city buses were determined during the warm and cold seasons. A bus with an average daily mileage of 242 km was found to emit 166.155 g of PM into the city's atmosphere on average per day, with 141.3 g in summer and 175.8 g in winter. The actual PM emission factor of a city bus is 0.6866 g/km. The concentration of NOx in the exhaust gas averages 1410.94 ppm. The use of DPFs reduced the exhaust gas opacity of 24 buses by an average of 97% and filtered a total of 340.4 kg of soot from these buses over a period of six months. Retrofitting an old conventional diesel engine with a cassette-type silicon carbide (SiC) DPF, despite the laboriousness of cleaning, can significantly reduce particulate matter emissions. Innovation: the first comprehensive on-road PM and NOx emission dataset for public buses has been compiled, and actual road emissions have been identified. PM and NOx mathematical model equations have been estimated as functions of the bus technical speed and engine revolutions, with and without a DPF.
Keywords: conventional diesel, silicon carbide, real-time onboard measurements, particulate matter, diesel retrofit, fuel sulphur
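The reported per-kilometre factor follows directly from the daily figures; a quick check using the paper's own numbers:

```python
daily_pm_g = 166.155   # average daily PM emission per bus (g)
daily_km = 242         # average daily mileage (km)

print(f"PM factor: {daily_pm_g / daily_km:.4f} g/km")  # ~ 0.6866 g/km
```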
Procedia PDF Downloads 166
443 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot matrix as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3,287 images, with 658 synthetic images for testing, representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in the Arabic dot-matrix format, using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to other text recognition tasks, such as text classification and sentiment analysis.
Keywords: computer vision, pattern recognition, optical character recognition, deep learning
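A minimal sketch of the round-to-binary matching idea described above; the embedding width, the squashing assumption, and the digit codebook are hypothetical placeholders rather than the paper's trained values.

```python
import numpy as np

EMB_BITS = 16  # assumed embedding width

def to_binary(embedding: np.ndarray) -> np.ndarray:
    """Round a CNN embedding (values squashed to [0, 1]) to a binary code."""
    return (embedding >= 0.5).astype(np.uint8)

# Hypothetical codebook: one binary code per digit class 0..9.
rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(10, EMB_BITS), dtype=np.uint8)

def recognize(embedding: np.ndarray) -> int:
    """Return the digit whose code is nearest in Hamming distance."""
    code = to_binary(embedding)
    distances = (codebook ^ code).sum(axis=1)
    return int(distances.argmin())

print(recognize(rng.random(EMB_BITS)))
```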
Procedia PDF Downloads 96
442 Probabilistic Models to Evaluate Seismic Liquefaction In Gravelly Soil Using Dynamic Penetration Test and Shear Wave Velocity
Authors: Nima Pirhadi, Shao Yong Bo, Xusheng Wan, Jianguo Lu, Jilei Hu
Abstract:
Although gravels and gravelly soils are assumed to be non-liquefiable because of their high conductivity and small modulus, the occurrence of this phenomenon in some historical earthquakes, especially the recent 2008 Wenchuan (Mw = 7.9), 2014 Cephalonia, Greece (Mw = 6.1), and 2016 Kaikoura, New Zealand (Mw = 7.8) events, has prompted essential consideration of risk assessment and hazard analysis of seismic gravelly soil liquefaction. Due to the limitations in sampling and laboratory testing of this type of soil, in situ tests and site exploration of case histories are the most accepted procedures. Of all in situ tests, the dynamic penetration test (DPT), well known as the Chinese dynamic penetration test, and the shear wave velocity (Vs) test have demonstrated high performance in evaluating seismic gravelly soil liquefaction. However, the lack of a sufficient number of case histories imposes an essential limitation on developing new models. This study first investigates recent earthquakes that caused liquefaction in gravelly soils to collect new data. It then adds these data to the available literature's dataset to extend it, and finally develops new models to assess seismic gravelly soil liquefaction. To validate the presented models, their results are compared to those of other available models. The results show the reasonable performance of the proposed models and the critical effect of gravel content (GC)% on the assessment.
Keywords: liquefaction, gravel, dynamic penetration test, shear wave velocity
Procedia PDF Downloads 201
441 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Severe congestion from such pneumonic conditions can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or Covid-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease classification. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images. This CNN model ensures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification
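A minimal sketch of the second (hybrid) stage described above: Min-Max normalization of CNN-extracted features followed by a conventional classifier. The feature matrix is a placeholder, and the SVM is one assumed choice among the "different classifiers" the paper mentions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder: features extracted by the 2D CNN, one row per CT scan.
X = np.random.rand(200, 128)
y = np.random.randint(0, 3, size=200)  # viral / bacterial / Covid-19 pneumonia

# Min-Max normalization feeding a classical classifier, as in the hybrid model.
clf = make_pipeline(MinMaxScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f}")
```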
Procedia PDF Downloads 157
440 Performance Study of Classification Algorithms for Consumer Online Shopping Attitudes and Behavior Using Data Mining
Authors: Rana Alaa El-Deen Ahmed, M. Elemam Shehab, Shereen Morsy, Nermeen Mekawie
Abstract:
With the growing popularity and acceptance of e-commerce platforms, users face an ever-increasing burden in actually choosing the right product from the large number of online offers. Thus, techniques for personalization and shopping guides are needed by users. For a pleasant and successful shopping experience, users need an easy way to know which products to buy with high confidence. Since selling a wide variety of products has become easier due to the popularity of online stores, online retailers are able to sell more products than a physical store. The disadvantage is that customers might not find the products they need. In this research, the customer will be able to find the products he is searching for, because recommender systems are used on some e-commerce websites. A recommender system learns from information about customers and products and provides appropriate personalized recommendations to help customers find the needed product. In this paper, eleven classification algorithms are comparatively tested to find the classifier that best fits consumer online shopping attitudes and behavior in the experimental dataset. The WEKA knowledge analysis tool, an open-source data mining workbench used for comparing conventional classifiers to find the best one, was employed in this research. Using the data mining tool (WEKA) with the tested classifiers, the results show that Decision Table and Filtered Classifier give the highest accuracy, while Classification via Clustering and Simple CART give the lowest.
Keywords: classification, data mining, machine learning, online shopping, WEKA
Procedia PDF Downloads 352
439 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study investigates the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-potential region of the Black Sea, together with their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave modeling program, version 41.01AB, which was forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea state characteristic matrices for five different depths along a line perpendicular to the coastline were calculated for 31 years. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, the probability distribution tables of the specified mean wave period or wave energy period and significant wave height were calculated. Then, using the relationship between these distribution tables under the present wave climate, the energy that the wave energy converter systems at each depth can produce was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on energy converter systems is presented. The Oceantic at 50, 75, and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least.
Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter
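The production estimate described above is essentially an element-wise product of a converter's power matrix with the sea-state occurrence matrix; a minimal sketch follows (the bin layout and matrix values are placeholders, not the study's data).

```python
import numpy as np

# Occurrence matrix: fraction of the year spent in each (Hs, Te) bin.
# Rows: significant wave height bins; columns: energy period bins.
occurrence = np.random.dirichlet(np.ones(20)).reshape(4, 5)

# Power matrix of a converter (kW) for the same (Hs, Te) bins.
power_matrix = np.random.uniform(0, 500, size=(4, 5))

HOURS_PER_YEAR = 8766  # average, accounting for leap years

annual_energy_kwh = (occurrence * power_matrix).sum() * HOURS_PER_YEAR
print(f"annual production: {annual_energy_kwh / 1e3:.1f} MWh")
```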
Procedia PDF Downloads 134
438 A Metric to Evaluate Conventional and Electrified Vehicles in Terms of Customer-Oriented Driving Dynamics
Authors: Stephan Schiffer, Andreas Kain, Philipp Wilde, Maximilian Helbing, Bernard Bäker
Abstract:
Automobile manufacturers progressively focus on a downsizing strategy to meet the EU's CO2 requirements concerning type-approval consumption cycles. The reduction in naturally aspirated engine power is compensated by increased levels of turbocharging. By downsizing conventional engines, CO2 emissions are reduced. However, this also implies major challenges regarding longitudinal dynamic characteristics. An example of this circumstance is the delayed turbocharger-induced torque reaction, which leads to a partially poor response behavior of the vehicle during acceleration operations. That is why it is important to focus conventional drive train design on real customer driving again. The dynamic maneuvers currently discussed by journals and car manufacturers, like the 0-100 km/h acceleration time, describe the longitudinal dynamics experienced by a driver inadequately. For that reason, we present the realization and evaluation of a comprehensive subject study. Subjects are provided with different vehicle concepts (electrified vehicles, vehicles with naturally aspirated engines, vehicles with different turbocharger concepts, etc.) in order to find out which dynamic criteria are decisive for a subjectively strong acceleration and response behavior of a vehicle. Subsequently, realistic acceleration criteria are derived. By weighting the criteria, an evaluation metric is developed to objectify customer-oriented transient dynamics. Fully-electrified vehicles are the benchmark in terms of customer-oriented longitudinal dynamics. The electric machine provides the desired torque almost without delay. This advantage over combustion engines is especially noticeable at low engine speeds. In conclusion, we will show to what extent the customer-relevant longitudinal dynamics of conventional vehicles can be approximated to electrified vehicle concepts. Therefore, various technical measures (turbocharger concepts, 48V electrical chargers, etc.) and drive train designs (e.g., varying the final drive) are presented and evaluated in order to strengthen the vehicle's customer-relevant transient dynamics. The newly developed evaluation metric will be used as the rating quantity.
Keywords: 48V, customer-oriented driving dynamics, electric charger, electrified vehicles, vehicle concepts
Procedia PDF Downloads 407
437 Simulation of Maximum Power Point Tracking in a Photovoltaic System: A Circumstance Using Pulse Width Modulation Analysis
Authors: Asowata Osamede
Abstract:
Optimized gain with respect to output power of stand-alone photovoltaic (PV) systems has been one of the major focuses of PV research in recent times, owing to PV's low carbon emissions and efficiency. Power failures or outages from commercial providers generally do not promote development in the public and private sectors; these basically limit the development of industries. A well-structured PV system is therefore important for an efficient and cost-effective monitoring system. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PVs in the southern hemisphere. This paper analyzes the system using a solar charger with MPPT from a pulse width modulation (PWM) perspective. The power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel set to an orientation angle of 0° north, with corresponding tilt angles of 36°, 26°, and 16°. The loads employed in this setup are three lead-acid batteries (LAB). The percentage fully charged, charging, and not-charging conditions are observed for all three batteries. The results obtained in this research are used to draw conclusions that provide a benchmark for researchers and scientists worldwide, giving an idea of the best tilt and orientation angles for the maximum power point in a basic off-grid PV system. A quantitative analysis is employed in this research; quantitative research tends to focus on measurement and proof, and inferential statistics are frequently used to generalize what is found about the study sample to the population as a whole. This involves selecting and defining the research question, deciding on a study type, deciding on the data collection tools, selecting the sample and its size, and analyzing, interpreting, and validating findings. Preliminary results, which include regression analysis (normal probability plot and residual plot using a polynomial of degree 6), showed the maximum power point in the system. The results prove that the 36° tilt angle provided the best average on-time, which in turn put the system into the pulse width modulation stage.
Keywords: power-conversion, meteonorm, PV panels, DC-DC converters
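A minimal sketch of the degree-6 polynomial regression step used to locate the maximum power point; the P-V sample data here are synthetic placeholders, not the measured charger data.

```python
import numpy as np

# Hypothetical measured operating points: panel voltage (V) and power (W).
voltage = np.linspace(0, 22, 50)
power = 100 * np.sin(np.pi * voltage / 22) + np.random.randn(50)

# Fit a degree-6 polynomial to the P-V curve, as in the regression analysis.
coeffs = np.polyfit(voltage, power, deg=6)
poly = np.poly1d(coeffs)

# Locate the maximum power point on a dense voltage grid.
v_grid = np.linspace(voltage.min(), voltage.max(), 1000)
v_mpp = v_grid[np.argmax(poly(v_grid))]
print(f"estimated MPP at {v_mpp:.2f} V, {poly(v_mpp):.1f} W")
```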
Procedia PDF Downloads 149
436 Assessment of the Impact of Trawling Activities on Marine Bottoms of Moroccan Atlantic
Authors: Rachida Houssa, Hassan Rhinane, Fadoumo Ali Malouw, Amina Oulmaalem
Abstract:
Since the early 70s, the Moroccan Atlantic sea has been subjected to the pressure of bottom trawling, one of the most destructive seabed fishing techniques: it wreaks havoc on catches, is non-selective, and is responsible for more than half of all fish discards around the world. The present paper aims to map and assess the impact of bottom trawling activity off the Moroccan Atlantic coast. For this purpose, a dataset of thirty years, between 1962 and 1999, from foreign fishing vessels using bottom trawls was used and integrated into a GIS. To estimate the extent and the importance of the geographical distribution of the trawling effort, the Moroccan Atlantic area was divided into a grid of cells of 25 km2 (5x5 km). This grid was joined to the trawling effort data, creating a new entity whose attribute table contains the spatial overlay of the grid with the polygons of swept surfaces. This mapping model made it possible to quantify the fishing effort used over time and to generate the trace of trawling efforts on the seabed. Indeed, for a given year, a grid cell may have a swept area equal to 0 (never touched by the trawl), or 25 km2 (the trawled area is similar to the cell size), or even 100 km2, indicating that, for that year, the scanned surface is four times the cell area. The results show that the total cumulative trawled area is approximately 28,738,326 km2, scattered along the Atlantic coast. 95% of the overall trawling effort is located in the southern zone, between 29°N and 20°30'N. Nearly 5% of the trawling effort is located in the northern coastal region, north of 33°N. The central area between 33°N and 29°N is the least swept by Russian commercial vessels because the majority of this region is rocky and non-trawlable.
Keywords: GIS, Moroccan Atlantic Ocean, seabed, trawling
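A minimal sketch of the grid/swept-surface overlay described above, using the geopandas library; the file names, the CRS assumption (a metric projection), and the cell_id/year attribute columns are all assumptions for illustration.

```python
import geopandas as gpd

# Assumed inputs: a 5 x 5 km fishing-zone grid and polygons of trawled swaths,
# both in a metric CRS so that areas come out in square metres.
grid = gpd.read_file("grid_5km.shp")          # one polygon per 25 km2 cell
swept = gpd.read_file("swept_surfaces.shp")   # trawl swath polygons, with year

# Intersect swaths with the grid, then sum swept area per cell and year.
pieces = gpd.overlay(swept, grid, how="intersection")
pieces["swept_km2"] = pieces.geometry.area / 1e6
effort = pieces.groupby(["cell_id", "year"])["swept_km2"].sum().reset_index()

# A cell can exceed 25 km2 in a year if it was swept repeatedly.
print(effort.head())
```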
Procedia PDF Downloads 329
435 Comparative Study Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source workbench consisting of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical shape or is non-spherical. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine
Procedia PDF Downloads 411
434 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling a nonlinear mathematical programming procedure and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car, made of composite material and subjected to frontal impact. The specific energy absorption represents the objective function to be maximized, and the geometrical parameters, subject to some design constraints, are the design variables. LS-DYNA codes were interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
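A minimal sketch of a Kriging-style surrogate loop in the spirit of the approach described, using scikit-learn's Gaussian process regressor as the meta-model; the toy objective stands in for the expensive LS-DYNA crash simulation, and the two design variables, kernel, and loop sizes are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def crash_simulation(x):
    """Placeholder for an expensive FE run returning specific
    energy absorption (SEA) for the design variables x."""
    return -np.sum((x - 0.6) ** 2) + 1.0

rng = np.random.default_rng(1)
X = rng.random((10, 2))                      # initial DOE over 2 design variables
y = np.array([crash_simulation(x) for x in X])

for _ in range(15):                          # surrogate-based optimization loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.random((2000, 2))             # candidate designs in the domain
    best = cand[np.argmax(gp.predict(cand))] # pick the surrogate optimum
    X = np.vstack([X, best])
    y = np.append(y, crash_simulation(best)) # one new "simulation" per iteration

print("best design:", X[np.argmax(y)], "SEA:", y.max())
```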
Procedia PDF Downloads 256
433 Improving Rural Access to Specialist Emergency Mental Health Care: Using a Time and Motion Study in the Evaluation of a Telepsychiatry Program
Authors: Emily Saurman, David Lyle
Abstract:
In Australia, a well-serviced rural town might have a psychiatrist visit once a month, with more frequent visits from a psychiatric nurse, but many towns have no resident access to mental health specialists. Access to specialist care would not only reduce patient distress and improve outcomes but also facilitate the effective use of limited resources. The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) was developed to improve access to specialist emergency mental health care in rural and remote communities using telehealth technologies. However, there has been no current benchmark to gauge program efficiency or capacity, or to determine whether the program activity is justifiably sufficient. The evaluation of MHEC-RAP used multiple methods and applied a modified theory of access to assess the program and its aim of improved access to emergency mental health care. This was the first evaluation of a telepsychiatry service to include a time and motion study design examining program time expenditure, efficiency, and capacity. The time and motion study analysis was combined with an observational study of the program's structure and function to assess the balance between program responsiveness and efficiency. Previous program studies have demonstrated that MHEC-RAP has improved access and is used and effective. The findings from the time and motion study suggest that MHEC-RAP has the capacity to manage increased activity within the current model structure without loss of responsiveness or efficiency in the provision of care. Enhancing program responsiveness and efficiency will also support a claim of the program's value for money. MHEC-RAP is a practical telehealth solution for improving access to specialist emergency mental health care. The findings from this evaluation have already attracted the attention of other regions in Australia interested in implementing emergency telepsychiatry programs and are now informing the progressive establishment of mental health resource centres in rural New South Wales. Like MHEC-RAP, these centres will provide rapid, safe, and contextually relevant assessments and advice to support local health professionals in managing mental health emergencies in smaller rural emergency departments. Sharing the application of this methodology and research activity may help to improve access to, and future evaluations of, telehealth and telepsychiatry services for others around the globe.
Keywords: access, emergency, mental health, rural, time and motion
Procedia PDF Downloads 235
432 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification
Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar
Abstract:
Recognition of emotional information is essential in any form of communication. The growth of HCI (Human-Computer Interaction) in recent times indicates the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data is used for emotion recognition. Text, being the least expressive among multimodal resources, poses various challenges, such as capturing contextual information and the sequential nature of language construction. In this research work, we propose a neural architecture to resolve no fewer than 8 emotions from textual data derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are Anger, Disgust, Fear, Guilt, Joy, Sadness, Shame, and Surprise. Textual data from multiple datasets, such as the ISEAR, GoEmotions, and Affect datasets, were used to create the emotion dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with the modeling architecture, with as much as a 10-point improvement in recognizing some emotions.
Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings
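A minimal sketch of the described architecture in Keras: pre-trained embeddings feeding a bidirectional LSTM, multi-head self-attention, and a one-vs-all (multi-label) output head. Layer sizes, sequence length, and the embedding matrix are assumptions, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, EMB_DIM, N_EMOTIONS = 20000, 60, 300, 8
embedding_matrix = np.random.rand(VOCAB, EMB_DIM)  # stand-in for word2vec

inputs = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(
    VOCAB, EMB_DIM,
    embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
    trainable=False)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.MultiHeadAttention(num_heads=4, key_dim=64)(x, x)  # self-attention
x = layers.GlobalMaxPooling1D()(x)
# One-vs-all: an independent sigmoid per emotion rather than a single softmax.
outputs = layers.Dense(N_EMOTIONS, activation="sigmoid")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```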
Procedia PDF Downloads 174
431 Applying the Underwriting Technique to Analyze and Mitigate the Credit Risks in Construction Project Management
Authors: Hai Chien Pham, Thi Phuong Anh Vo, Chansik Park
Abstract:
Risk management in construction projects is important to ensure the positive feasibility of the projects; financial risks are of most concern, as construction projects always run on a credit basis. Credit risks, therefore, require unique and technical tools to be well managed. The underwriting technique for credit risks, in its most basic sense, refers to the process of evaluating the risks and the potential exposure to losses. Risk analysis and underwriting are applied as a must in banks and financial institutions, which support construction projects when required. Recently, construction organizations, especially contractors, have recognized the significant increase in credit risks, which causes negative impacts on project performance and the profits of construction firms. Despite the successful application of underwriting in banks and financial institutions for many years, few contractors apply this technique to analyze and mitigate the credit risks of their potential owners before signing contracts with them for delivering their services. Thus, contractors have taken on credit risks during project implementation whose payments might not materialize due to bankruptcy and/or protracted default by their owners. In this regard, this study proposes a model using the underwriting technique for contractors to analyze and assess the credit risks of their owners before making final decisions on potential construction contracts. A contractor's underwriters are able to analyze and evaluate subjects such as the owner, country, sector, payment terms, financial figures, and related concerns of the credit limit requests in detail, based on reliable information sources, and then input these into the proposed model to obtain an Overall Assessment Score (OAS). The OAS serves as a benchmark for decision makers to grant the proper limits for the project. The proposed underwriting model was validated on 30 subjects in the Asia-Pacific region over 5 years to obtain their OAS, and the output OAS was then compared with their actual performance in order to evaluate the potential of the underwriting model for analyzing and assessing credit risks. The results revealed that underwriting could be a powerful method to assist contractors in making precise decisions. The contribution of this research is to allow contractors to develop their own credit risk management models for proactively preventing the credit risks of construction projects and to continuously improve and enhance the performance of this function during project implementation.
Keywords: underwriting technique, credit risk, risk management, construction project
Procedia PDF Downloads 209
430 An Advanced YOLOv8 for Vehicle Detection in Intelligent Traffic Management
Authors: A. Degale Desta, Cheng Jian
Abstract:
Background: Vehicle detection accuracy is critical to intelligent transportation systems and autonomous driving. YOLOv8, a state-of-the-art object detection technology, has shown significant gains in efficiency and detection accuracy. This study uses the BDD100K dataset, which is renowned for its extensive and varied annotations, to assess how well YOLOv8 performs in vehicle detection. Objectives: The primary objective of this research is to assess YOLOv8's performance in vehicle detection for intelligent transportation systems and its ability to accurately identify cars in urban environments for safety prioritization. Results: The results show that YOLOv8 achieves high mAP, recall, precision, and F1-score values, indicating state-of-the-art performance. This suggests that YOLOv8 can identify cars in complex urban environments with a high degree of accuracy and deliver reliable results in a variety of traffic scenarios. Conclusion: The results indicate that YOLOv8 is a useful tool for enhancing vehicle detection accuracy in intelligent transportation systems, hence advancing urban public safety and security. The model's demonstrated performance shows how well it may be incorporated into autonomous driving applications to improve situational awareness and responsiveness.
Keywords: vehicle detection, YOLOv8, BDD100K, object detection, deep learning
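For readers who want to reproduce this kind of evaluation, a minimal sketch using the ultralytics package; the dataset YAML path, weights choice, and training settings are assumptions, and BDD100K must first be converted to YOLO-format labels.

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on BDD100K (YOLO-format labels).
model = YOLO("yolov8s.pt")
model.train(data="bdd100k.yaml", epochs=50, imgsz=640)  # assumed settings

# Evaluate: returns mAP, precision, and recall on the validation split.
metrics = model.val()
print(metrics.box.map)    # mAP@0.5:0.95

# Inference on a street scene.
results = model("urban_scene.jpg")
results[0].show()
```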
Procedia PDF Downloads 12
429 Machine Learning Approach for Stress Detection Using Wireless Physical Activity Tracker
Authors: B. Padmaja, V. V. Rama Prasad, K. V. N. Sunitha, E. Krishna Rao Patro
Abstract:
Stress is a psychological condition that reduces the quality of sleep and affects every facet of life. Constant exposure to stress is detrimental not only to the mind but also to the body. Nevertheless, to cope with stress, one should first identify it. This paper provides an effective method for cognitive stress level detection using data provided by a Fitbit physical activity tracker. This device gathers daily data on food, weight, sleep, heart rate, and physical activity. In this paper, four major stressors, namely physical activity, sleep patterns, working hours, and change in heart rate, are used to assess the stress levels of individuals. The main motive of this system is to apply a machine learning approach to stress detection with the help of smartphone sensor technology. The effect of each stressor is first evaluated individually using logistic regression; then a combined model is built and assessed using variants of ordinal logistic regression models, namely logit, probit, and complementary log-log. The quality of each model is evaluated using the Akaike Information Criterion (AIC), and probit is assessed as the most suitable model for our dataset. This system was experimented with and evaluated in a real-time environment using data from adults working in IT and other sectors in India. The novelty of this work lies in the fact that a stress detection system should be as non-invasive as possible for its users.
Keywords: physical activity tracker, sleep pattern, working hours, heart rate, smartphone sensor
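A minimal sketch of the ordinal-regression comparison described, using statsmodels; the synthetic predictors stand in for the four stressors, and the AIC comparison mirrors the model-selection step. The complementary log-log variant needs a custom link and is omitted here.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({                        # stand-ins for the four stressors
    "activity": rng.random(n),
    "sleep": rng.random(n),
    "work_hours": rng.random(n),
    "hr_change": rng.random(n),
})
# Ordinal stress level: 0 = low, 1 = medium, 2 = high (synthetic labels).
y = pd.Categorical(rng.integers(0, 3, n), ordered=True)

for distr in ("logit", "probit"):
    res = OrderedModel(y, X, distr=distr).fit(method="bfgs", disp=False)
    print(f"{distr}: AIC = {res.aic:.1f}")   # lower AIC -> preferred model
```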
Procedia PDF Downloads 257
428 Linguistic Politeness in Higher Education Teaching Chinese as an Additional Language
Authors: Leei Wong
Abstract:
Changes in globalized contexts precipitate changing perceptions concerning linguistic politeness practices. Within these changing contexts, misunderstanding or stereotyping of politeness norms may lead to negative consequences such as hostility or even communication breakdown. With China's rising influence, the country offers a vast potential market for global economic development and diplomatic relations, along with opportunities for intercultural interaction, and many outside China are subsequently learning Chinese. These trends bring both opportunities and pitfalls for intercultural communication, including within the important field of politeness awareness. One internationally recognized benchmark for the study and classification of languages, the updated 2018 CEFR (Common European Framework of Reference for Language) Companion Volume New Descriptors (CEFR/CV), classifies politeness as a B1 (intermediate) level descriptor on the scale of Politeness Conventions. This provides some indication of the relevance of politeness awareness within new globalized contexts for fostering better intercultural communication. This study specifically examines Bald on record politeness strategies presented in current beginner TCAL textbooks used in Australian tertiary education, through content analysis. The investigation involves purposive sampling of commercial textbooks published in America and China, followed by interpretive content analysis. The philosophical position of this study is therefore located within an interpretivist ontology, with a subjectivist epistemological perspective. It sets out to illuminate the characteristics of Chinese Bald on record strategies that are deemed significant in the present-world context by Chinese textbook writers and curriculum designers. The data reveal significant findings concerning politeness strategies in the beginner-stage curriculum and also open the way for further research on politeness strategies in intermediate and advanced level textbooks for additional language learners. This study will be useful for language teachers and language teachers-in-training by generating awareness and providing insights and advice into the teaching and learning of Bald on record politeness strategies. Authors of textbooks may also benefit from the findings of this study, as awareness is raised of the need to include reference to understanding politeness in language, and how this might be approached.
Keywords: linguistic politeness, higher education, Chinese language, additional language
Procedia PDF Downloads 105
427 Hope as a Predictor for Complicated Grief and Anxiety: A Bayesian Structural Equational Modeling Study
Authors: Bo Yan, Amy Y. M. Chow
Abstract:
Bereavement is recognized as a universally challenging experience, and it is important to gather research evidence on protective factors in bereavement. Hope is considered one such protective factor in previous coping studies. The present study aims to add knowledge by investigating whether hope in the first month after the death predicts psychological symptoms, including complicated grief (CG), anxiety, and depressive symptoms, in the seventh month. The data were collected via one-on-one interview surveys in a longitudinal project with Hong Kong hospice users (sample size 105). Most participants were middle-aged (49 years old on average), female (72%), and had no religious affiliation (58%). Bayesian Structural Equation Modeling (BSEM) analysis was conducted on the longitudinal dataset. The BSEM findings show that hope in the first month of bereavement negatively predicts both CG and anxiety symptoms in the seventh month, but not depressive symptoms. Age and gender are controlled for in the model, and the overall model fit is good. The current study findings suggest assessing hope in the first month of bereavement: hope in the first month after the loss is identified as an excellent predictor of complicated grief and anxiety symptoms in the seventh month. The result from this sample is clear, encouraging cross-cultural research on replicating the model and developing further clinical applications. In particular, early intervention to increase the level of hope has the potential to reduce psychological symptoms and thus to improve bereaved persons' wellbeing in the long run.
Keywords: anxiety, complicated grief, depressive symptoms, hope, structural equational modeling
Procedia PDF Downloads 206
426 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discriminator networks D_A, D_B, connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂, which were registered to B̃, Ã, which were in turn registered to the initial image pair A, B. Thus, the resulting and initial images of the same modality can easily be compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
Keywords: cycle consistency, deformable multimodal image registration, deep learning, GAN
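A minimal sketch of the cycle-consistency term described above in PyTorch; the registration networks are placeholder layers, and the L1 comparison is one common choice for this loss rather than the paper's confirmed formulation.

```python
import torch
import torch.nn as nn

# Placeholders for the two registration networks G_AB and G_BA, which map
# an image of one modality onto the geometry of the other.
g_ab = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the real network
g_ba = nn.Conv2d(1, 1, 3, padding=1)

l1 = nn.L1Loss()

def cycle_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Register A through B and back (and B through A and back); after a
    full cycle each image is back in its own modality and can be compared
    directly, sidestepping the cross-modality loss problem."""
    a_cycled = g_ba(g_ab(a))
    b_cycled = g_ab(g_ba(b))
    return l1(a_cycled, a) + l1(b_cycled, b)

a = torch.rand(1, 1, 64, 64)  # e.g. a CT slice
b = torch.rand(1, 1, 64, 64)  # e.g. an MRI slice
print(cycle_loss(a, b).item())
```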
Procedia PDF Downloads 132
425 A Critical Examination of the Iranian National Legal Regulation of the Ecosystem of Lake Urmia
Authors: Siavash Ostovar
Abstract:
The Iranian national Law on the Ramsar Convention (officially known as the Convention of International Wetlands and Aquatic Birds' Habitat) was approved by the Senate and became law in 1974 after ratification by the National Council. There are other national laws aimed at the preservation of the environment in the country. However, Lake Urmia, which was declared a wetland of international importance by the Ramsar Convention in 1971 and designated a UNESCO Biosphere Reserve in 1976, is now on the brink of total disappearance, due mainly to climate change, water mismanagement, dam construction, and agricultural deficiencies. Lake Urmia is located in the north-western corner of Iran. It is the third largest salt water lake in the world and the largest lake in the Middle East. Locally, it is designated as a National Park. It is, indeed, a unique lake both nationally and internationally. This study investigated how effective the Iranian national legal regulation of the ecosystem of Lake Urmia is. To do so, the national laws enforcing the Ramsar Convention in the country were investigated, comprising three nationally established laws: (i) the five sets of laws for the programmes of economic, social and cultural development of the Islamic Republic of Iran, (ii) the Iranian Penal Code, and (iii) the law of conservation, restoration and management of the country. Using black letter law methods, it was revealed that (i) regarding the five national sets of laws, the benchmark for enforcing the legislation and policies is not clearly set; in other words, there is no clear guarantee of enforcement of these legislations and policies in cases of deviation and violation; (ii) regarding the Penal Code, there are shortcomings in defining environmental crimes, determining appropriate penalties for them, implementing those penalties appropriately, and monitoring and training programmes; (iii) regarding the law of conservation, restoration and management, implementation of this regulation is deferred to the preparation, announcement and approval of several categories of enactments and guidelines. In fact, this study used a national environmental catastrophe caused by the drying up of Lake Urmia as an occasion to direct attention to the weaknesses of the existing national rules and regulations. Finally, as we all depend on the natural world for our survival, this study recommends further research on every environmental issue, including Lake Urmia.
Keywords: conservation, environmental law, Lake Urmia, national laws, Ramsar Convention, water management, wetlands
Procedia PDF Downloads 203
424 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study
Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly
Abstract:
In times of armed conflict, various security checkpoints are placed by authorities to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks, added to the regular flow of commercial trucks and combined with complex security procedures, creates congestion and long waiting times at the security checkpoints. This causes distribution costs to increase and shortages of relief aid for the affected people to occur. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool is built around a multi-item humanitarian distribution planning model, developed following a multi-phase design science methodology, whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects using nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by the fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck wait times at security checkpoints, and average backorders when compared to the executed plan and the fixed-capacity model. Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operations costs by reducing the average lead time.
Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions
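The clearing-function idea can be illustrated with a common saturating form; the exact functional form and parameters estimated in the study are not given here, so the values below are hypothetical.

```python
def clearing_function(wip: float, capacity: float, k: float) -> float:
    """Expected checkpoint throughput as a saturating function of workload.

    A common saturating form: throughput rises with work-in-progress (WIP)
    but can never exceed the checkpoint's capacity, capturing congestion.
    """
    return capacity * wip / (k + wip)

# Hypothetical checkpoint: capacity 120 trucks/day, curvature k = 30.
for wip in (10, 30, 100, 300):
    cleared = clearing_function(wip, 120, 30)
    print(f"WIP {wip:4d} trucks -> clears {cleared:6.1f}/day")
```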
Procedia PDF Downloads 82
423 A Study of High Viscosity Oil-Gas Slug Flow Using Gamma Densitometer
Authors: Y. Baba, A. Archibong-Eso, H. Yeung
Abstract:
Experimental studies of high-viscosity oil-gas flows in horizontal pipelines published in the literature have indicated that hydrodynamic slug flow is the dominant flow pattern observed. Investigations have shown that hydrodynamic slugging brings about high instabilities in pressure that can damage production facilities, making it essential to study the high-viscosity slug flow regime so as to improve the understanding of its flow dynamics. Most slug flow models used in the petroleum industry for the design of pipelines, together with their closure relationships, were formulated based on observations of low-viscosity liquid-gas flows. New experimental investigations and data are therefore required to validate these models. In cases where these models underperform, improving upon them or building new predictive models and correlations will also depend on new experimental datasets and further understanding of the flow dynamics in high-viscosity oil-gas flows. In this study, conducted at the Flow Laboratory, Oil and Gas Engineering Centre of Cranfield University, slug flow variables such as pressure gradient, mean liquid holdup, frequency, and slug length for oil viscosities ranging from 1.0 to 5.5 Pa·s are experimentally investigated and analysed. The study was carried out in a 0.076 m ID pipe; two fast-sampling gamma densitometers and pressure transducers (differential and point) were used to obtain experimental measurements. Comparison of the measured slug flow parameters with the existing slug flow prediction models available in the literature showed disagreement with the high-viscosity experimental data, thus highlighting the importance of building new predictive models and correlations.
Keywords: gamma densitometer, mean liquid holdup, pressure gradient, slug frequency and slug length
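Slug frequency is typically extracted from a densitometer's liquid-holdup time series by counting threshold crossings; a minimal sketch follows, where the synthetic signal, sampling rate, and 0.7 holdup threshold are placeholders, not values from the study.

```python
import numpy as np

fs = 250.0  # assumed gamma densitometer sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Placeholder holdup trace: slugs appear as excursions toward holdup ~ 1.
holdup = (0.4 + 0.4 * (np.sin(2 * np.pi * 0.5 * t) > 0.6)
          + 0.02 * np.random.randn(t.size))

threshold = 0.7  # assumed slug/film discrimination level
above = holdup > threshold
# A slug front corresponds to a rising edge of the thresholded signal.
fronts = np.flatnonzero(~above[:-1] & above[1:])

slug_frequency = fronts.size / t[-1]
print(f"slug frequency: {slug_frequency:.2f} Hz")
```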
Procedia PDF Downloads 330
422 A Comparative Study for Various Techniques Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying red blood cells as normal or abnormal (anemic) using WEKA. WEKA is an open-source workbench consisting of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical shape or is non-spherical. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: red blood cells, classification, radial basis function neural networks, support vector machine, k-nearest neighbors algorithm
Procedia PDF Downloads 481
421 Utilization of a Telepresence Evaluation Tool for the Implementation of a Distant Education Program
Authors: Theresa Bacon-Baguley, Martina Reinhold
Abstract:
Introduction: Evaluation and analysis are the cornerstones of any successful program in higher education. When developing a program at a distant campus, it is essential that the process of evaluation and analysis be orchestrated in a timely manner with tools that can identify both the positive and negative components of distance education. We describe the utilization of a newly developed tool to evaluate and analyze the successful expansion to a distant campus using Telepresence technology. Like interactive television (ITV), Telepresence allows live interactive delivery but utilizes broadband cable. The tool developed is adaptable to any distant campus, as its framework was derived from a systematic review of the literature. Methodology: Because Telepresence is a relatively new delivery system, the evaluation tool was developed based on a systematic review of the literature in the areas of distance education and ITV. The literature review identified four potential areas of concern: 1) technology, 2) confidence in the system, 3) faculty delivery of the content, and 4) resources at each site. Each of the four areas included multiple sub-components. Benchmark values were set at 80% or greater positive responses for each of the four areas and the individual sub-components. The tool was administered each semester during the didactic phase of the curriculum. Results: The data obtained identified site-specific issues (i.e., technology access, student engagement, laboratory access, and resources) as well as issues common to both sites (i.e., projection screen size). More specifically, students at the parent location did not have adequate access to printers or laboratory space, and students at the distant campus did not have adequate access to library resources. The evaluation tool identified that both sites requested larger screens for visualization of the faculty. The deficiencies were addressed by replacing printers, providing additional orientation for students on library resources, and increasing the screen size of the Telepresence system. When analyzed over time, the issues identified as deficiencies in the tool were resolved. Conclusions: Utilizing the tool allowed adjustments of the Telepresence delivery system in a timely manner, resulting in the successful implementation of an entire curriculum at a distant campus.
Keywords: physician assistant, telepresence technology, distant education, assessment
Procedia PDF Downloads 126