Search results for: optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5921

971 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to an FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and endanger lives. The statistics show that most bird strikes occur on the nose and the leading edge of the wings. A substantial number of strikes are also absorbed by the jet engines, damaging blades and engine bodies. Crash-proof designs are required to prevent catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase is of considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since bird strike certification often relies on full-scale tests. Therefore, validated numerical models are required that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. Then, a representative bird model is generated with the verified parameters and impacted against the leading edge of a training aircraft wing, where each structural member of the wing is explicitly modeled. LS-DYNA, a nonlinear explicit dynamics finite element code, was used for the bird impact simulations. SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.
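
As context for the SPH methodology mentioned above, the sketch below implements the cubic spline smoothing kernel commonly used in SPH density summation (Monaghan's form). The particle masses, distances, and smoothing length are illustrative assumptions, not values from the study.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Monaghan's cubic spline SPH kernel in 3D; r: separation, h: smoothing length."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)  # 3D normalization constant
    w = np.where(q < 1.0,
                 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(m, r_ij, h):
    """Density at a particle as the kernel-weighted sum over neighbor masses."""
    return np.sum(m * cubic_spline_kernel(r_ij, h))

# Illustrative values: 50 neighbors of 1 g each within twice the smoothing length
rng = np.random.default_rng(0)
print(sph_density(m=1e-3, r_ij=rng.uniform(0.0, 0.02, 50), h=0.01))
```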

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 306
970 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)

Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha

Abstract:

We use Facebook as a case study to investigate the complex relationship between the firm's public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users' data. We do so by looking at the evolution of public discourse over time (2004–2021), drawing on archival sources such as Zuckerberg's public statements, and relating topics to revenue and stock market evolution. We use the LDA topic modelling algorithm to reveal 19 topics grouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand and put a value on the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. We found a significant relationship between the topics pertaining to privacy and social media/technology, sentiment score, and stock market prices. Revenue is found to be impacted by topics pertaining to politics and new product and service innovations, while the number of active users is not impacted by the topics unless the relationship is moderated by external control variables such as Return on Assets and Brand Equity.
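
A minimal sketch of the LDA step described above, using scikit-learn on a placeholder corpus; the study fits 19 topics, reduced here to 3 for the toy data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # placeholder documents standing in for the archival corpus
    "we are giving users more control over their data",
    "privacy protections and data security remain our priority",
    "advertising revenue and user growth drove quarterly results",
    "new product features improve the user experience",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # the study uses 19
doc_topics = lda.fit_transform(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):  # top words per topic
    print(k, [terms[i] for i in comp.argsort()[-5:][::-1]])
```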

Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance

Procedia PDF Downloads 76
969 Metabolic Manipulation as a Strategy for Optimization of Biomass Productivity and Oil Content in the Microalgae Desmodesmus Sp.

Authors: Ivan A. Sandoval Salazar, Silvia F. Valderrama

Abstract:

Microalgae oil is emerging as a promising source of raw material for many industrial applications. This study therefore focused on the cultivation of the microalga Desmodesmus sp. at laboratory scale with a view to maximizing biomass production and the triglyceride content of the lipid fraction. Initially, culture conditions were selected to optimize biomass production; the culture was subsequently subjected to nutritional stress by varying nitrate and phosphate concentrations in order to increase the content and productivity of fatty acids. The BOLD 3N culture medium, nitrate and phosphate concentrations, light intensities of 250, 500 and 1000 μmol photons·m⁻²·s⁻¹, and a 12:12 photoperiod were evaluated. Under the best test conditions, a maximum cell division rate of 1.13 div·day⁻¹ was obtained on the sixth day of culture, at the beginning of the exponential phase, and a maximum concentration of 8.42×10⁷ cells·mL⁻¹ and dry biomass of 3.49 g·L⁻¹ on the 20th day, in the stationary phase. The lipid content in the first stage of culture was approximately 8% after 12 days, and at the end of the culture, in the stationary phase, it ranged from 12% to 16% (20 days). In the microalgae grown at 250 μmol photons·m⁻²·s⁻¹, the fatty acid profile was mostly polyunsaturated (52%). The total unsaturated fatty acids identified in this microalga reached values between 70 and 75%, qualifying the oil for use in the food and pharmaceutical industries. In addition, this study showed that the cultivation conditions mainly influenced the production of polyunsaturated fatty acids, with a predominance of γ-linolenic acid. However, in the cultures subjected to the highest light intensity (1000 μmol photons·m⁻²·s⁻¹) and low concentrations of nitrate and phosphate, mainly saturated and monounsaturated fatty acids, which present greater oxidative stability, were identified (60 to 70%), qualifying the oil for biodiesel production and oleochemistry.

Keywords: microalgae, Desmodesmus sp, fatty acids, biodiesel

Procedia PDF Downloads 130
968 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy: complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws, and are therefore useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight, and weight average molecular weight. This process is associated with nonlinear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which selects more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
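
A minimal sketch of the adaptive-sampling idea stated above (denser sampling where the response varies most); the windowed-variance criterion and the toy response with a sharp gel-effect-like transition are our illustrative assumptions, not the authors' exact technique.

```python
import numpy as np

def adaptive_sample(y, n_extra, window=5):
    """Rank candidate points by local variance of the response and
    return the indices of the n_extra highest-variation locations."""
    local_var = np.array([np.var(y[max(0, i - window):i + window])
                          for i in range(len(y))])
    return np.sort(np.argsort(local_var)[-n_extra:])

# Toy response with a steep region, loosely mimicking a gel-effect transition
x = np.linspace(0.0, 1.0, 200)
y = np.tanh(20.0 * (x - 0.6))

extra = adaptive_sample(y, n_extra=20)
print(x[extra])  # the selected points cluster around the steep region near x = 0.6
```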

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 290
967 Small Scale Waste to Energy Systems: Optimization of Feedstock Composition for Improved Control of Ash Sintering and Quality of Generated Syngas

Authors: Mateusz Szul, Tomasz Iluk, Aleksander Sobolewski

Abstract:

Small-scale, distributed energy systems enabling cogeneration of heat and power based on gasification of sewage sludge are considered among the most efficient and environmentally friendly ways of treating it. However, the economics of such an investment are very demanding; for a small-scale sewage sludge gasification installation to be profitable, it needs to be efficient and simple at the same time. The article presents results of research on air gasification of sewage sludge in the fixed bed GazEla reactor. The research focused on two key aspects: the influence of the composition of sewage sludge blends with other feedstocks on the properties of the generated syngas, and the ash sintering problems occurring in the fixed bed. Different means of fuel pretreatment and blending were proposed as ways of dealing with these undesired characteristics. The influence of RDF (Refuse Derived Fuel) and biomasses in the fuel blends was evaluated. Ash properties were assessed based on proximate, ultimate, and ash composition analyses of the feedstock. The blends were specified based on complementary characteristics: criteria such as carbon content, moisture, volatile matter, and the Si, Al, Mg, and basic metal content of the ash were analyzed. The obtained results were assessed using experimental gasification tests and the laboratory ISO procedure for determining characteristic ash melting temperatures. Optimal gasification process conditions were judged by the energetic parameters of the generated syngas, its tar content, and the absence of ash sinters within the reactor bed. Optimal results were obtained for co-gasification of herbaceous biomasses with sewage sludge, where the LHV (Lower Heating Value) of the obtained syngas reached a stable value of 4.0 MJ/Nm³ for air/steam gasification.

Keywords: ash fusibility, gasification, piston engine, sewage sludge

Procedia PDF Downloads 179
966 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. These measures require intelligence gathering via the internet, including real-time monitoring of potential websites that are used for recruitment and information dissemination, among other operations, by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, 'pro-extremist', 'anti-extremist', and 'neutral', with 2500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with existing classification methods using prediction accuracy and runtime. Our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
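
A minimal sketch of a composite feature space of the kind described above, concatenating lexical features with a sentiment score before supervised classification; the toy scorer, placeholder pages, and logistic regression classifier are our assumptions, not the SentiPosit pipeline itself.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pages = ["join and support our cause", "we condemn violence in all forms",
         "local council publishes weekly report"]          # placeholder texts
labels = ["pro-extremist", "anti-extremist", "neutral"]

def sentiment_score(text):
    """Stand-in for a SentiStrength-style scorer: positive minus negative hits."""
    pos, neg = {"support", "join"}, {"condemn", "violence"}
    toks = text.lower().split()
    return sum(t in pos for t in toks) - sum(t in neg for t in toks)

vec = TfidfVectorizer()
X_lex = vec.fit_transform(pages)                           # lexical features
X_sent = np.array([[sentiment_score(p)] for p in pages])   # sentiment feature
X = hstack([X_lex, X_sent])                                # composite representation

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```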

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 258
965 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis, or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'virtual vibration testing' offers, among other things, a solution to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article first presents a virtual vibration test model, with a main focus on the shaker model, and afterwards presents the experimental determination of its parameters. The classical way of modeling a shaker is to consider it a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced into global parameters, which are estimated through experiments. Different experiments are carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, the article concludes with an experimental validation of the model.
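
A minimal sketch of the kind of lumped-parameter state-space model described above: a two-degree-of-freedom armature/body shaker coupled to an R-L coil circuit through the voice-coil constant Bl. All numeric parameter values are illustrative assumptions, not identified shaker parameters.

```python
import numpy as np
from scipy.signal import StateSpace

m1, m2 = 0.5, 5.0            # armature and body masses in kg (assumed)
k1, c1 = 2.0e5, 50.0         # armature suspension stiffness/damping (assumed)
k2, c2 = 1.0e4, 20.0         # body mount stiffness/damping (assumed)
R, L, Bl = 2.0, 1.0e-3, 15.0 # coil resistance, inductance, force constant (assumed)

# State x = [x1, v1, x2, v2, i]; input u = coil voltage; coil force F = Bl * i
A = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [-k1/m1, -c1/m1, k1/m1, c1/m1, Bl/m1],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [k1/m2, c1/m2, -(k1 + k2)/m2, -(c1 + c2)/m2, -Bl/m2],
    [0.0, -Bl/L, 0.0, Bl/L, -R/L],   # di/dt includes back-EMF from relative velocity
])
B = np.array([[0.0], [0.0], [0.0], [0.0], [1.0/L]])
C = np.array([[0.0, 1.0, 0.0, 0.0, 0.0]])  # observe armature velocity
shaker = StateSpace(A, B, C, np.zeros((1, 1)))
print(shaker)
```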

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 253
964 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance, and to develop linear predictor models for time and cost. Methods: we applied the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: our findings include prediction models and show some non-intuitive results, namely the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
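
A minimal sketch of a two-level factorial screening model like the one described; the design here is a full 2⁵ factorial with a synthetic response (the study uses a fractional design with replications on real clusters).

```python
import itertools
import numpy as np

factors = ["data_size", "nodes", "cores", "memory", "disks"]
X = np.array(list(itertools.product([-1, 1], repeat=5)))  # coded 2^5 design

# Synthetic response: strong data-size and node effects, weak cores (illustrative)
rng = np.random.default_rng(0)
y = 100 + 40*X[:, 0] - 25*X[:, 1] + 3*X[:, 2] + rng.normal(0, 2, len(X))

# Least-squares main-effects model: y ~ b0 + sum(bi * xi)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:10s} {b:+7.2f}")  # effect estimates rank the factors
```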

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 96
963 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, interestingly tied to their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD extends eigendecomposition to non-square matrices and is used to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators that permit us to relax the additivity constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and deliver better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
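
A minimal sketch of the SVD feature extraction described above: the leading singular values of a gesture image matrix serve as a compact feature vector. The image size and number of retained values are illustrative.

```python
import numpy as np

def svd_features(image, k=10):
    """Return the k largest singular values of the (possibly non-square) image."""
    s = np.linalg.svd(image, compute_uv=False)  # singular values, descending
    return s[:k]

img = np.random.rand(64, 48)  # placeholder for a segmented grayscale hand image
print(svd_features(img))      # 10-dimensional observation vector for the HMM
```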

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 443
962 Chronology and Developments in Inventory Control Best Practices for the FMCG Sector

Authors: Roopa Singh, Anurag Singh, Ajay

Abstract:

Agriculture contributes a major share of India's national economy: about 70% of the economy depends upon agriculture, as it forms the main source of income. About 43% of India's geographical area is used for agricultural activity, which involves 65-75% of the total population of India. This work deals with fast-moving consumer goods (FMCG) industries and their inventories, which use agricultural produce as the raw material or input for their final product. Since the beginning of inventory practices, many developments have taken place, which can be categorised into three phases based on a review of various works. The first phase relates to the development and use of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods aimed at balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Holding inventory is generally considered negative, as a large amount of capital is blocked, especially in mechanical and electrical industries. The case is different, however, for food processing and agro-based industries, owing to the cyclic variation in the cost of their raw materials, which is why these industries were selected for this work. The application of electrical control theory to inventory control makes decision-making nearly instantaneous for FMCG industries, without the profit losses that occurred during the first and second phases, mainly due to late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that decision-making is fully target-oriented.
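
As a concrete reference for the first phase mentioned above, here is a worked EOQ computation using the classical formula EOQ = sqrt(2DS/H); the demand and cost figures are assumed for illustration and do not come from the paper.

```python
from math import sqrt

D = 12_000  # annual demand in units (assumed)
S = 150.0   # fixed cost per order (assumed)
H = 2.4     # holding cost per unit per year (assumed)

eoq = sqrt(2 * D * S / H)                      # optimal order quantity
orders_per_year = D / eoq
total_cost = (D / eoq) * S + (eoq / 2) * H     # ordering + holding cost at optimum

print(f"EOQ = {eoq:.0f} units, {orders_per_year:.1f} orders/yr, cost = {total_cost:.2f}")
```

At the optimum, the ordering and holding cost components come out equal, which is a quick sanity check on the arithmetic.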

Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector

Procedia PDF Downloads 340
961 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes; their ability to learn from large datasets makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) to optimize a deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m³ and a total volume of 1914 m³; its internal diameter and height are 19 and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques (MinMaxScaler, RobustScaler, and StandardScaler) were applied to the pre-processed data and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
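
A minimal sketch of TPE-based hyperparameter search of the kind the abstract describes, using Optuna's TPESampler with a small scikit-learn regressor on synthetic data; the search space and dataset are illustrative stand-ins, not the authors' DNN or plant data.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

def objective(trial):
    n_layers = trial.suggest_int("n_layers", 1, 3)
    params = {
        "hidden_layer_sizes": tuple(trial.suggest_int(f"units_l{i}", 16, 128)
                                    for i in range(n_layers)),
        "activation": trial.suggest_categorical("activation", ["relu", "tanh"]),
        "learning_rate_init": trial.suggest_float("lr", 1e-4, 1e-1, log=True),
    }
    model = MLPRegressor(max_iter=500, random_state=0, **params)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```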

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameter tuning

Procedia PDF Downloads 22
960 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual rotor axial flux machines with surface-mounted and spoke-type ferrite permanent magnets and concentrated windings as alternatives to a generator with surface-mounted Nd-Fe-B magnets. The output power, voltage, speed, and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. Finite element analysis (FEA) is applied to predict the performance, EMF, developed torque, cogging torque, no-load losses, leakage flux, and efficiency of both ferrite generators and of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole arc to pole pitch ratios are investigated by means of 3D FEA. It was found that the surface-mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, has higher torque ripple, and is heavier than the spoke-type machine. Furthermore, it was shown that the spoke-type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified against experimental results.
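
A minimal sketch of a population-based minimum-mass search of the kind mentioned above, using SciPy's differential evolution with toy mass and torque surrogates and a penalty for missing the torque requirement; the surrogates, bounds, and torque demand are illustrative stand-ins for the FEA-driven design problem.

```python
import numpy as np
from scipy.optimize import differential_evolution

def mass(x):
    t, L = x                      # magnet thickness (mm), stack length (mm)
    return 0.8 * t * L / 100.0    # toy mass surrogate, not a real machine model

def torque(x):
    t, L = x
    return 0.05 * t * L           # toy torque surrogate

T_REQUIRED = 40.0                 # nominal torque demand (assumed)

def objective(x):
    shortfall = max(0.0, T_REQUIRED - torque(x))
    return mass(x) + 10.0 * shortfall**2   # penalize torque violations

result = differential_evolution(objective, bounds=[(2, 15), (30, 120)], seed=0)
print(result.x, result.fun)
```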

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 130
959 Geological and Geotechnical Approach for Stabilization of Cut-Slopes in Power House Area of Luhri HEP Stage-I (210 MW), India

Authors: S. P. Bansal, Mukesh Kumar Sharma, Ankit Prabhakar

Abstract:

Luhri Hydroelectric Project Stage-I (210 MW) is a run-of-the-river development with a dam-toe surface powerhouse (122 m long, 50.50 m wide, and 65.50 m high) on the right bank of the river Satluj in Himachal Pradesh, India. The project is located in the inner Lesser Himalaya, between the Dhauladhar Range in the south and the Higher Himalaya in the north, in a seismically active region. At the project location, the river is confined within a narrow V-shaped valley with little or no flat area close to the river bed. Cut slopes nearly 120 m high are proposed behind the powerhouse, from the powerhouse foundation level of 795 m up to about 915 m, to accommodate the surface powerhouse. The stability of these 120 m high cut slopes is a prime concern because of the risk involved. The slopes behind the powerhouse will be excavated mainly in augen gneiss, fresh to weathered in nature and biotite-rich in places. The foliation joints are favorable, dipping into the hill. Two steeply valley-dipping joint sets will be encountered on the slopes, which can cause instability during excavation. Geological exploration plays a vital role in the design and optimization of cut slopes. SWEDGE software has been used to analyze the geometry and stability of surface wedges in the cut slopes. The slopes behind the powerhouse have been analyzed in three zones, providing breaks in the continuity of the cut slopes, which gives quite substantial relief for slope stabilization measures. A pseudo-static analysis has been carried out for the stabilization of wedges. The results indicate that many large wedges are forming, which have a factor of safety less than 1. The stability measures (support system, bench width, slopes) have been planned so that no wedge failure may occur in the future.

Keywords: cut slopes, geotechnical investigations, Himalayan geology, surface powerhouse, wedge failure

Procedia PDF Downloads 104
958 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

This paper proposes a method to optimize the layout of an emergency department (ED), based on real executions of care processes, by considering several planning objectives simultaneously. Demand for healthcare services has recently increased dramatically, and as it increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The value of implementing a standard set of engineering facility planning and design techniques has already been proven in both the manufacturing and service industries, yielding significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method shows a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimal relocation cost. It was observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
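
A minimal sketch of 0-1 goal programming in the spirit of the model above, written with PuLP rather than CPLEX: two units are assigned to two candidate locations while weighted overshoots of walking-distance goals for normal and critical patients are minimized. All distances, goals, and weights are toy assumptions.

```python
import pulp

units, locs = ["triage", "imaging"], ["A", "B"]
d_norm = {("triage", "A"): 30, ("triage", "B"): 50, ("imaging", "A"): 60, ("imaging", "B"): 20}
d_crit = {("triage", "A"): 10, ("triage", "B"): 40, ("imaging", "A"): 70, ("imaging", "B"): 25}
goal_norm, goal_crit = 60, 40            # walking-distance targets (assumed)

m = pulp.LpProblem("ed_layout", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (units, locs), cat="Binary")
over_n = pulp.LpVariable("over_normal", lowBound=0)    # overshoot of normal goal
over_c = pulp.LpVariable("over_critical", lowBound=0)  # overshoot of critical goal

for u in units:                          # each unit occupies exactly one location
    m += pulp.lpSum(x[u][l] for l in locs) == 1
for l in locs:                           # each location hosts exactly one unit
    m += pulp.lpSum(x[u][l] for u in units) == 1

m += pulp.lpSum(d_norm[u, l] * x[u][l] for u in units for l in locs) - over_n <= goal_norm
m += pulp.lpSum(d_crit[u, l] * x[u][l] for u in units for l in locs) - over_c <= goal_crit
m += 1 * over_n + 2 * over_c             # objective: weighted goal deviations

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({(u, l): int(x[u][l].value()) for u in units for l in locs})
```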

Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes

Procedia PDF Downloads 271
957 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This burgeoning field calls for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, 'CombinedScore', as the evaluation criterion. This score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved model performance through calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
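
A minimal sketch of the two ingredients named above: average predictive entropy on unlabeled OOD data as a randomness score, combined with labeled source accuracy via a harmonic mean. The exact CombinedScore formula is not given in the abstract, so this combination is our reading and should be treated as an assumption.

```python
import numpy as np

def mean_entropy(probs):
    """Average predictive entropy; probs has shape (n_samples, n_classes)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def combined_score(source_acc, ood_probs):
    """Harmonic-mean combination of source accuracy and an entropy-derived
    confidence; an assumed reading of the abstract's CombinedScore."""
    confidence = 1.0 / (1.0 + mean_entropy(ood_probs))  # low entropy -> high confidence
    return 2.0 * source_acc * confidence / (source_acc + confidence)

probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.40, 0.35, 0.25]])  # uncertain prediction
print(combined_score(source_acc=0.85, ood_probs=probs))
```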

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 109
956 Myomectomy and Blood Loss: A Quality Improvement Project

Authors: Ena Arora, Rong Fan, Aleksandr Fuks, Kolawole Felix Akinnawonu

Abstract:

Introduction: Leiomyomas are benign tumors derived from the overgrowth of uterine smooth muscle cells. For women with symptomatic leiomyomas who desire future fertility, myomectomy is the standard surgical treatment, and perioperative hemorrhage is a common complication. We performed this study to investigate the blood transfusion rate in abdominal myomectomies, the risk factors influencing blood loss, and modalities to reduce perioperative blood loss. Methods: A retrospective chart review was performed for patients who underwent myomectomy from 2016 to 2022 at Queens Hospital Center, New York. We examined preoperative patient demographics, clinical characteristics, intraoperative variables, and postoperative outcomes. The Mann-Whitney U test was used for non-parametric continuous variable comparisons. Results: A total of 159 myomectomies were performed between 2016 and 2022, including 1 laparoscopic, 65 vaginal, and 93 abdominal. 44 patients received a blood transfusion during or within 72 hours of abdominal myomectomy, a transfusion rate of 47.3%, more than twice the average rate documented in the literature (20%). Risk factors identified were black race, preoperative hematocrit < 30%, preoperative blood transfusion within 72 hours, large fibroid burden, prolonged surgical time, and abdominal approach. Conclusion: Preoperative optimization with iron supplements or GnRH agonists is important for patients undergoing myomectomy. Interventions to decrease intraoperative blood loss should include cell saver, tourniquet, vasopressin, misoprostol, tranexamic acid, and gelatin-thrombin matrix hemostatic sealant.

Keywords: myomectomy, perioperative blood loss, cell saver, tranexamic acid

Procedia PDF Downloads 65
955 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. Using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power; the same subset is used across all training and testing for each model, so we can compare generalization performance on unseen data across all models. The best CNN (AlexNet), with the appropriate loss function and optimizer, results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. A practical anti-spoofing system, once deployed, should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
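
A minimal sketch of the loss/optimizer sweep described above, with a toy Keras CNN standing in for AlexNet/VGGNet/ResNet. Note that center loss is not a built-in Keras loss, so only built-in options appear here, and the commented-out training data placeholders are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def make_cnn():
    """Toy two-class (live vs. spoof) CNN; a stand-in for AlexNet/VGG/ResNet."""
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),
    ])

losses = ["categorical_crossentropy", "categorical_hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss in losses:
    for opt in optimizers:
        model = make_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
        print(f"compiled CNN with loss={loss}, optimizer={opt}")
```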

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 117
954 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps

Authors: Arkadiusz Zurek

Abstract:

The potential of using drones is visible in many areas of logistics, especially in terms of their use for monitoring and control of many processes. The technologies implemented in the last decade concern new possibilities for companies that until now have not even considered them, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool for Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a method for measuring the weight of goods in a selected link of the clothing supply chain by drones. However, the purpose of this article is to analyze the causes of errors in traditional measurements, and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines to eliminate the causes of these events in the measurement process using drones. In a real environment, work was carried out to determine the volume and weight of textiles, including, among others, weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning and photogrammetric raid using an unmanned aerial vehicle. As a result of the analysis of measurement data obtained in the facility, the volume and weight of the assortment and the accuracy of their determination were determined. In this article, this work presents how such heaps are currently being tested, what adverse events occur, indicate and describes the current use of photogrammetric techniques of this type of measurements so far performed by external drones for the inventory of wind farms or construction of the station and compare them with the measurement system of the aforementioned textile heap inside a large-format facility.
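
The weight estimate implied above reduces to simple arithmetic: heap volume from the photogrammetric point cloud multiplied by the sampled average density. A short sketch with assumed illustrative figures:

```python
def heap_weight(volume_m3, density_kg_m3):
    """Estimated heap weight (kg) from photogrammetric volume and sampled density."""
    return volume_m3 * density_kg_m3

# Assumed figures: an 85 m^3 textile heap and 120 kg/m^3 average sampled density
print(f"{heap_weight(85.0, 120.0):.0f} kg")
```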

Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0

Procedia PDF Downloads 60
953 Investigating Non-suicidal Self-Injury Discussions on Twitter

Authors: Muhammad Abubakar Alhassan, Diane Pennington

Abstract:

Social networking sites have become a space for people to discuss public health issues such as non-suicidal self-injury (NSSI). There are thousands of tweets containing self-harm and self-injury hashtags on Twitter, but it is difficult to distinguish between the different users who participate in self-injury discussions and how their opinions change over time, and it is challenging to understand the topics surrounding NSSI discussions. We retrieved tweets using the #selfharm and #selfinjury hashtags and investigated those from the United Kingdom. We applied inductive coding and grouped tweeters into different categories. This study used the Latent Dirichlet Allocation (LDA) algorithm to infer the optimum number of topics that describes our corpus. Our findings revealed that many of those participating in NSSI discussions are non-professional users, as opposed to medical experts and academics. Support organisations, medical teams, and academics campaigned positively on raising self-injury awareness and recovery. Using the LDAvis visualisation technique, we selected the top 20 most relevant terms from each topic and interpreted the topics as: children and youth well-being, self-harm misjudgement, mental health awareness, school and mental health support, and suicide and mental-health issues. More than 50% of these topics were discussed in England, compared to Scotland, Wales, Ireland, and Northern Ireland. Our findings highlight the advantages of using the Twitter social network in tackling the problem of self-injury through awareness. There is a need to study the potential risks associated with the use of social networks among self-injurers.
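
A minimal sketch of choosing the number of LDA topics, as the abstract describes, by scanning candidate counts and keeping the model with the best coherence (gensim); the tokenized placeholder tweets and the c_v coherence choice are our assumptions.

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

tweets = [["selfharm", "recovery", "support"],          # placeholder tokenized tweets
          ["awareness", "mental", "health", "school"],
          ["support", "students", "school", "teachers"],
          ["recovery", "awareness", "mental", "health"]]

dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]

best_k, best_score = None, float("-inf")
for k in range(2, 6):                                   # candidate topic counts
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary, random_state=0)
    score = CoherenceModel(model=lda, texts=tweets, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)
```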

Keywords: self-harm, non-suicidal self-injury, Twitter, social networks

Procedia PDF Downloads 112
952 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors, which rely on local features such as HOG for real-time operation, suffer from false positives. The color cue in an input image provides salient global information that is necessary to alleviate the false positives of local-feature-based detectors. An effective approach using figure-ground color segmentation has previously been presented as a way to reduce false positives in object detection. This paper presents an extended version of that approach, which adopts separate multipart foregrounds instead of a single prior foreground and performs the figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape plus additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature consisting of the set of resulting segmentations. Experimental results show that the presented method can reject more false positives than the single-prior shape-based classifier as well as detectors based on local features. The improvement is possible because the presented approach can reject false positives that have the same colors in the head and shoulder foregrounds.

Keywords: pedestrian detection, color segmentation, false positive, feature extraction

Procedia PDF Downloads 260
951 Comparative Performance Analysis for Selected Behavioral Learning Systems versus Ant Colony System Performance: Neural Network Approach

Authors: Hassan M. H. Mustafa

Abstract:

This piece of research addresses an interesting comparative analytical study that considers two diverse algorithmic computational intelligence approaches, tightly related to neural and non-neural systems. The first approach is concerned with practical results observed from three neural (animal) learning activities: Pavlov's and Thorndike's experimental work, and a mouse's trials while moving inside a figure-of-eight maze to reach an optimal solution to a reconstruction problem. Conversely, the second approach originates from the observed activity of the non-neural Ant Colony System (ACS), recorded after reaching an optimal solution while solving the Traveling Salesman Problem (TSP). Interestingly, the effect of an increasing number of agents (either neurons or ants) on learning performance is shown to be similar for both systems. Finally, the performance of both intelligent learning paradigms is shown to agree with the convergence of the least mean square (LMS) error algorithm as applied to training artificial neural network (ANN) models. Accordingly, ANN modeling is a relevant and realistic tool for investigating observations and analyzing the performance of both selected computational intelligence (biological behavioral learning) systems.
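
As a reference point for the LMS convergence the abstract invokes, here is a minimal sketch of the LMS weight update on synthetic data; the signal model and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(500, 3))
d = X @ true_w + rng.normal(0.0, 0.05, 500)  # desired signal with small noise

w, mu = np.zeros(3), 0.05                    # initial weights and learning rate
for x_n, d_n in zip(X, d):
    e = d_n - w @ x_n                        # instantaneous error
    w += mu * e * x_n                        # LMS update: w <- w + mu * e * x
print(w)                                     # converges toward true_w
```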

Keywords: artificial neural network modeling, animal learning, ant colony system, traveling salesman problem, computational biology

Procedia PDF Downloads 452
950 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. The constant growth of complexity and a shorter time-to-market thus call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is beclouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, increasing the information exchange between departments to ensure punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead, using new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the use of Model-Based Systems Engineering (MBSE) from a system industrialization perspective, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on displaying the needs of the customer as closely as possible. Firstly, it provides a technical solution for the sales team, helping them elaborate commercial offers without omitting technicalities. Secondly, the model simulates a vast number of possibilities across a wide range of components, becoming a dynamic tool for powerful analysis and optimization. The model is thus no longer a technical tool for the engineers only, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 141
949 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model: the reflectance is a function of the optical properties of the atmosphere, which can be related to its aerosol concentrations. PM10 concentration measurements were collected with a particle mass profiler and counter in a single handheld unit (Aerocet 531) simultaneously with the Landsat 8 OLI image date, and the measurement locations were recorded by a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green, and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was chosen based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) against the PM10 ground data. The outcomes of this research show that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
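
A minimal sketch of the empirical modelling step described: regress in-situ PM10 on visible-band reflectances and evaluate with R and RMSE. The reflectances and PM10 values here are synthetic placeholders, not the Kirkuk measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
refl = rng.uniform(0.05, 0.25, size=(40, 4))   # coastal/blue/green/red reflectances
pm10 = 300*refl[:, 1] + 150*refl[:, 2] + rng.normal(0, 5, 40)  # synthetic ground data

model = LinearRegression().fit(refl, pm10)
pred = model.predict(refl)

R = np.corrcoef(pm10, pred)[0, 1]
rmse = float(np.sqrt(np.mean((pm10 - pred) ** 2)))
print(f"R = {R:.3f}, RMSE = {rmse:.2f} ug/m^3")
```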

Keywords: air pollution, PM10 concentration, Lansat8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 430
948 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview

Authors: Andres Diaz Garcia

Abstract:

The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages allied to different disciplines, spanning microbiological, engineering, pharmaceutical chemistry, legal, and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scale-up of the production process of the active ingredient that will be used in downstream unit operations. All of these aspects must take into account biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia), which led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and the challenges that were overcome through engineering expertise. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1×10⁹ CFU (colony-forming units)/mL in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (bioreactor H/D ratios) and operational criteria (a minimum dissolved oxygen concentration), taking into account the differences in process control capacity between the laboratory and pilot scales. The bioproduct obtained through this technological process is currently in the registration stage in Colombia for cape gooseberry fruits for export.

Keywords: biochemical engineering, liquid fermentation, plant growth promoting, scale-up process

Procedia PDF Downloads 102
947 Realization of Hybrid Beams Inertial Amplifier

Authors: Somya Ranjan Patro, Abhigna Bhatt, Arnab Banerjee

Abstract:

The inertial amplifier has recently gained increasing attention as a new mechanism for vibration control of structures. Theoretical investigations are currently undertaken by researchers to reveal its fundamentals and to understand the underlying principles by which it alters the structural response to dynamic loading. This paper presents experimental and analytical studies of the dynamic characteristics of a hybrid beam inertial amplifier (HBIA). The analytical formulation of the HBIA has been derived by implementing the spectral element method and rigid body dynamics; this formulation gives the relation between the dynamic force and the response of the structure in the frequency domain. Further, experiments have been performed to validate the proposed HBIA. The experimental setup consists of a 3D-printed HBIA of polylactic acid (PLA) material screwed to the base plate of the shaker system. Two accelerometers are used to study the response: one at the base plate of the shaker, the second placed at the top of the inertial amplifier. A force transducer is also placed between the base plate and the inertial amplifier to measure the total load transferred from the base plate to the inertial amplifier. The time-domain responses obtained from the accelerometers were converted into the frequency domain using the Fast Fourier Transform (FFT) algorithm. The experimental transmittance values are successfully validated against the analytical results, giving essential confidence in the proposed methodology.
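
A minimal sketch of the FFT step described above: transmittance computed as the ratio of FFT magnitudes of the two accelerometer records. The sampling rate and sinusoidal placeholder signals are assumptions.

```python
import numpy as np

fs = 2048                                     # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
base = np.sin(2 * np.pi * 40 * t)             # placeholder base-plate acceleration
top = 0.3 * np.sin(2 * np.pi * 40 * t + 0.5)  # placeholder top-of-amplifier response

Base, Top = np.fft.rfft(base), np.fft.rfft(top)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
transmittance = np.abs(Top) / np.maximum(np.abs(Base), 1e-12)

k = np.argmax(np.abs(Base))                   # excitation frequency line
print(f"transmittance at {freqs[k]:.1f} Hz: {transmittance[k]:.2f}")
```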

Keywords: inertial amplifier, fast fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers

Procedia PDF Downloads 83
946 Biodiesel Production from Yellow Oleander Seed Oil

Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa

Abstract:

Energy is essential and plays an important role in the overall development of a nation; the global economy literally runs on energy. The use of fossil fuels is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment; renewable and carbon-neutral biodiesel is therefore necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats cannot fully replace fossil fuel: fossil fuels remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: to produce biodiesel from yellow oleander seed oil, and to test the yield of biodiesel using different catalysts (KOH and NaOH). Methodology: oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and, in bulk, an oil expeller. The FFA content of the oil is checked and, depending on the FFA value, either a two-step or a single-step process is followed: the two-step process includes esterification and transesterification, while the single-step process includes only transesterification. The properties of the biodiesel are then checked, and an engine test is performed on the biodiesel produced. Results: biodiesel quality parameters were determined for the seed oil of Thevetia peruviana: yield (85% and 90%), flash point (171°C and 176°C), fire point (195°C and 198°C), and viscosity (4.9991 and 5.21 mm²/s) for biodiesel produced using KOH and NaOH, respectively. Thus, the seed oil of Thevetia peruviana is a viable feedstock for good quality fuel. The outcomes of our project are a substitute for conventional fuel, a reduced petro-diesel requirement, and improved performance in terms of emissions. Future prospects: optimization of biodiesel production using the response surface method.

Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources

Procedia PDF Downloads 427
945 Indian Business-Papers in Industrial Revolution 4.0: A Paradigm Shift

Authors: Disha Batra

Abstract:

Industrial Revolution 4.0 is quite different from its predecessors, and a paradigm shift is underway in the media industry. With the advent of automated journalism and social media platforms, newspaper organizations have changed the way news is gathered and reported. The emergence of the fourth industrial revolution in the early 21st century has made newspapers adopt changing technologies to remain relevant. This paper investigates the content of Indian business papers in the era of the fourth industrial revolution and how these organizations have evolved in a time of convergence. The study is a content analysis of the top three Indian business dailies, as per the Indian Readership Survey (IRS) 2017, over a decade. A parametric analysis of different parameters (source of information, use of illustrations, advertisements, layout, framing, etc.) has been carried out in order to identify the distinct adaptations and modifications made by these dailies. The paper also dwells on a thematic analysis of these newspapers, exploring the coverage given to various sub-themes of economic, business, and financial (EBF) journalism. Further, the study reveals the effect of high-speed algorithm-based trading, an aftermath of the fourth industrial revolution, on the creative and investigative aspects of delivering financial stories. The study indicates an ongoing paradigm shift in the business newspaper industry, with a marked change in the sources of information gathering along with a subtle increase in the coverage of financial news stories over time.

Keywords: business-papers, business news, financial news, industrial revolution 4.0

Procedia PDF Downloads 100
944 FWGE Production From Wheat Germ Using Co-culture of Saccharomyces cerevisiae and Lactobacillus plantarum

Authors: Valiollah Babaeipour, Mahdi Rahaie

Abstract:

Food supplements are rich in specific nutrients and bioactive compounds that eliminate free radicals and improve cellular metabolism; the major bioactive compounds are found in bran and cereal sprouts. Secondary metabolites of fermenting microorganisms have antioxidant properties and can be used alone or in combination with chemotherapy and radiation therapy in cancer treatment. Biologically active compounds, such as the benzoquinone derivatives extracted from fermented wheat germ extract (FWGE), have several positive effects on the overall state of human health and strengthen the immune system. The present work describes the discontinuous fermentation of raw wheat germ for FWGE production through a simultaneous (co-culture) process using the probiotic strains Saccharomyces cerevisiae and Lactobacillus plantarum, and the possibility of using the solid waste. To increase production efficiency, the important factors of the fermentation process were first selected and studied using a factorial statistical design: stirring speed (120 to 200 rpm), solids-to-solvent dilution (1:8 to 1:12), fermentation time (16 to 24 hours), and strain-to-wheat-germ ratio (20% to 50%); simultaneous culture was then performed to increase the yield of 2,6-dimethoxybenzoquinone (2,6-DMBQ). Since 2,6-DMBQ is the main biologically active compound in fermented wheat germ extract, UV-Vis analysis was performed to confirm its presence in the final product. In addition, the 2,6-DMBQ of some products was isolated on a non-polar C-18 column and quantified using high-performance liquid chromatography (HPLC). Based on our findings, it can be concluded that 2,6-DMBQ increased by 28.9% in the simultaneous culture of Saccharomyces cerevisiae and Lactobacillus plantarum (2.66 mg/g) relative to the pure culture of Saccharomyces cerevisiae (1.89 mg/g).

Keywords: wheat germ, FWGE, Saccharomyces cerevisiae, Lactobacillus plantarum, co-culture, 2,6-DMBQ

Procedia PDF Downloads 112
943 Sensory Gap Analysis on Port Wine Promotion and Perceptions

Authors: José Manue Carvalho Vieira, Mariana Magalhães, Elizabeth Serra

Abstract:

The Port Wine industry is essential to Portugal, both because it carries tangible cultural heritage and for social and economic reasons. Positioned as a luxury product, Port Wine brands need to pay more attention to the new generation's habits, preferences, languages, and sensory perceptions. Healthy lifestyles, anti-alcohol campaigns, and the digitalisation of the buying decision process must be better understood in order to understand the wine market of the future. The purpose of this study is to clarify the sensory perception gap between promoted Port Wine descriptors and the new generation's perceptions, to help wineries align their strategies. Based on an interpretivist, mixed-methods approach, combining different world views, assumptions, and data collection and analysis methods, this research integrated qualitative semi-structured interviews, Port Wine promotional content, and social media perceptions mined with the Sentiment Analysis Enginius algorithm. Findings confirm that Port Wine CEOs' strategies, brands' promotional content, and social perceptions are not sufficiently aligned. The central insight for Port Wine brand managers is that long and continuous work is needed to understand and associate their descriptors with the most relevant perceptual values and criteria of their targets, in order to reposition (when necessary) and sustainably revitalise their brands. Finally, this study hypothesised a sensory gap that leads to a decrease in consumption and sought recommendations on how to transform it into an advantage to better attract the young age group (18-25).

Keywords: port wine, consumer habits, sensory gap analysis, wine marketing

Procedia PDF Downloads 220
942 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China

Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu

Abstract:

Increases in travel fares and station failures have a huge impact on passengers' travel. This paper analyzes the accessibility reliability of Urban Rail Transit (URT) stations under increasing travel fares and station failure. Firstly, the passenger's travel path is reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm and the Ratio of Station-Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are proposed based on an analysis of passengers' travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the indicator of a station's accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed with the formulated models. The results show that Jinjiang station and Liangzhu station are, respectively, the most important and the most convenient stations in the Hangzhou metro. Station failure has a huge impact on station accessibility, whereas an increase in travel fare does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station's accessibility reliability is the weakest. For the Hangzhou metro operations department, constructing new metro lines around Line 1 and preferentially protecting Line 1's stations can effectively improve the accessibility reliability of the Hangzhou metro.
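
A minimal sketch of the LeaderRank algorithm used above for station importance: a ground node is linked to every station, a random walk is iterated to convergence, and the ground node's score is redistributed. The toy four-station line is illustrative.

```python
import numpy as np

def leaderrank(adj, tol=1e-8):
    """adj: (n, n) symmetric 0/1 adjacency matrix of the station network."""
    n = adj.shape[0]
    g = np.ones((n + 1, n + 1)) - np.eye(n + 1)  # ground node linked to all stations
    g[:n, :n] = adj                               # keep the real network links
    P = g / g.sum(axis=1, keepdims=True)          # row-stochastic random walk
    s = np.ones(n + 1)
    s[n] = 0.0                                    # ground node starts with zero score
    while True:
        s_new = P.T @ s                           # s_i <- sum_j (a_ji / k_j) * s_j
        if np.abs(s_new - s).max() < tol:
            break
        s = s_new
    s[:n] += s[n] / n                             # redistribute the ground score
    return s[:n]

# Toy line of four stations: 0-1-2-3 (interior stations rank highest)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(leaderrank(adj))
```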

Keywords: automatic fare collection data, AFC, station’s accessibility reliability, stochastic user equilibrium, urban rail transit, URT

Procedia PDF Downloads 116