Search results for: pilot optimization

1156 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data

Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan

Abstract:

Clinical data analysis and forecasting have made great contributions to disease control, prevention and detection. However, such data usually suffer from highly unbalanced class distributions. In this paper, we target binary imbalanced datasets, in which the positive samples make up only a small minority. We investigate two meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine each of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former approach do not scale to larger datasets, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets. We also find it better suited to the large imbalanced datasets typical of medical practice. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, which leads to more credible classifier performance and shortens the running time compared with the brute-force method.
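
A minimal sketch of the core idea of tuning SMOTE's two key parameters (oversampling ratio and number of nearest neighbours) with particle swarm optimization, assuming scikit-learn and imbalanced-learn are available; the synthetic dataset, bounds, and swarm settings are illustrative, not those used in the paper.

```python
# Hypothetical sketch: PSO search over two SMOTE parameters, scored by cross-validated macro-F1.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

def fitness(ratio, k):
    # Rebalance the minority class to `ratio` of the majority using k neighbours,
    # then score a simple classifier with cross-validated macro-F1.
    model = Pipeline([
        ("smote", SMOTE(sampling_strategy=float(ratio), k_neighbors=int(k), random_state=0)),
        ("clf", DecisionTreeClassifier(random_state=0)),
    ])
    return cross_val_score(model, X, y, cv=3, scoring="f1_macro").mean()

rng = np.random.default_rng(0)
lo, hi = np.array([0.3, 2]), np.array([1.0, 10])   # bounds: oversampling ratio, k_neighbors
pos = rng.uniform(lo, hi, size=(8, 2))             # 8 particles
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(10):                                # a few PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(*p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]

print("best (ratio, k):", gbest, "macro-F1:", pbest_f.max())
```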

Keywords: Imbalanced dataset, meta-heuristic algorithm, SMOTE, big data

Procedia PDF Downloads 446
1155 Computational Aerodynamic Shape Optimisation Using a Concept of Control Nodes and Modified Cuckoo Search

Authors: D. S. Naumann, B. J. Evans, O. Hassan

Abstract:

This paper outlines the development of an automated aerodynamic optimisation algorithm using a novel method of parameterising a computational mesh by employing user-defined control nodes. The shape boundary movement is coupled to the movement of the control nodes via a quasi-1D linear deformation. Additionally, a second-order smoothing step has been integrated to act on the boundary during the mesh movement, based on the change in its second derivative. This allows for both linear and non-linear shape transformations, depending on the preference of the user. The domain mesh movement is then coupled to the shape boundary movement via a Delaunay graph mapping. A Modified Cuckoo Search (MCS) algorithm is used for optimisation within the prescribed design space defined by the allowed range of control node displacement. A finite volume compressible Navier-Stokes solver is used for aerodynamic modelling to predict aerodynamic design fitness. The resulting coupled algorithm is applied to a range of two-dimensional test cases, including the design of a subsonic, a transonic and a supersonic intake, and the optimisation approach is compared with more conventional optimisation strategies. Ultimately, the algorithm is tested on a three-dimensional wing optimisation case.
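
A toy sketch of the cuckoo-search loop with Lévy flights over a vector of control-node displacements, using a cheap stand-in fitness in place of the CFD-coupled aerodynamic objective described here; the step sizes, abandonment fraction, and displacement bounds are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: cuckoo search with Levy flights over "control node" displacements,
# minimising a cheap stand-in objective in place of the CFD-based aerodynamic fitness.
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def fitness(x):
    # Stand-in for drag (or another aerodynamic cost) returned by the flow solver.
    return np.sum(x ** 2) + 0.1 * np.sum(np.cos(5 * x))

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

n_nests, dim, bound = 15, 6, 0.05        # 6 control-node displacements, +/- 0.05 (illustrative)
nests = rng.uniform(-bound, bound, (n_nests, dim))
fit = np.array([fitness(n) for n in nests])

for _ in range(200):
    best = nests[fit.argmin()]
    # Generate new solutions by Levy flights around the current nests.
    new = np.clip(nests + 0.01 * levy_step((n_nests, dim)) * (nests - best), -bound, bound)
    new_fit = np.array([fitness(n) for n in new])
    better = new_fit < fit
    nests[better], fit[better] = new[better], new_fit[better]
    # Abandon a fraction of the worst nests and rebuild them randomly.
    worst = fit.argsort()[-int(0.25 * n_nests):]
    nests[worst] = rng.uniform(-bound, bound, (len(worst), dim))
    fit[worst] = [fitness(n) for n in nests[worst]]

print("best displacement vector:", nests[fit.argmin()], "fitness:", fit.min())
```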

Keywords: mesh movement, aerodynamic shape optimization, cuckoo search, shape parameterisation

Procedia PDF Downloads 343
1154 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Human action recognition modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there is a considerable gap and challenges between images and videos, such as brightness, motion variation, and random clutters. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection, human and outer shape detection techniques. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and sequence representation. Then, we applied a greedy, randomized adaptive search procedure for data optimization and dimension reduction, and for classification, we used a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view football datasets. Our HMR framework significantly outperforms the other state-of-the-art approaches and achieves a better recognition rate of 91% and 89.6% over the AAMAZ and KTH multi-view football datasets, respectively.
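
A compact sketch of the later stages of such a pipeline: local binary pattern histograms as features, a greedy randomized (GRASP-like) feature subset construction, and a random forest classifier on synthetic stand-in frames. This is not the authors' HMR framework; scikit-image and scikit-learn availability, the frame data, and all parameter values are assumptions.

```python
# Hypothetical sketch: LBP histograms per frame, GRASP-style feature selection, random forest.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(200, 64, 64)).astype(np.uint8)   # stand-in image frames
labels = rng.integers(0, 4, size=200)                                 # stand-in action classes

def lbp_histogram(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(f) for f in frames])

def grasp_select(X, y, k=5, candidates=3, iters=5):
    # Greedy randomized construction: at each step pick randomly among the top
    # `candidates` features ranked by cross-validated score when added to the subset.
    best_subset, best_score = None, -np.inf
    for _ in range(iters):
        subset = []
        while len(subset) < k:
            remaining = [j for j in range(X.shape[1]) if j not in subset]
            scores = [(cross_val_score(RandomForestClassifier(25, random_state=0),
                                       X[:, subset + [j]], y, cv=3).mean(), j)
                      for j in remaining]
            scores.sort(reverse=True)
            subset.append(scores[rng.integers(0, min(candidates, len(scores)))][1])
        score = cross_val_score(RandomForestClassifier(200, random_state=0),
                                X[:, subset], y, cv=3).mean()
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score

subset, acc = grasp_select(X, labels)
print("selected feature indices:", subset, "cv accuracy:", acc)
```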

Keywords: computer vision, human motion analysis, random forest, machine learning

Procedia PDF Downloads 49
1153 Information Theoretic Approach for Beamforming in Wireless Communications

Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif

Abstract:

Beamforming is a signal processing technique extensively utilized in wireless communications and radar for intensifying a desired signal and minimizing interference through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array, to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, Mutual Information (MI) extrema are evaluated through an energy-constrained objective function, which is based on a-priori information about the interference source and the desired array factor. Signal-to-Interference-plus-Noise Ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time and spatial selectivity in an energy-constrained optimization problem. The employed method has lower computational complexity, as shown through a comparative analysis with conventional methods. MI-based beamforming enhances signal integrity in degraded environments while reducing computational intricacy and correlating the key performance indicators.
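
A small numerical sketch of interference-aware weight computation for a uniform linear array using the familiar minimum-variance distortionless-response form w ∝ R⁻¹a, included only to illustrate the weight-vector and SINR quantities discussed above; the paper's mutual-information objective is not reproduced here, and the array geometry, angles, and interference power are illustrative assumptions.

```python
# Hypothetical sketch: MVDR-style weights for a uniform linear array, suppressing an
# interferer while steering toward the desired direction (a stand-in for the MI-based
# optimisation described in the abstract).
import numpy as np

def steering(n, d, theta_deg):
    # Narrowband steering vector for an n-element ULA with spacing d (in wavelengths).
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * d * np.arange(n) * np.sin(theta))

n, d = 8, 0.5
a_sig = steering(n, d, 10.0)          # desired direction
a_int = steering(n, d, -40.0)         # interference direction

# Interference-plus-noise covariance (assumed known a priori in this toy example).
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(n)

w = np.linalg.solve(R, a_sig)
w /= a_sig.conj() @ w                 # distortionless response toward the desired signal

def sinr(w, a_sig, R, p_sig=1.0):
    return float(p_sig * np.abs(w.conj() @ a_sig) ** 2 /
                 np.real(w.conj() @ R @ w))

print("SINR (dB):", 10 * np.log10(sinr(w, a_sig, R)))
print("gain toward interferer (dB):", 20 * np.log10(np.abs(w.conj() @ a_int)))
```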

Keywords: beamforming, interference, mutual information, wireless communications

Procedia PDF Downloads 283
1152 The Potential of Sentiment Analysis to Categorize Social Media Comments Using German Libraries

Authors: Felix Boehnisch, Alexander Lutz

Abstract:

Based on the number of users and the amount of content posted daily, Facebook is considered the largest social network in the world. This content includes images and text posts from companies as well as private persons, which are in turn commented on by other users. However, it can be difficult for companies to keep track of all the posts and the reactions to them, especially when there are several posts a day that attract hundreds to thousands of comments. To facilitate this, the following paper deals with possible applications of sentiment analysis to social media comments in order to support work in social media marketing. In a first step, post comments were divided into positive and negative by a subjective rating; then the same comments were scored for polarity by the two German Python libraries TextBlobDE and SentiWS and grouped into positive, negative, or neutral. As a control, the subjective classifications were compared with the machine-generated ones using a confusion matrix, and relevant quality criteria were determined. The accuracy of both libraries was modest, at 60% to 66%. However, many words and sentences were not evaluated at all, so there seems to be room for optimization to obtain more accurate results. In future studies, the use of these specific German libraries can be optimized to gain better insights, either by applying them to more strictly cleaned data or by adding a sentiment value for emojis, which had been removed from the comments in advance because they are not contained in the libraries.
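
A minimal sketch of the polarity-scoring and evaluation step with the textblob-de library and scikit-learn; the example comments, the manual labels, and the zero neutral threshold are illustrative, and the parallel SentiWS pass is omitted here.

```python
# Hypothetical sketch: score German comments with TextBlobDE and compare the machine
# labels against manual annotations via a confusion matrix and accuracy.
from textblob_de import TextBlobDE
from sklearn.metrics import confusion_matrix, accuracy_score

comments = [
    "Das Produkt ist wirklich großartig!",
    "Leider eine totale Enttäuschung.",
    "Lieferung war okay.",
]
manual = ["positive", "negative", "neutral"]        # subjective annotations

def label(text, threshold=0.0):
    polarity = TextBlobDE(text).sentiment.polarity   # in [-1, 1]; 0.0 if no lexicon match
    if polarity > threshold:
        return "positive"
    if polarity < -threshold:
        return "negative"
    return "neutral"

machine = [label(c) for c in comments]
classes = ["positive", "neutral", "negative"]
print(confusion_matrix(manual, machine, labels=classes))
print("accuracy:", accuracy_score(manual, machine))
```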

Keywords: Facebook, German libraries, polarity, sentiment analysis, social media comments

Procedia PDF Downloads 185
1151 Process Optimization of Electrospun Fish Sarcoplasmic Protein Based Nanofibers

Authors: Sena Su, Burak Ozbek, Yesim M. Sahin, Sevil Yucel, Dilek Kazan, Faik N. Oktar, Nazmi Ekren, Oguzhan Gunduz

Abstract:

In recent years, protein-, lipid- or polysaccharide-based polymers have been used to develop biodegradable materials, and their chemical nature determines the physical properties of the resulting films. Among these polymers, proteins from different sources have been extensively employed because of their relative abundance, film-forming ability, and nutritional qualities. In this study, biodegradable composite nanofiber films based on fish sarcoplasmic protein (FSP) were prepared via the electrospinning technique. Biodegradable polycaprolactone (PCL) was blended with the FSP to obtain hybrid FSP/PCL nanofiber mats with desirable physical properties. Mixed solutions of FSP and PCL were produced at different concentrations, and their density, viscosity, electrical conductivity and surface tension were measured. The mechanical properties of the electrospun nanofibers were evaluated, and the morphology of the composite nanofibers was observed using scanning electron microscopy (SEM). Moreover, Fourier transform infrared spectroscopy (FTIR) was used to analyze the chemical composition of the composite nanofibers. This study revealed that FSP-based nanofibers have the potential to be used for different applications such as biodegradable packaging, drug delivery, and wound dressing.

Keywords: edible film, electrospinning, fish sarcoplasmic protein, nanofiber

Procedia PDF Downloads 298
1150 Safety and Feasibility of Distal Radial Balloon Aortic Valvuloplasty - The DR-BAV Study

Authors: Alexandru Achim, Tamás Szűcsborus, Viktor Sasi, Ferenc Nagy, Zoltán Jambrik, Attila Nemes, Albert Varga, Călin Homorodean, Olivier F. Bertrand, Zoltán Ruzsa

Abstract:

Aim: Our study aimed to establish the safety and technical success of distal radial access for balloon aortic valvuloplasty (DR-BAV). The secondary objective was to determine the effectiveness and appropriate role of DR-BAV over a half-year follow-up. Methods: Clinical and angiographic data from 32 consecutive patients with symptomatic aortic stenosis were evaluated in a prospective single-center pilot study. Between 2020 and 2021, the patients were treated utilizing dual distal radial access with 6-10F compatible balloons. The efficacy endpoint was divided into technical success (successful valvuloplasty balloon inflation at the aortic valve and absence of intra- or periprocedural major complications), hemodynamic success (a reduction of the mean invasive gradient >30%), and clinical success (an improvement of at least one clinical category in the NYHA classification). The safety endpoints were vascular complications (major and minor Valve Academic Research Consortium (VARC)-2 bleeding, diminished or lost arterial pulse, or the presence of any pseudo-aneurysm or arteriovenous fistula during clinical follow-up) and major adverse events, MAEs (the composite of death, stroke, myocardial infarction, and urgent major aortic valve replacement or implantation during the hospital stay and/or at one-month follow-up). Results: 32 patients (40% male, mean age 80 ± 8.5 years) with severe aortic valve stenosis were included in the study and 4 patients were excluded. Technical success was achieved in all patients (100%). Hemodynamic success was achieved in 30 patients (93.75%). Invasive maximum and mean gradients were reduced from 73±22 mm Hg and 49±22 mm Hg to 49±19 mm Hg and 20±13 mm Hg, respectively (p < .001). Clinical success was achieved in 29 patients (90.6%). In total, no major adverse cardiac or cerebrovascular events or vascular complications (according to VARC-2 criteria) occurred during the intervention. All-cause death at 6 months was 12%. Conclusion: According to our study, dual distal radial artery access is a safe and effective option for balloon aortic valvuloplasty in patients with severe aortic valve stenosis and can be performed in all patients with sufficient lumen diameter. Future randomized studies are warranted to investigate whether this technique is superior to other approaches.

Keywords: mean invasive gradient, distal radial access for balloon aortic valvuloplasty (DR-BAV), aortic valve stenosis, pseudo-aneurysm, arteriovenous fistula, valve academic research consortium (VARC)-2

Procedia PDF Downloads 96
1149 Reducing Support Structures in Design for Additive Manufacturing: A Neural Networks Approach

Authors: Olivia Borgue, Massimo Panarotto, Ola Isaksson

Abstract:

This article presents a neural networks-based strategy for reducing the need for support structures when designing for additive manufacturing (AM). Additive manufacturing is a relatively new and immature industrial technology, and the information needed to make confident decisions when designing for AM is limited. This lack of information especially impacts the early stages of engineering design; for instance, it is difficult to actively consider the support structures needed for manufacturing a part. This difficulty is related to the challenge of designing a product geometry that accounts for customer requirements, manufacturing constraints and minimization of support structures. The approach presented in this article proposes an automated geometry modification technique for reducing the use of support structures while designing for AM. The strategy starts with a neural network-based shape recognition step to achieve product classification, using an STL file of the product as input. Based on the classification, an automatic part geometry modification implemented in MATLAB is performed. At the end of the process, the strategy presents different geometry modification alternatives depending on the type of product to be designed. The geometry alternatives are then evaluated using a QFD-like decision support tool.
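
A simplified sketch of the classification front end only: reading an STL mesh, computing a few crude geometric descriptors, and feeding them to a small neural-network classifier. The numpy-stl dependency, the descriptor choice, the training data, the class labels, and the "part.stl" path are all placeholders, not the authors' pipeline.

```python
# Hypothetical sketch: crude geometric descriptors from an STL mesh feeding a small
# neural-network classifier that decides which geometry-modification rule to apply.
import numpy as np
from stl import mesh                      # numpy-stl
from sklearn.neural_network import MLPClassifier

def descriptors(stl_path):
    m = mesh.Mesh.from_file(stl_path)     # triangles as an (n, 3, 3) array in m.vectors
    pts = m.vectors.reshape(-1, 3)
    extent = pts.max(axis=0) - pts.min(axis=0)
    # Per-triangle areas from the cross product of two edge vectors.
    e1 = m.vectors[:, 1] - m.vectors[:, 0]
    e2 = m.vectors[:, 2] - m.vectors[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    return np.array([
        len(m.vectors),                   # facet count
        areas.sum(),                      # total surface area
        extent[0] / extent[2],            # two bounding-box aspect ratios
        extent[1] / extent[2],
    ])

# Placeholder training set: descriptor vectors and product-class labels would come from
# a library of previously classified parts.
X_train = np.random.rand(40, 4)
y_train = np.random.randint(0, 3, 40)     # e.g. bracket / housing / shaft-like (illustrative)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_train, y_train)
# part_class = clf.predict([descriptors("part.stl")])[0]   # "part.stl" is a hypothetical file
```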

Keywords: additive manufacturing, engineering design, geometry modification optimization, neural networks

Procedia PDF Downloads 258
1148 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya

Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban

Abstract:

The Himalayan glaciers are a potential source of perennial water supply to Asia's major river systems such as the Ganga, Brahmaputra and the Indus. In order to improve our understanding of the sources of precipitation and the hydrograph components of the interior Himalayan glaciers, it is important to decipher the moisture sources and their contribution to the glaciers in this river system. To this end, we conducted an extensive pilot study on the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contributions of rain, groundwater and spring water are negligible based on field studies and the available literature. To validate the results obtained from hydrograph separation using the above method, snow and ice melt ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD values in rain samples range from -5.3‰ to -20.8‰ and -31.7‰ to -148.4‰, respectively. It is noteworthy that the rain samples show enriched values in the early season (July-August) and become progressively depleted by the end of the season (September), which could be due to the 'amount effect'. Similarly, old snow samples show enriched isotopic values compared to fresh snow, which could be because of sublimation processes operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream they range from -12.7‰ to -16.2‰ and -82.9‰ to -112.7‰, respectively. The mean deuterium excess (d-excess) value of all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances. Our detailed estimates of the hydrograph separation of Sutri Dhaka meltwater using isotope hydrograph separation and glaciological field methods agree within their uncertainty; the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the sources of moisture, the mechanisms controlling the isotopic characteristics of Sutri Dhaka glacier water, and the snow and ice melt components in the Chandra basin, Western Himalaya.
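
The two-component separation reduces to a simple isotopic mass-balance mixing equation; the sketch below illustrates it with placeholder δ¹⁸O values, not the measured Sutri Dhaka data.

```python
# Hypothetical sketch: two-component isotope hydrograph separation.
# Mass balance: delta_stream = f_ice * delta_ice + (1 - f_ice) * delta_snow
# =>  f_ice = (delta_stream - delta_snow) / (delta_ice - delta_snow)

def ice_melt_fraction(delta_stream, delta_ice, delta_snow):
    return (delta_stream - delta_snow) / (delta_ice - delta_snow)

# Placeholder delta-18O values (in per mil), not the measured Sutri Dhaka data.
d18O_stream, d18O_ice, d18O_snow = -14.5, -13.5, -18.0
f_ice = ice_melt_fraction(d18O_stream, d18O_ice, d18O_snow)
print(f"ice melt fraction: {f_ice:.2f}, snow melt fraction: {1 - f_ice:.2f}")
```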

Keywords: D-excess, hydrograph separation, Sutri Dhaka, stable water isotope, western Himalaya

Procedia PDF Downloads 155
1147 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.
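
A small sketch of one generic way to exploit the Manhattan assumption: snapping noisy dominant scene directions to the nearest orthonormal frame (orthogonal Procrustes via SVD), which can then anchor a drifted rotation estimate. This is a generic illustration under stated assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: snap noisy dominant scene directions to the nearest Manhattan
# (orthonormal, right-handed) frame; such frames can anchor the rotation estimate.
import numpy as np

def nearest_rotation(D):
    # Orthogonal Procrustes: the rotation closest (in Frobenius norm) to matrix D.
    U, _, Vt = np.linalg.svd(D)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # enforce a right-handed frame
        U[:, -1] *= -1
        R = U @ Vt
    return R

rng = np.random.default_rng(0)
# Columns: noisy estimates of the three dominant (wall/floor) directions in camera coordinates.
D = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
manhattan_frame = nearest_rotation(D)

# When a later frame again conforms to the Manhattan assumption, the freshly detected
# frame can replace (or be fused with) the drifted rotation estimate, removing the
# accumulated orientation error.
print(np.round(manhattan_frame, 3))
```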

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 66
1146 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 146
1145 Transformative Digital Trends in Supply Chain Management: The Role of Artificial Intelligence

Authors: Srinivas Vangari

Abstract:

With technological advancements around the globe, artificial intelligence (AI) has boosted supply chain management (SCM) by improving efficiency, responsiveness, and promptness. AI-based SCM provides comprehensive insights into consumer behavior in dynamic market situations and trends, foreseeing demand accurately. It reduces overproduction and stockouts while optimizing production planning and streamlining operations. Consequently, AI-driven SCM produces a customer-centric supply chain with resilient and robust operations. To examine the transformative significance of AI in SCM, this study focuses on improving efficiency in SCM through the integration of AI, understanding production demand, accurate forecasting, and targeted production planning. The study employs a mixed-method approach and expert survey insights to explore the challenges and benefits of AI applications in SCM. Further, a case analysis is incorporated to identify best practices and potential challenges, along with the critical success factors of AI-driven SCM. Key findings of the study indicate the significant advantages of AI-integrated SCM, including optimized inventory management, improved transportation and logistics management, cost optimization, and advanced decision-making, positioning AI as a pivotal force in the future of supply chain management.

Keywords: artificial intelligence, supply chain management, accurate forecast, accurate planning of production, understanding demand

Procedia PDF Downloads 27
1144 Effect of Drought Stress on Yield and Yield Components of Maize Cultivars in Golestan Province

Authors: Mojtaba Esmaeilzad Limoudehi, Ebrahim Amiri

Abstract:

Water scarcity is now one of the leading challenges for human societies. In this regard, recognizing the relationship between soil, water, plant growth, and plant response to stress is very significant. In this paper, considering the importance of drought stress and the role of choosing suitable cultivars for drought resistance, a split-plot experiment using early-, intermediate- and late-maturing cultivars was carried out in a field at Katul, Golestan province, during the 2015 and 2016 growing seasons. The main factor was the irrigation interval at four levels: 7, 14, 21, and 28 days. The subplot factor comprised six maize cultivars (two early-maturing, two intermediate-maturing, and two late-maturing cultivars). The analysis of variance revealed that the irrigation interval and cultivar treatments have significant effects on the number of grains per ear, the number of rows per ear, the number of grains per row, the 1000-grain weight, grain yield, and biomass yield. The interaction of these two factors also had a significant effect on the mentioned attributes. The best grain yield, 12,301 kg/ha, was achieved with the 7-day irrigation interval and the late-maturing maize cultivars.

Keywords: corn, growth period, optimization, stress

Procedia PDF Downloads 147
1143 Modeling of Ductile Fracture Using Stress-Modified Critical Strain Criterion for Typical Pressure Vessel Steel

Authors: Carlos Cuenca, Diego Sarzosa

Abstract:

Ductile fracture occurs by the mechanism of void nucleation, void growth and coalescence. Potential initiation sites are second-phase particles or non-metallic inclusions. Modelling ductile damage at the microscopic level is a very difficult and complex task for engineers. Therefore, conservative predictions of ductile failure using simple models are necessary during the design and optimization of critical structures such as pressure vessels and pipelines. Nowadays, it is well known that the initiation phase is strongly influenced by the stress triaxiality and plastic deformation at the microscopic level. Thus, a simple model used to study ductile failure under multiaxial stress conditions is the Stress-Modified Critical Strain (SMCS) approach. Ductile rupture has been studied for a structural steel under different stress triaxiality conditions using the SMCS method. Experimental tests were carried out on notched round bars to characterize the relation between stress triaxiality and equivalent plastic strain. After calibration of the plasticity and damage properties, predictions are made for low-constraint bending specimens with and without side grooves. The evolution of the stress/strain fields is compared between the different geometries. Advantages and disadvantages of the SMCS methodology are discussed.
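
A small sketch of how an SMCS-type criterion is typically evaluated: the critical strain is an exponential function of stress triaxiality calibrated from notched-bar tests, and damage is flagged where the accumulated equivalent plastic strain exceeds it. The calibration constant and field values below are placeholders, not the calibrated values for this steel.

```python
# Hypothetical sketch of a stress-modified critical strain (SMCS) check.
# Critical strain model:  eps_f = alpha * exp(-1.5 * sigma_m / sigma_e)
# where sigma_m / sigma_e is the stress triaxiality; failure is predicted where the
# accumulated equivalent plastic strain reaches eps_f.
import numpy as np

ALPHA = 2.0          # placeholder calibration constant from notched round-bar tests

def critical_strain(triaxiality, alpha=ALPHA):
    return alpha * np.exp(-1.5 * triaxiality)

def damage_index(eq_plastic_strain, triaxiality):
    # >= 1.0 means the SMCS criterion predicts ductile crack initiation at that point.
    return eq_plastic_strain / critical_strain(triaxiality)

# Example: element-wise fields extracted from a finite-element solution (placeholder values).
triax = np.array([0.33, 0.8, 1.2, 2.0])
peeq = np.array([0.50, 0.40, 0.35, 0.30])
print(np.round(damage_index(peeq, triax), 2))
```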

Keywords: damage, SMCS, SEB, steel, failure

Procedia PDF Downloads 301
1142 Comparative Analysis of Various Waste Oils for Biodiesel Production

Authors: Olusegun Ayodeji Olagunju, Christine Tyreesa Pillay

Abstract:

Biodiesel from waste sources is regarded as an economical and viable alternative to depleting fossil fuels. In this work, biodiesel was produced by the transesterification method from three different sources of waste cooking oil, all vegetable-based and collected from cafeterias. The free fatty acid content (% FFA) of the feedstocks was determined successfully by titration; the results for sources 1, 2, and 3 were 0.86%, 0.54% and 0.20%, respectively. The three process variables considered were temperature, reaction time, and catalyst concentration, within the following ranges: 50 °C – 70 °C, 30 min – 90 min, and 0.5% – 1.5% catalyst. The produced biodiesel was characterized using ASTM standard methods for biodiesel property testing to determine the fuel properties, including kinematic viscosity, specific gravity, flash point, pour point, cloud point, and acid number. The results indicate that the biodiesel yield from source 3 was greater than that from the other sources, and all of the produced biodiesel fuel properties are within the ASTM D6751 biodiesel fuel specifications. The optimum yields of 98.76%, 96.4%, and 94.53% were obtained from source 3, source 2, and source 1, respectively, at the optimum operating conditions of 65 °C temperature, 90 minutes reaction time, and 0.5 wt% potassium hydroxide.
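
The %FFA determination by titration reduces to a simple calculation; the sketch below uses the common oleic-acid convention, with placeholder titration readings rather than the values measured in this work.

```python
# Hypothetical sketch: % free fatty acid (as oleic acid) from an alkali titration.
# %FFA = (V_titrant [mL] * N_titrant [eq/L] * 28.2) / sample_mass [g]
# The factor 28.2 comes from the molar mass of oleic acid (282 g/mol) divided by 10.

def ffa_percent(volume_ml, normality, sample_mass_g):
    return (volume_ml * normality * 28.2) / sample_mass_g

# Placeholder readings for three waste-oil sources (not the measured data).
for source, (v, n, m) in {"source 1": (3.0, 0.1, 10.0),
                          "source 2": (1.9, 0.1, 10.0),
                          "source 3": (0.7, 0.1, 10.0)}.items():
    print(source, round(ffa_percent(v, n, m), 2), "% FFA")
```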

Keywords: waste cooking oil, biodiesel, free fatty acid content, potassium hydroxide catalyst, optimization analysis

Procedia PDF Downloads 81
1141 Numerical Investigation of the Evaporation and Mixing of UWS in a Diesel Exhaust Pipe

Authors: Tae Hyun Ahn, Gyo Woo Lee, Man Young Kim

Abstract:

Because of their high thermal efficiency and low CO2 emissions, diesel engines are widely used in many industrial fields, although they emit large amounts of PM and NOx, which negatively affect both human health and the environment. NOx regulations for diesel engines, however, are being strengthened, and it is impossible to meet the emission standards without NOx reduction devices such as SCR (Selective Catalytic Reduction), LNC (Lean NOx Catalyst), and LNT (Lean NOx Trap). Among the NOx reduction devices, the urea-SCR system is known as the most stable and efficient method of solving the problem of NOx emission. However, this device has some issues associated with the ammonia slip phenomenon, which is caused by a shortage of evaporation and thermolysis time and makes it difficult to achieve a uniform distribution of the injected urea in front of the monolith. Therefore, this study has focused on enhancing the mixing between urea and exhaust gases to improve the efficiency of the SCR catalyst equipped in the catalytic muffler, by changing the inlet gas temperature and spray conditions to improve the spray uniformity of the urea water solution. It is found that various parameters such as the inlet gas temperature and the injector and injection angles significantly affect the evaporation and mixing of the urea water solution with the exhaust gases, and therefore optimization of these parameters is required.

Keywords: UWS (Urea-Water-Solution), selective catalytic reduction (SCR), evaporation, thermolysis, injection

Procedia PDF Downloads 400
1140 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development

Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo

Abstract:

Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms that are several hundred meters in length is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of the learning results from two oil spill response pilot courses in which maritime navigational bridge simulators, equipped with an oil spill functionality module, were used to train oil spill response authorities. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies and complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on oil spill behaviour in reaction to the response measures.

Keywords: maritime training, oil spill response, simulation, vessel manoeuvring

Procedia PDF Downloads 176
1139 Sustainable Hydrogen Generation via Gasification of Pig Hair Biowaste with NiO/Al₂O₃ Catalysts

Authors: Jamshid Hussain, Kuen Song Lin

Abstract:

Over one thousand tons of pig hair biowaste (PHB) are produced yearly in Taiwan. The improper disposal of PHB can have a negative impact on the environment and consequently contribute to the spread of diseases, so its treatment has become a major environmental and economic challenge. Innovative treatments must be developed because of the heavy metal and sulfur content of PHB. Like most organic materials, PHB is composed of many organic volatiles that contain large amounts of hydrogen. Hydrogen gas can be effectively produced by the catalytic gasification of PHB in a laboratory-scale fixed-bed gasifier, employing a 15 wt% NiO/Al₂O₃ catalyst at 753–913 K. The derived kinetic parameters were obtained and refined using simulation calculations. FE–SEM microphotographs showed that the NiO/Al₂O₃ catalyst particles are spherical or irregularly shaped with diameters of 10–20 nm. HR–TEM showed that the fresh Ni particles were evenly dispersed and uniform in the microstructure of the Al₂O₃ support. The sizes of the NiO nanoparticles were vital in determining catalyst activity. The pre-edge XANES spectra of the NiO/Al₂O₃ catalysts exhibited weak absorbance for the 1s to 3d transition, which is forbidden by the selection rule for an ideal octahedral symmetry. Similarly, the populations of Ni(II) and Ni(0) on the Al₂O₃ support are proportional to the strength of the 1s to 4pxy transition. The weak shoulder at 8329–8334 eV and a strong feature at 8345–8353 eV were ascribed to the 1s to 4pxy transition, which suggested the presence of NiO species on the Al₂O₃ support during PHB catalytic gasification. As determined by the XANES analyses, Ni(II)→Ni(0) reduction was mostly observed; the oxidation of PHB on the NiO/Al₂O₃ surface may have resulted in Ni(0) and the formation of tar during the gasification process. The EXAFS spectra revealed Ni atoms with Ni–Ni and Ni–O bonds. The Ni–O bonding proved that the produced syngas was unable to reduce NiO to Ni(0) completely, and the weakness of the Ni–Ni bonds may have been caused by the highly dispersed Ni in the Al₂O₃ support. The central Ni atoms have Ni–O (2.01 Å) and Ni–Ni (2.34 Å) bond distances in the fresh NiO/Al₂O₃ catalyst. The PHB was converted into hydrogen-rich syngas (CO + H₂, >89.8% dry basis). When PHB (250 kg h⁻¹) was catalytically gasified at 753–913 K, syngas was produced with approximately 5.45 × 10⁵ kcal h⁻¹ of heat recovery at 76.5%–83.5% cold gas efficiency. A simulation of pilot-scale PHB catalytic gasification demonstrated that the system could provide hydrogen (purity > 99.99%) and generate electricity for a 100 kW internal combustion engine and a 175 kW proton exchange membrane fuel cell (PEMFC). The projected payback for a PHB catalytic gasification plant with a capacity of 10 or 20 TPD (tons per day) was around 3.2 or 2.5 years, respectively.

Keywords: pig hair biowaste, catalytic gasification, hydrogen production, PEMFC, resource recovery

Procedia PDF Downloads 24
1138 Optimizing Scribe Resourcing to Improve Hospitalist Workloads

Authors: Ahmed Hamzi, Bryan Norman

Abstract:

Having scribes help document patient records in electronic health record systems can improve hospitalists' productivity, but hospitals need to determine the optimal number of scribes to hire to maximize scribe cost-effectiveness. Scribe attendance uncertainty due to planned and unplanned absences is a primary challenge. This paper presents simulation and analytical models to determine the optimal number of scribes for a hospital to hire. Scribe staffing practices vary from one context to another, so different staffing scenarios are considered: scenarios in which extra attending scribes do or do not provide additional value, and scenarios that utilize on-call scribes to fill in for potentially absent scribes. These staffing scenarios are assessed for scribe revenue ratios (the ratio of the value of a scribe relative to scribe costs) ranging from 100% to 300%. The optimal solution depends on the absenteeism rate, the revenue ratio, and the desired service level. The analytical model obtains solutions more easily and faster than the simulation model, but the simulation model is more accurate. Therefore, the analytical model's solutions are compared with the simulation model's solutions with regard to both the number of scribes hired and cost-effectiveness. Additionally, an Excel tool has been developed to help decision-makers easily obtain solutions using the analytical model.
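
A toy Monte-Carlo sketch of the staffing question: given an absenteeism rate and a revenue ratio, simulate daily attendance for each candidate number of hired scribes, estimate expected profit and service level, and pick the most cost-effective headcount. The parameter values and the no-extra-value scenario are illustrative assumptions, not the paper's models.

```python
# Hypothetical sketch: Monte-Carlo search for the number of scribes to hire when each
# hired scribe is absent independently with some probability and only `demand` scribes
# per day add value (extra attendees are paid but idle).
import numpy as np

rng = np.random.default_rng(0)

def expected_daily_profit(n_hired, demand=10, p_absent=0.1,
                          revenue_per_scribe=1.5, cost_per_scribe=1.0, n_days=20000):
    present = rng.binomial(n_hired, 1.0 - p_absent, size=n_days)
    useful = np.minimum(present, demand)          # scenario where extras add no value
    return (revenue_per_scribe * useful - cost_per_scribe * n_hired).mean()

def service_level(n_hired, demand=10, p_absent=0.1, n_days=20000):
    present = rng.binomial(n_hired, 1.0 - p_absent, size=n_days)
    return (present >= demand).mean()             # fraction of days fully covered

candidates = range(10, 16)
best = max(candidates, key=expected_daily_profit)
for n in candidates:
    print(n, "scribes: profit", round(expected_daily_profit(n), 2),
          "service level", round(service_level(n), 3))
print("most cost-effective:", best)
```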

Keywords: hospitalists, workload, optimization cost, economic analysis

Procedia PDF Downloads 50
1137 Optimization and Evaluation of the Oil Extraction Process Using Supercritical CO2 and Co-solvents from Spent Coffee Ground

Authors: Sergio Clemente, Carla Bartolomé, Miriam Lorenzo, Sergio Valverde

Abstract:

The generation of urban waste is a consequence of human activity, and the organic fraction is one of the major components of municipal waste. The development of new materials and energy recovery technologies is becoming a thriving topic throughout Europe. ITENE is working to increase the circularity of coffee grounds from West Macedonia. Although these residues have a high content of carbohydrates, fatty acids and polyphenols, they are usually valorized energetically or discarded, losing all of these compounds of interest. ITENE is studying the extraction of oils from spent coffee grounds using supercritical CO2, as it is a more sustainable method and does not destroy the most valuable compounds. In the HOOP project, the extraction process is optimized to maximize oil production, and the possibility of using co-solvents together with supercritical CO2 is studied. The production of fatty acids by scCO2 extraction is optimized and then compared with other conventional extraction methods such as hexane extraction and the Folch method. The conditions for scCO2 were temperatures of 313.15 K, 323.15 K and 333.15 K, pressures from 150 bar to 200 bar, and extraction times between 1 and 3 h. In addition, a complete characterization of the resulting lipid fraction is performed to evaluate its fatty acid content and profile, as well as its antioxidant properties, lipid oxidation, total phenol content and moisture.

Keywords: supercritical CO2, coffee, valorization, extraction

Procedia PDF Downloads 12
1136 The Effect of Damping Treatment for Noise Control on Offshore Platforms Using Statistical Energy Analysis

Authors: Ji Xi, Cheng Song Chin, Ehsan Mesbahi

Abstract:

Structure-borne noise is an important aspect of the offshore platform sound field. It can be generated directly by the mechanical forces induced by vibrating machinery, indirectly by the excitation of the structure, or by excitation from incident airborne noise. Therefore, limiting the transmission of vibration energy throughout the offshore platform is the key to controlling structure-borne noise. This is usually done by introducing damping treatment to the steel structures. Two types of damping treatment used on board are presented. By conducting a statistical energy analysis (SEA) simulation of a jack-up rig, the noise levels in the source room, the neighboring rooms, and the remote living quarter cabins are compared before and after the damping treatments are applied. The results demonstrate that, in the rooms neighboring the source and in the living quarter area, there is a significant noise reduction with the damping treatment applied, whereas in the source room, where airborne sound predominates over structure-borne sound, the impact is not obvious. The subsequent optimization of the damping treatment design for the offshore platform can then be made, enabling acoustic professionals to implement noise control during the design stage for the hearing protection and comfort of offshore crews.

Keywords: statistical energy analysis, damping treatment, noise control, offshore platform

Procedia PDF Downloads 557
1135 Silymarin Loaded Mesoporous Silica Nanoparticles: Preparation, Optimization, Pharmacodynamic and Oral Multi-Dose Safety Assessment

Authors: Sarah Nasr, Maha M. A. Nasra, Ossama Y. Abdallah

Abstract:

The present work aimed to prepare Silymarin-loaded MCM-41 type mesoporous silica nanoparticles (MSNs) and to assess the effect of the system's solubility enhancement on the pharmacodynamic performance of Silymarin as a hepatoprotective agent. MSNs, prepared by a soft-templating technique, were loaded with Silymarin and characterized for particle size, zeta potential, surface properties, DSC and XRPD. DSC and specific surface area data confirmed the deposition of Silymarin in an amorphous state in the MSNs' pores. In-vitro dissolution testing displayed an enhanced dissolution rate of Silymarin upon loading onto MSNs. High-dose acetaminophen was then used to inflict hepatic injury upon male albino Wistar rats simultaneously receiving either free Silymarin, Silymarin-loaded MSNs or blank MSNs. Plasma AST, ALT, albumin and total protein, as well as the liver homogenate content of TBARS and LDH, were assessed for all animal groups as measures of antioxidant drug action. The results showed a significant superiority of Silymarin-loaded MSNs over the free drug in almost all parameters, while prolonged administration of blank MSNs showed no evident toxicity in rats.

Keywords: mesoporous silica nanoparticles, safety, solubility enhancement, silymarin

Procedia PDF Downloads 336
1134 Single Pole-To-Earth Fault Detection and Location on the Tehran Railway System Using ICA and PSO Trained Neural Network

Authors: Masoud Safarishaal

Abstract:

Detecting the location of pole-to-earth faults is essential for the safe operation of the electrical system of a railroad. This paper aims to use a combination of evolutionary algorithms and neural networks to increase the accuracy of single pole-to-earth fault detection and location in the Tehran railroad power supply system. To this end, the Imperialist Competitive Algorithm (ICA) and Particle Swarm Optimization (PSO) are used to train the neural network in order to improve the accuracy and convergence of the learning process. Because of the system's nonlinearity, fault detection is an ideal application for the proposed method; the 600 Hz harmonic ripple method is used in this paper for fault detection. The substations were simulated considering various circuit feeding situations, the transformer, and the typical silicon-rectifier parameters of the Tehran metro. The data required for the network learning process were gathered from the simulation results. The value of the 600 Hz component changes with the location of a single pole-to-earth fault; therefore, the 600 Hz components are used as the inputs of the neural network, while the fault location is the output of the network. The simulation results show that the proposed methods can accurately predict the fault location.
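
A compact sketch of the PSO-trained network idea: a one-hidden-layer regressor whose weights are searched by plain particle swarm optimization rather than backpropagation, mapping a 600 Hz ripple feature to fault distance on synthetic data. The architecture, synthetic data, and swarm settings are illustrative assumptions, and the ICA variant is not shown.

```python
# Hypothetical sketch: train a tiny neural network with PSO (instead of backpropagation)
# to map a 600 Hz harmonic-ripple feature to the pole-to-earth fault distance.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: ripple amplitude decreases nonlinearly with fault distance (km).
dist = rng.uniform(0, 20, 300)
ripple = np.exp(-0.15 * dist) + 0.01 * rng.standard_normal(300)
X, y = ripple.reshape(-1, 1), dist

H = 8                                              # hidden neurons
n_w = 1 * H + H + H + 1                            # W1, b1, W2, b2 flattened

def predict(w, X):
    W1 = w[:H].reshape(1, H); b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1); b2 = w[-1]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Plain PSO over the flattened weight vector.
n_particles = 30
pos = rng.uniform(-1, 1, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("training MSE (km^2):", round(pbest_f.min(), 3))
```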

Keywords: single pole-to-earth fault, Tehran railway, ICA, PSO, artificial neural network

Procedia PDF Downloads 132
1133 A New Intelligent, Dynamic and Real Time Management System of Sewerage

Authors: R. Tlili Yaakoubi, H.Nakouri, O. Blanpain, S. Lallahem

Abstract:

The current tools for the real-time management of sewer systems are based on two pieces of software: weather forecasting software and hydraulic simulation software. The former is an important source of imprecision and uncertainty, while the latter imposes long decision time steps because of its computation time. As a result, the obtained outcomes generally differ from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the automatic control side rather than the hydrology side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), replacing weather forecasts by directly using real-time rainfall measurements. The aim is to reach a system in which decision-making is based on reliable data and error correction is continuous. A first set of control laws was built and tested with rainfalls of different return periods; the gains in discharged volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem that arose in the first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% in the total volume discharged to the natural environment and 65% in the number of discharge events.

Keywords: automation, optimization, paradigm, RTC

Procedia PDF Downloads 302
1132 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method

Authors: Atilla Bayram

Abstract:

This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. The manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. Each VGT module, with 3 degrees of freedom (DOF), is a planar parallel manipulator, and the operational planes of the two VGT modules are arranged orthogonally to each other. The manipulator also contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. Together, these three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, based on these equations, the inverse kinematics is solved via an optimization with joint limit avoidance. The dynamic equations are formed using the virtual work method. In order to test the performance of the redundant manipulator and the presented controllers, two different desired trajectories are followed using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm; in this method, the genetic algorithm is used only for fine tuning in the compensation of the trajectory tracking errors.
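
A generic sketch of redundancy resolution by optimization with joint-limit avoidance, using a toy planar 3R forward-kinematics model in place of the 7-DOF VGT hybrid described here; scipy availability, the link lengths, joint limits, and penalty weight are assumptions.

```python
# Hypothetical sketch: inverse kinematics of a redundant arm solved as an optimisation
# with a joint-limit-avoidance penalty (toy planar 3R chain, not the 7-DOF VGT hybrid).
import numpy as np
from scipy.optimize import minimize

L = np.array([0.4, 0.3, 0.2])                         # link lengths (m)
q_min, q_max = np.radians([-120, -120, -120]), np.radians([120, 120, 120])

def forward(q):
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def cost(q, x_des, w_limit=1e-2):
    track = np.sum((forward(q) - x_des) ** 2)
    # Quadratic penalty pushing each joint toward the middle of its range.
    mid, span = (q_min + q_max) / 2, (q_max - q_min)
    limit = np.sum(((q - mid) / span) ** 2)
    return track + w_limit * limit

x_des = np.array([0.5, 0.4])
res = minimize(cost, x0=np.zeros(3), args=(x_des,),
               bounds=list(zip(q_min, q_max)), method="L-BFGS-B")
print("joint angles (deg):", np.degrees(res.x).round(1),
      "reached:", forward(res.x).round(3))
```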

Keywords: computed force method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss

Procedia PDF Downloads 352
1131 Challenges and Opportunities for Implementing Integrated Project Delivery Method in Public Sector Construction

Authors: Ahsan Ahmed, Ming Lu, Syed Zaidi, Farhan Khan

Abstract:

The Integrated Project Delivery (IPD) method has been proposed as a solution to tackle complexity and fragmentation in the real world while addressing the construction industry's growing needs for productivity and sustainability. Although the private sector has taken the initiative in implementing IPD and has taken advantage of new technology such as building information modeling (BIM) in delivering projects, IPD remains less known and rarely used in public sector construction. The focus of this paper is on the use of IPD in public sector projects, potentially complemented by analytical functionalities for workface planning and construction-oriented design enabled by recent research advances in BIM. Experience and lessons learned from implementing IPD in the private sector and from BIM-based construction automation research would play a vital role in reducing barriers and eliminating issues in connection with project delivery in the public sector. The paper elaborates on the issues, challenges, contractual relationships and interactions throughout the planning, design and construction phases in the context of implementing IPD on construction projects in the public sector. A slab construction case is used as a 'sandbox' model to elaborate (1) the ideal way of communication, integration, and collaboration among all the parties involved in project delivery during planning and (2) the execution of projects using IPD principles along with optimization and simulation analyses.

Keywords: integrated project delivery, IPD, building information modeling, BIM

Procedia PDF Downloads 207
1130 Internet of Things Edge Device Power Modelling and Optimization Simulator

Authors: Cian O'Shea, Ross O'Halloran, Peter Haigh

Abstract:

Wireless Sensor Networks (WSNs) are Internet of Things (IoT) edge devices. They are becoming widely adopted in many industries, including health care, building energy management, and condition monitoring. As the scale of WSN deployments increases, the cost and complexity of battery replacement and disposal become more significant and in time may become a barrier to adoption. Harvesting ambient energy provides a pathway to reducing dependence on batteries and in the future may lead to autonomously powered sensors. This work describes a simulation tool that enables the user to predict the battery life of a wireless sensor that utilizes energy harvesting to supplement the battery power. To create this simulator, all aspects of a typical WSN edge device were modelled, including the sensors, transceiver, and microcontroller, as well as the energy source components (batteries, solar cells, thermoelectric generators (TEGs), supercapacitors and DC/DC converters). The tool allows the user to plug and play different pre-characterized devices as well as add user-defined devices. The goal of this simulation tool is to predict the lifetime of a device and the scope for extension using ambient energy sources.
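
A simplified sketch of the energy-budget bookkeeping such a simulator performs: sum the sleep, active, and transmit charge over a duty cycle, subtract the harvested contribution, and divide the battery capacity by the net daily drain. All device currents, durations, efficiencies, and harvest figures below are illustrative placeholders, not characterized hardware values.

```python
# Hypothetical sketch: first-order battery-life estimate for a WSN edge device with
# energy harvesting. All currents, durations and harvest figures are placeholders.
def battery_life_days(battery_mah,
                      sleep_ma, active_ma, tx_ma,
                      active_s, tx_s, period_s,
                      harvest_mwh_per_day=0.0, supply_v=3.0,
                      converter_eff=0.85):
    # Average current over one wake/sense/transmit cycle.
    sleep_s = period_s - active_s - tx_s
    avg_ma = (sleep_ma * sleep_s + active_ma * active_s + tx_ma * tx_s) / period_s
    load_mah_per_day = avg_ma * 24.0
    # Harvested energy converted to equivalent charge at the supply rail.
    harvest_mah_per_day = converter_eff * harvest_mwh_per_day / supply_v
    net = load_mah_per_day - harvest_mah_per_day
    return float("inf") if net <= 0 else battery_mah / net

# Example: sample every minute, 2 s sensing at 5 mA, 0.1 s radio burst at 20 mA,
# 2 uA sleep, 1000 mAh cell, ~5 mWh/day from a small indoor solar cell.
print(round(battery_life_days(1000, 0.002, 5, 20, 2, 0.1, 60,
                              harvest_mwh_per_day=5), 1), "days")
```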

Keywords: Wireless Sensor Network, IoT, edge device, simulation, solar cells, TEG, supercapacitor, energy harvesting

Procedia PDF Downloads 136
1129 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India

Authors: Sayantan Khanra, Rojers P. Joseph

Abstract:

Background and Significance: This study is aimed at analyzing the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study proposes to examine users' perceptions of e-government services and investigate the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted in this study is hierarchical regression analysis, which helps explore the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of the demographic variables on the willingness to use e-government services is examined. In the second step, the quality dimensions are used as inputs to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: Our study is in the data collection stage, in collaboration with a highly reliable, authentic and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a large sample of more than 10,000 random respondents is being approached. Data are being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire with input from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then a total of 15 indicators is used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale. If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey collect data on the demographic variables. An Indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions, a research model is prescribed as a starting point. A major outcome expected at the completion of the study is the development of a research model that helps to understand the relationship among the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which in turn would help to improve efficiency, convenience, engagement, and transparency in implementing these services.
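
A minimal sketch of the two-step hierarchical regression on synthetic data with statsmodels: demographics enter first, the quality dimensions second, and the increment in R² is the quantity of interest. The variable names, coefficients, and data are placeholders for the survey items, not the study's data.

```python
# Hypothetical sketch: two-step hierarchical regression. Step 1 enters demographic
# variables; step 2 adds service-quality dimensions; the R-squared increment shows the
# variance the quality dimensions explain beyond demographics. Synthetic placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "education": rng.integers(1, 5, n),
    "quality_reliability": rng.normal(3, 1, n),
    "quality_ease_of_use": rng.normal(3, 1, n),
})
df["willingness"] = (0.01 * df["age"] + 0.1 * df["education"]
                     + 0.5 * df["quality_reliability"] + 0.4 * df["quality_ease_of_use"]
                     + rng.normal(0, 1, n))

step1_vars = ["age", "education"]
step2_vars = step1_vars + ["quality_reliability", "quality_ease_of_use"]

m1 = sm.OLS(df["willingness"], sm.add_constant(df[step1_vars])).fit()
m2 = sm.OLS(df["willingness"], sm.add_constant(df[step2_vars])).fit()

print("R2 step 1 (demographics):", round(m1.rsquared, 3))
print("R2 step 2 (+ quality):   ", round(m2.rsquared, 3))
print("Delta R2:", round(m2.rsquared - m1.rsquared, 3))
print(m2.compare_f_test(m1))   # F-test for the improvement of the larger model
```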

Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions

Procedia PDF Downloads 273
1128 Reduction in Hot Metal Silicon through Statistical Analysis at G-Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, Santanu Mallick, Abhiram Jha, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The quality of hot metal at any blast furnace is judged by its silicon content. Lower hot metal silicon not only enhances process efficiency at steel melting shops but also reduces hot metal costs. The hot metal produced at G Blast Furnace, Tata Steel Jamshedpur, has a significantly higher Si content than that of benchmark blast furnaces, mainly because of raw material quality inferior to that used in the benchmark furnaces. With minimal control over raw material quality, the only option left to control hot metal Si is to optimize the furnace parameters. Therefore, in order to identify the levers for reducing hot metal Si, data mining was carried out and multiple regression models were developed. The statistical analysis revealed that slag B3 {(CaO+MgO)/SiO2}, slag alumina and hot metal temperature are the key controllable parameters affecting hot metal silicon. Contour plots were used to determine the optimum operating ranges of the levers identified through the statistical analysis. A trial plan was formulated to operate the relevant parameters at G Blast Furnace within the identified ranges to reduce hot metal silicon. This paper details the process followed and the subsequent 15% reduction in hot metal silicon at G Blast Furnace.
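
A small sketch of the statistical workflow described: fit a multiple regression of hot metal Si on slag B3, slag alumina, and hot metal temperature, then evaluate the fitted model over a grid (the contour-plot step) to locate a low-Si operating window. The data, coefficients, and variable ranges below are synthetic placeholders, not furnace data.

```python
# Hypothetical sketch: multiple regression of hot metal Si on slag B3, slag Al2O3 and
# hot metal temperature, then a grid evaluation (the "contour plot" step) to find the
# operating window that minimises predicted Si. Synthetic placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "slag_B3": rng.uniform(1.0, 1.3, n),
    "slag_Al2O3": rng.uniform(18, 24, n),
    "hm_temp": rng.uniform(1470, 1520, n),
})
df["hm_si"] = (3.5 - 1.2 * df["slag_B3"] + 0.02 * df["slag_Al2O3"]
               + 0.002 * (df["hm_temp"] - 1470) + rng.normal(0, 0.05, n))

model = smf.ols("hm_si ~ slag_B3 + slag_Al2O3 + hm_temp", data=df).fit()
print(model.params.round(4))

# Grid over two levers at fixed alumina: pick the cell with the lowest predicted Si.
b3, temp = np.meshgrid(np.linspace(1.0, 1.3, 31), np.linspace(1470, 1520, 26))
grid = pd.DataFrame({"slag_B3": b3.ravel(), "slag_Al2O3": 21.0, "hm_temp": temp.ravel()})
pred = model.predict(grid)
best = grid.iloc[pred.idxmin()]
print("lowest predicted Si at B3=%.2f, T=%.0f degC" % (best.slag_B3, best.hm_temp))
```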

Keywords: blast furnace, optimization, silicon, statistical tools

Procedia PDF Downloads 225
1127 The Relationship between the Competence Perception of Student and Graduate Nurses and Their Autonomy and Critical Thinking Disposition

Authors: Zülfiye Bıkmaz, Aytolan Yıldırım

Abstract:

This study was planned as a descriptive, regression-based study in order to determine the relationship between the competency levels of working nurses, the competency levels expected of nursing students, the critical thinking disposition of nurses, their perceived autonomy levels, and certain sociodemographic characteristics. It is also a methodological study with regard to the intercultural adaptation of the Nursing Competence Scale (NCS) in both working-nurse and student samples. The nurse sample consisted of 443 nurses who had been working at a university hospital for at least 6 months and who completed the questionnaires. The student group consisted of 543 third- and fourth-year nursing students from four public universities. The data collection tools consisted of a questionnaire prepared in order to capture the sociodemographic, economic, and personal characteristics of the participants, the 'Nursing Competence Scale', the 'Autonomy Subscale of the Sociotropy–Autonomy Scale', and the 'California Critical Thinking Disposition Inventory'. For data evaluation, descriptive statistics, nonparametric tests, Rasch analysis, and correlation and regression tests were used. The language validity of the NCS was established by translation and back-translation, and the content validity of the scale was established through expert opinion. The scale, in its final form, was then piloted with a group consisting of graduate and student nurses. The temporal stability of the test was assessed using the test–retest method, and the split-half method was also used to assess reliability. The Cronbach's alpha coefficient of the scale was found to be 0.980 for the nurse group and 0.986 for the student group. Statistically significant relationships were found between competence and critical thinking and variables such as age, gender, marital status, family structure, having had critical thinking training, education level, class of the students, service worked in, employment style and position, and employment duration. Statistically significant relationships were also found between autonomy and certain variables of the student group, such as year of study, employment status, decision-making style regarding oneself, total duration of employment, employment style, and education status.
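
A small sketch of the Cronbach's alpha calculation used to report the scale's internal consistency; the item-response matrix below is a random placeholder, not the study data.

```python
# Hypothetical sketch: Cronbach's alpha for a Likert-type scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)      # shape: (respondents, items)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                                   # shared trait
responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(200, 10))), 1, 5)
print(round(cronbach_alpha(responses), 3))
```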

Keywords: nurse, nursing student, competence, autonomy, critical thinking, Rasch analysis

Procedia PDF Downloads 399