Search results for: floor estimation algorithm
193 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark's constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to accurately define the earthquake excitation, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed by using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark's constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application, the MEIM shows no stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis with a smaller time step.
Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability
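A minimal sketch of the explicit half of such a mixed scheme: a central difference step for a single nonlinear isolator degree of freedom. The friction-pendulum-like restoring force, weight, radius, and friction coefficient below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

# Explicit central difference step for the nonlinear isolator DOF:
#   m*u'' + c*u' + f_r(u, u') = -m*a_g(t)
# Conditionally stable: dt must stay below the critical time step, which is
# the concern raised above for stiff friction pendulum bearings.
def central_difference(m, c, restoring, a_g, dt, u0=0.0, v0=0.0):
    n = len(a_g)
    u = np.zeros(n)
    u[0] = u0
    a0 = (-m * a_g[0] - c * v0 - restoring(u0, v0)) / m
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # standard start-up value
    for k in range(n - 1):
        v = (u[k] - u_prev) / dt               # backward-difference velocity
        p = -m * a_g[k] - restoring(u[k], v) - c * v
        u_next = 2.0 * u[k] - u_prev + dt**2 * p / m
        u_prev, u[k + 1] = u[k], u_next
    return u

# Hypothetical friction-pendulum-like restoring force: pendulum restoring
# term plus smoothed Coulomb friction (the source of the high initial stiffness).
W, R_eff, mu = 300e3, 2.0, 0.06                # assumed weight [N], radius [m], friction
fpb = lambda u, v: (W / R_eff) * u + mu * W * np.tanh(v / 1e-2)

dt = 0.001
a_g = 0.3 * 9.81 * np.sin(2 * np.pi * 1.0 * np.arange(0, 10, dt))  # toy excitation
u_base = central_difference(m=W / 9.81, c=1e3, restoring=fpb, a_g=a_g, dt=dt)
```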
Procedia PDF Downloads 279
192 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness (SA)) for multiple agents conducting a searching activity, using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithms. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective agent path planning strategy and (ii) knowledge understanding and prediction (SA). The path planning problem, inspired by the animal mode of foraging and using a Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduced a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving agent resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results demonstrate effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using learning error rate, logarithmic loss, and Brier score, and the results show that even a small amount of agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
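A minimal sketch of Lévy-flight waypoint generation of the kind described, with a k-previous-waypoints check that skips moves landing too close to recently visited points; the step-length exponent, search-area bounds, and separation threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_flight(n_steps, alpha=1.5, area=1000.0, k=5, min_sep=20.0):
    """Generate waypoints with heavy-tailed (Levy-like) step lengths and a
    k-previous-waypoints redundancy check (assumed parameters)."""
    pts = [np.array([area / 2, area / 2])]
    while len(pts) < n_steps:
        step = rng.pareto(alpha) + 1.0          # heavy-tailed step length
        theta = rng.uniform(0, 2 * np.pi)       # isotropic direction
        cand = pts[-1] + step * np.array([np.cos(theta), np.sin(theta)])
        cand = np.clip(cand, 0, area)           # stay inside the search area
        # Redundancy avoidance: reject points too near the last k waypoints.
        if all(np.linalg.norm(cand - p) >= min_sep for p in pts[-k:]):
            pts.append(cand)
    return np.array(pts)

waypoints = levy_flight(200)
```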
Procedia PDF Downloads 144
191 Management Tools for Assessment of Adverse Reactions Caused by Contrast Media at the Hospital
Authors: Pranee Suecharoen, Ratchadaporn Soontornpas, Jaturat Kanpittaya
Abstract:
Background: Contrast media play an important role in disease diagnosis through the detection of pathologies. Contrast media can, however, cause adverse reactions after administration of their agents. With the non-ionic contrast media that are commonly used, the incidence of adverse events is relatively low. The most common reactions found (10.5%) were mild and manageable and/or preventable. Pharmacists can play an important role in evaluating adverse reactions, including awareness of the specific preparation and the type of adverse reaction. As the most common types of adverse reactions are idiosyncratic or pseudo-allergic reactions, common standards need to be established to prevent and control adverse reactions promptly and effectively. Objective: To measure the effect of using tools for symptom evaluation in order to reduce the severity, or prevent the occurrence, of adverse reactions to contrast media. Methods: Retrospective, descriptive review of data collected on adverse reaction assessments and Naranjo's algorithm between June 2015 and May 2016. Results: 158 patients (10.53%) had adverse reactions. Of the 1,500 participants with an adverse event evaluation, 137 (9.13%) had a mild adverse reaction, including hives, nausea, vomiting, dizziness, and headache. These types of symptoms can be treated (e.g., with antihistamines or anti-emetics), and the patient recovers completely within one day. The group with moderate adverse reactions, numbering 18 cases (1.2%), had hypertension or hypotension and shortness of breath. Severe adverse reactions numbered 3 cases (0.2%) and included swelling of the larynx, cardiac arrest, and loss of consciousness, requiring immediate treatment. No other complications were recorded under close medical supervision (e.g., corticosteroid use, epinephrine, dopamine, atropine, or life-saving devices). Using the guideline, therapies are divided into general and specific and are performed according to the severity, risk factors, and ingestion of contrast media agents. Patients with high-risk factors were screened and treated (e.g., with prophylactic premedication) to prevent severe adverse reactions, especially those with renal failure. Thus, awareness of the need for prescreening of different risk factors is necessary for early recognition and prompt treatment. Conclusion: Studying adverse reactions can be used to develop a model for reducing the level of severity and setting a guideline for a standardized, multidisciplinary approach to adverse reactions.
Keywords: role of pharmacist, management of adverse reactions, guideline for contrast media, non-ionic contrast media
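For readers unfamiliar with it, Naranjo's algorithm scores ten yes/no/unknown questions and maps the total to a causality category. A minimal sketch of that scoring logic follows; the question wording is abbreviated, and the per-answer point values follow the published scale as best recalled here, so verify against the original instrument before clinical use.

```python
# Per-question points for (yes, no, unknown) answers, abbreviated wording.
NARANJO = [
    ("Previous conclusive reports on this reaction", (1, 0, 0)),
    ("Event appeared after the suspected drug was given", (2, -1, 0)),
    ("Improved when drug discontinued or antagonist given", (1, 0, 0)),
    ("Reappeared on re-administration", (2, -1, 0)),
    ("Alternative causes could explain the reaction", (-1, 2, 0)),
    ("Reaction reappeared with placebo", (-1, 1, 0)),
    ("Drug detected in blood at toxic concentration", (1, 0, 0)),
    ("More severe at higher dose or less severe at lower dose", (1, 0, 0)),
    ("Similar reaction to same/similar drugs in the past", (1, 0, 0)),
    ("Confirmed by objective evidence", (1, 0, 0)),
]

def naranjo_score(answers):
    """answers: list of 'yes'/'no'/'unknown', one per question."""
    idx = {"yes": 0, "no": 1, "unknown": 2}
    total = sum(pts[idx[a]] for (_, pts), a in zip(NARANJO, answers))
    if total >= 9:
        category = "definite"
    elif total >= 5:
        category = "probable"
    elif total >= 1:
        category = "possible"
    else:
        category = "doubtful"
    return total, category

score, cat = naranjo_score(["yes", "yes", "yes", "unknown", "no",
                            "unknown", "unknown", "unknown", "no", "yes"])
# -> (7, "probable")
```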
Procedia PDF Downloads 303
190 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the matching of the electrical and mechanical frequency behavior of all axes, because even advanced controllers (like H∞ controllers) can only control a small part of the mechanical modes - namely, only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a one-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between the machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components in motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controllers of the different axes; small values are a good measure for highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project "Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)" (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
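A minimal sketch of the kind of objective described: a discrete component choice is scored by the integral of the absolute position deviation after a reference step, with the controller gain backed off automatically until overshoot stays under 1%. The toy double-integrator axis model and all parameter values are illustrative assumptions, not the library's actual models.

```python
def step_response_cost(kp, kd, dt=1e-4, t_end=1.0):
    """Integral of |1 - x(t)| for a unit step with PD position control of a
    toy double-integrator axis: x'' = kp*(1 - x) - kd*x'."""
    x, v, cost, peak = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = kp * (1.0 - x) - kd * v
        v += a * dt
        x += v * dt
        cost += abs(1.0 - x) * dt
        peak = max(peak, x)
    return cost, max(0.0, peak - 1.0)      # (objective, overshoot above setpoint)

def evaluate_component_set(kd):
    """Mimic the inner loop: start with a fast gain, back off until overshoot < 1%."""
    kp = 400.0
    cost, overshoot = step_response_cost(kp, kd)
    while overshoot >= 0.01 and kp > 1.0:
        kp *= 0.8
        cost, overshoot = step_response_cost(kp, kd)
    return cost

# Toy catalogue: each motor/gearbox/ball-screw combination contributes an
# effective damping kd (illustrative values only).
catalogue = {"setA": 15.0, "setB": 25.0, "setC": 40.0}
best = min(catalogue, key=lambda k: evaluate_component_set(catalogue[k]))
```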
Procedia PDF Downloads 287
189 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to figure out reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channel reservoirs. One of the solutions is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for the reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to project the models onto a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We made 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the popular global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes too much time due to its use of many particles and iterations; in addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various realizations. The model selection scheme can be applied not only to production optimization but also to history matching or other ensemble-based methods for efficient simulations.
Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
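A minimal sketch of the selection pipeline described: pairwise Hausdorff distances between binary channel-facies maps, MDS projection of the distance matrix, K-means grouping, and one representative per cluster. The model size, cluster count, and the medoid-style representative choice are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
models = rng.random((100, 50, 50)) > 0.7        # stand-in binary facies maps

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the sand-cell coordinates."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hausdorff(models[i], models[j])

# Project the distance matrix to 2D and cluster the projected points.
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(xy)

# One representative per cluster: the member closest to the cluster centroid.
reps = [np.argmin(np.linalg.norm(xy - xy[labels == c].mean(axis=0), axis=1)
                  + np.where(labels == c, 0.0, np.inf)) for c in range(8)]
```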
Procedia PDF Downloads 144
188 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264
Authors: V. Ziegler, F. Schneider, M. Pesch
Abstract:
With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection
Procedia PDF Downloads 151
187 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates high precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software particularly enhances the precision of vowel formant estimation. This augmentation yields high accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the distinctive feature-based speech recognition domain. On the deep learning side, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This allows the proposed model to excel at capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding signifies a significant advancement in Italian speech vowel landmark detection and positions the proposed model as a leader in the field. The model offers distinct advantages, including high accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
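A minimal sketch of a combined CNN+RNN landmark detector of the general kind described, framing detection as per-frame classification over spectrogram frames; the layer sizes, the GRU choice, and the input features are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    """Conv layers extract local spectral features; a bidirectional GRU
    models temporal context; a linear head scores each frame as
    landmark / non-landmark (assumed binary formulation)."""
    def __init__(self, n_mels=80, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # per-frame logits

    def forward(self, spec):                   # spec: (batch, n_mels, frames)
        x = self.cnn(spec).transpose(1, 2)     # -> (batch, frames, 128)
        x, _ = self.rnn(x)
        return self.head(x)                    # (batch, frames, 2)

model = CnnRnnLandmarkDetector()
logits = model(torch.randn(4, 80, 300))        # toy batch of spectrograms
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2),
                             torch.randint(0, 2, (4 * 300,)))
```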
Procedia PDF Downloads 63
186 Aquaporin-1 as a Differential Marker in Toxicant-Induced Lung Injury
Authors: Ekta Yadav, Sukanta Bhattacharya, Brijesh Yadav, Ariel Hus, Jagjit Yadav
Abstract:
Background and Significance: Respiratory exposure to toxicants (chemicals or particulates) causes disruption of lung homeostasis, leading to lung toxicity/injury manifested as pulmonary inflammation, edema, and/or other effects depending on the type and extent of exposure. This emphasizes the need for investigating toxicant type-specific mechanisms to identify therapeutic targets. Aquaporins, also known as water channels, are known to play a role in lung homeostasis. In particular, the two major lung aquaporins, AQP5 and AQP1, expressed in the alveolar epithelium and vascular endothelium respectively, allow movement of fluid between the alveolar air space and the associated vasculature. In view of this, the current study is focused on understanding the regulation of lung aquaporins and other targets during inhalation exposure to toxic chemicals (cigarette smoke chemicals) versus toxic particles (carbon nanoparticles), or co-exposures, to understand their relevance as markers of injury and intervention. Methodologies: C57BL/6 mice (5-7 weeks old) were used in this study following a protocol approved by the University of Cincinnati Institutional Animal Care and Use Committee (IACUC). The mice were exposed via oropharyngeal aspiration to a multiwall carbon nanotube (MWCNT) particle suspension once (33 µg/mouse) followed by housing for four weeks, or to cigarette smoke extract (CSE) at a daily dose of 30 µl/mouse for four weeks, or to co-exposure using the combined regime. Control groups received vehicles on the same dosing schedule. Lung toxicity/injury was assessed in terms of homeostasis changes in the lung tissue and lumen. Exposed lungs were analyzed for transcriptional expression of specific targets (AQPs, surfactant protein A, Mucin 5b) in relation to tissue homeostasis. Total RNA from lungs, extracted using a TRIreagent kit, was analyzed by qRT-PCR using gene-specific primers. Total protein in bronchoalveolar lavage (BAL) fluid was determined with the DC protein estimation kit (BioRad). GraphPad Prism 5.0 (La Jolla, CA, USA) was used for all analyses. Major findings: CNT exposure, alone or as co-exposure with CSE, increased the total protein content in the BAL fluid (lung lumen rinse), implying compromised membrane integrity and cellular infiltration in the lung alveoli. In contrast, CSE showed no significant effect. AQP1, required for water transport across the membranes of endothelial cells in lungs, was significantly upregulated by CNT exposure but downregulated by CSE exposure, and showed an intermediate level of expression in the co-exposure group. Both CNT and CSE exposures had significant downregulating effects on Muc5b and SP-A expression, and the co-exposure showed either no significant effect (Muc5b) or a significant downregulating effect (SP-A), suggesting an increased propensity for infection in the exposed lungs. Conclusions: The current study, based on the lung toxicity mouse model, showed that both toxicant types, particles (CNT) versus chemicals (CSE), cause similar downregulation of lung innate defense targets (SP-A, Muc5b) and mostly a summative effect when presented as co-exposure. However, the two toxicant types show differential induction of aquaporin-1, coinciding with the corresponding differential damage to alveolar integrity (vascular permeability). Interestingly, this implies the potential of AQP1 as a differential marker of toxicant type-specific lung injury.
Keywords: aquaporin, gene expression, lung injury, toxicant exposure
Procedia PDF Downloads 184
185 Analysis of Ozone Episodes in the Forest and Vegetation Areas with Using HYSPLIT Model: A Case Study of the North-West Side of Biga Peninsula, Turkey
Authors: Deniz Sari, Selahattin İncecik, Nesimi Ozkurt
Abstract:
Surface ozone, regarded as one of the most critical pollutants of the 21st century, threatens human health, forests, and vegetation. In rural areas specifically, surface ozone causes significant damage to agricultural production and trees. In this study, in order to understand surface ozone levels in rural areas, we focus on the north-western side of the Biga Peninsula, which is covered by mountainous and forested terrain. Ozone concentrations were measured for the first time with passive sampling at 10 sites and two online monitoring stations in this rural area from 2013 to 2015. Using the daytime hourly O3 measurements during light hours (08:00-20:00) exceeding the threshold of 40 ppb, the AOT40 (Accumulated hourly O3 concentration Over a Threshold of 40 ppb) cumulative index was calculated over 3 months (May, June, and July) for agricultural crops and over six months (April to September) for forest trees. AOT40 is defined by EU Directive 2008/50/EC to evaluate whether ozone pollution is a risk for vegetation, and is calculated using hourly ozone concentrations from monitoring systems. In the present study, we performed trajectory analysis with the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to follow the long-range transport sources contributing to the high ozone levels in the region. The ozone episodes observed between 2013 and 2015 were analysed using the HYSPLIT model developed by the NOAA-ARL. In addition, cluster analysis was used to identify homogeneous groups of air mass transport patterns, which can be conducted through air trajectory clustering by grouping similar trajectories in terms of air mass movement. Backward trajectories produced for 3 years by the HYSPLIT model were assigned to different clusters according to their moving speed and direction using a k-means clustering algorithm. According to the cluster analysis results, northerly flows into the study area cause the high ozone levels in the region. The results show that the ozone values in the study area are above the critical levels for forest and vegetation set by EU Directive 2008/50/EC.
Keywords: AOT40, Biga Peninsula, HYSPLIT, surface ozone
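A minimal sketch of the AOT40 computation as defined above: the sum of hourly exceedances over 40 ppb during light hours across the relevant months. The daylight window and threshold follow the text, while the input format and the synthetic data are assumptions.

```python
import numpy as np

def aot40(hours, o3_ppb, months, light=(8, 20), threshold=40.0):
    """AOT40 in ppb*h: sum of (O3 - threshold) for daytime hours in the
    given months where O3 exceeds the threshold.
    hours: sequence of (month, hour_of_day) pairs; o3_ppb: hourly O3 values."""
    total = 0.0
    for (month, hod), c in zip(hours, o3_ppb):
        if month in months and light[0] <= hod < light[1] and c > threshold:
            total += c - threshold
    return total

# Toy year of hourly data: (month, hour) pairs plus synthetic O3 values.
rng = np.random.default_rng(0)
hours = [(m, h) for m in range(1, 13) for _ in range(30) for h in range(24)]
o3 = rng.normal(38, 12, len(hours)).clip(0)

aot40_crops = aot40(hours, o3, months={5, 6, 7})           # May-July
aot40_forest = aot40(hours, o3, months=set(range(4, 10)))  # April-September
```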
Procedia PDF Downloads 255
184 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems, and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with the tidally-derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model, based upon a bathymetric and flow study of the river, was used to simulate the contaminant concentrations over space within the river. The modeling results helped demonstrate that, because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
Procedia PDF Downloads 262
183 Investigation of Subsurface Structures within Bosso Local Government for Groundwater Exploration Using Magnetic and Resistivity Data
Authors: Adetona Abbassa, Aliyu Shakirat B.
Abstract:
The study area is part of Bosso Local Government, enclosed within longitude 6.25' to 6.31' and latitude 9.35' to 9.45', an area of 16 x 8 km² within the basement region of central Nigeria. The region hosts Nigerian Air Force base 12 (NAF 12, quick response) and its staff quarters, the headquarters of Bosso Local Government, two offices of the Independent National Electoral Commission, four government secondary schools, six primary schools, and Minna international airport. The area suffers an acute shortage of water from November, when the rains stop, to June, when the rains commence within North Central Nigeria. A way of addressing this problem is a reconnaissance method to delineate possible fractures and fault lines that exist within the region, by sampling the aeromagnetic data and using an appropriate analytical algorithm to delineate these fractures. This is followed by an appropriate ground-truthing method to confirm whether a fracture is connected to underground water movement. The first vertical derivative for structural analysis reveals a set of lineaments labeled AA', BB', CC', DD', EE' and FF', all trending in the northeast-southwest direction. AA' is just below latitude 9.45', above Maikunkele village, cutting off the upper part of the field; it runs through Kangwo, Nini, Lawo, and other communities. BB' is at latitude 9.43'; it truncates at about 2 km before Maikunkele and Kuyi. CC' is around latitude 9.40', sitting below Maikunkele and running down through Nanaum. DD' runs from latitude 9.38'; interestingly, no community lies within the region where this fault passes through. Results from the three sites where vertical electrical sounding was carried out reveal three layers comprising topsoil, an intermediate clay formation, and weathered/fractured or fresh basement. A depth-to-basement map was also produced; depths to basement from the ground surface at VES A₂, B₅, D₂ and E₁ are relatively deeper, with values ranging between 25 and 35 m, while the shallower regions of the area have depth values ranging between 10 and 20 m. Hence, VES A₂, A₅, B₄, B₅, C₂, C₄, D₄, D₅, E₁, E₃, and F₄ are high-conductivity zones that are prolific for groundwater potential. The depth range of the aquifer potential zones is between 22.7 m and 50.4 m. The result from site C is quite unique: though the 3 layers were detected in the majority of the VES points, the maximum depth to the basement in 90% of the VES points is below 8 m, and only three VES points show considerable viability, namely C₆, E₂ and F₂, with depths of 35.2 m and 38 m, respectively; however, lack of connectivity will be a major challenge for chargeability.
Keywords: lithology, aeromagnetic, aquifer, geoelectric, iso-resistivity, basement, vertical electrical sounding (VES)
Procedia PDF Downloads 139
182 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework
Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations, due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of a CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security, and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) is proposed, which enhances the resilience of a CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model is constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel paths for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study. Additionally, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles
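For readers unfamiliar with the NSGA-II machinery underlying R-INSGA, a minimal sketch of fast non-dominated sorting follows: the step that ranks candidate solutions (here, recovery sequences) into Pareto fronts. The two-objective minimization setting and the toy objective values are illustrative assumptions.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization of all objectives assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """Return lists of indices, one list per Pareto front (NSGA-II style)."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions each i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Toy objectives: (recovery time, residual performance loss) per sequence.
fronts = fast_non_dominated_sort([(3, 9), (5, 4), (4, 6), (6, 3), (7, 7)])
# -> [[0, 1, 2, 3], [4]]
```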
Procedia PDF Downloads 17
181 Simple Model of Social Innovation Based on Entrepreneurship Incidence in Mexico
Authors: Vicente Espinola, Luis Torres, Christhian Gonzalez
Abstract:
Entrepreneurship is a topic of current interest in Mexico and the world, and it has been fostered through public policies with great impact on its generation. The strategies used in Mexico have not been successful, being motivational strategies aimed at the masses in the hope that someone in the process generates a venture. The strategies used for its development have been "picking of winners," favoring those who have already overcome the initial stages of undertaking, without effective support. This situation reveals a disarticulation that appears even more strongly in social entrepreneurship; because of this, it is relevant to research those elements that could develop such ventures and thus integrate a model of entrepreneurship and social innovation for Mexico. Social entrepreneurship should generate social innovation, which is translated into business models so that the benefits reach the population. These models are proposed putting the social impact before the economic impact, without forgetting their sustainability in the medium and long term. In this work, we present a simple model of innovation and social entrepreneurship for Guanajuato, Mexico. This algorithm was based on how social innovation could be generated in a systemic way for Mexico through different institutions that promote innovation. In this case, the technological parks of the state of Guanajuato were studied, because they are considered one of the areas of Mexico whose main objective is technology transfer to companies, while overlooking the social sector and entrepreneurs. An experimental design with n = 60 was carried out with potential entrepreneurs to identify their perception of the social approach that ventures should have, the skills they consider necessary to create a venture, and their interest in generating ventures that solve social problems. This experiment had a 2^k factorial design with k = 3, and the computational simulation was performed in the R statistical language. A simple model of interconnected variables is proposed, which allows us to identify where it is necessary to increase efforts for the generation of social enterprises. 96.67% of potential entrepreneurs expressed interest in ventures that solve social problems. In the analysis of the variable interactions, it was identified that the isolated development of entrepreneurial skills would only replicate the generation of traditional ventures. The social approach variable presented positive interactions, which may influence the generation of social entrepreneurship if this variable is strengthened and permeates the processes of training and development of entrepreneurs. In the future, it will be necessary to analyze the institutional actors present in the social entrepreneurship ecosystem, in order to analyze the interaction necessary to strengthen the innovation and social entrepreneurship ecosystem.
Keywords: social innovation, model, entrepreneurship, technological parks
Procedia PDF Downloads 274
180 Influence of Thermal Annealing on Phase Composition and Structure of Quartz-Sericite Mineral
Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.
Abstract:
Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or decrease deformation of ceramic goods during the drying process and under thermal annealing. Because of its low melting temperature, such material is also used to decrease the temperature of thermal annealing during fabrication of ceramic goods [1,2]. So-called "porcelain or china stones" - quartz-sericite (muscovite) minerals (SiO2 + KAl2[AlSi3O10](OH)2) - can also be used to prevent deformation, as the content of potassium oxide in muscovite is rather high [3]. To assess the possibility of using this mineral for ceramic manufacture, the presented article investigates the influence of thermal processing on the phase and chemical content of this raw material. As with other ceramic raw materials (kaolin, white-burning clays), the basic requirements of the industry for the quality of a "porcelain stone" are the following: small particle size, relatively high uniformity of distribution of components and phases, white color after burning, and small content of colorant oxides or chromophores (Fe2O3, FeO, TiO2, etc.) [4,5]. In the presented work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for investigation by scanning electron microscope. Powder with a particle size up to 63 μm was used for X-ray diffractometry and chemical analysis. The annealing of samples was performed at 900, 1120, and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1; for comparison, the compositions of raw materials from Russia and the USA are also presented. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a JEOL JXA-8800R electron probe scanning electron microscope. In Figure 1, scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si, and K atoms in the sample are presented. As can be seen, the small-grained, white, and dense mineral includes quartz, sericite, and a small content of impurity minerals. The quartz crystals mostly have sizes from 80 up to 500 μm. Between the quartz crystals, sericite inclusions having a tablet form with radiant structure are located. The size of the sericite crystals is ~40-250 μm. Using data on interplanar distances [6,7] and ASTM Powder X-ray Diffraction Data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz SiO2, sericite (muscovite type) KAl2[AlSi3O10](OH)2, and kaolinite Al2O3·2SiO2·2H2O (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz SiO2 and muscovite KAl2[AlSi3O10](OH)2; the peaks related to kaolinite are absent. After annealing at 1120 °C, full disintegration of muscovite and formation of the mullite phase is observed (weak peaks of mullite appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain crystal phases of quartz and mullite (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the obtained experimental data on the formation of various phases during thermal annealing can be used for the development of fabrication technology for advanced materials.
Conclusion: The influence of thermal annealing in the interval 900-1350 °C on the phase composition and structure of the quartz-sericite mineral was investigated. It is shown that during annealing the phase content of the raw material changes. After annealing at 1350 °C, the samples contain crystal phases of quartz and mullite (which gives ceramics high density and abrasive and chemical stability).
Keywords: quartz-sericite, kaolinite, mullite, thermal processing
Procedia PDF Downloads 415
179 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System
Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng
Abstract:
Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It can provide more depth information than a traditional 2D X-ray single projection, and the radiation dose in tomosynthesis is less than in computed tomography (CT). Because of the limited angular range scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and evaluate this technique by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom was used to evaluate the image quality. It contained lesions of three different sizes (3 mm, 5 mm, and 8 mm diameter). The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), defined as the intensity ratio of the lesion to the background. We used PVRs to evaluate the contrast of lesions. The qualitative results showed that in the reconstructed image of the non-coplanar scanning, the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were discovered. In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results of the PVRs show that there were no significant differences between non-coplanar tomosynthesis and traditional tomosynthesis. The PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts could be reduced and the PVRs of lesions were not decreased. The reconstructed image was more isotropically uniform in non-coplanar tomosynthesis than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging
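A minimal sketch of the ML-EM update used for the reconstruction step described above, written for a generic system matrix; the tiny 2-pixel/3-measurement system is purely illustrative.

```python
import numpy as np

def ml_em(A, y, n_iters=50, eps=1e-12):
    """Maximum-likelihood expectation-maximization reconstruction:
    x <- x / (A^T 1) * A^T (y / (A x)), with x kept non-negative."""
    x = np.ones(A.shape[1])                 # flat non-negative start image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image (A^T 1)
    for _ in range(n_iters):
        proj = A @ x                        # forward-project current image
        ratio = y / np.maximum(proj, eps)   # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return x

# Toy system: 3 ray sums through a 2-pixel "image" with true values (4, 2).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = A @ np.array([4.0, 2.0])
x_hat = ml_em(A, y)                         # converges toward (4, 2)
```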
Procedia PDF Downloads 370
178 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine
Authors: G. Barański, P. Kacejko, M. Wendeker
Abstract:
The impact of the ignition advance control algorithms of the ASz-62IR-16X aircraft piston engine on the combustion process is presented in this paper. This aircraft engine is a nine-cylinder, 1000 hp engine with a special electronic ignition control system. The engine has two spark plugs per cylinder, with an ignition advance angle dependent on load and the rotational speed of the crankshaft. Accordingly, in most cases these angles are not optimal for the power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in the engine's electronic ignition control system. For this type of engine, i.e., a radial engine, the ignition advance angle should be controlled independently for each cylinder because of the design of such an engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, which means that the control signal (i.e., the ignition advance angle) is determined according to previously developed maps, i.e., recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to the limited memory of the controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The ignition advance angle finally applied combines the value calculated from the primary characteristics and several correction factors calculated from the corrective characteristics. Individual cylinder control can proceed according to certain indicators determined from the pressure registered in the combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, and indicated mean effective pressure; additionally, a knocking combustion indicator was defined. Individual control can be applied to a single set of spark plugs only, which results from two fundamental ideas behind the design of the control system: the two ignition control systems operate independently when both operate simultaneously, and it is assumed that the entire individual control should be performed for the front spark plug only, while the rear spark plug is controlled with a fixed (or specific) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: algorithm, combustion process, radial engine, spark plug
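A minimal sketch of the open-loop determination described: bilinear interpolation in a speed/load base map plus an additive one-dimensional corrective characteristic (here, cylinder head temperature) and a fixed rear-plug offset. All map values are illustrative assumptions, not engine calibration data.

```python
import numpy as np

# Base map: ignition advance [deg BTDC] vs crankshaft speed and load
# (illustrative values only).
speed_axis = np.array([1000, 1400, 1800, 2200])        # rpm
load_axis = np.array([40, 60, 80, 100])                # kPa manifold pressure
base_map = np.array([[28, 26, 24, 22],
                     [30, 28, 26, 24],
                     [32, 30, 28, 25],
                     [33, 31, 28, 26]], dtype=float)

# Corrective characteristic: additive correction vs head temperature.
cht_axis = np.array([100, 150, 200, 250])              # deg C
cht_corr = np.array([0.0, 0.0, -1.5, -3.0])            # retard when hot

def interp2(xa, ya, table, x, y):
    """Bilinear interpolation in a rectangular map (clamped at the edges)."""
    i = np.clip(np.searchsorted(xa, x) - 1, 0, len(xa) - 2)
    j = np.clip(np.searchsorted(ya, y) - 1, 0, len(ya) - 2)
    tx = (x - xa[i]) / (xa[i + 1] - xa[i])
    ty = (y - ya[j]) / (ya[j + 1] - ya[j])
    row0 = table[i, j] * (1 - ty) + table[i, j + 1] * ty
    row1 = table[i + 1, j] * (1 - ty) + table[i + 1, j + 1] * ty
    return row0 * (1 - tx) + row1 * tx

def ignition_advance(speed, load, cht, rear_offset=-2.0):
    base = interp2(speed_axis, load_axis, base_map, speed, load)
    corr = np.interp(cht, cht_axis, cht_corr)          # 1D corrective array
    front = base + corr
    return front, front + rear_offset                  # front / rear plugs

front, rear = ignition_advance(speed=1600, load=70, cht=180)
```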
Procedia PDF Downloads 293
177 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media - generally gases such as CO₂, CO, and H₂O - present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of some randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed with the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the environment of interest can be fully addressed by the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
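A minimal sketch contrasting plain pseudo-random Monte Carlo with a low-discrepancy (Halton/van der Corput) quasi-Monte Carlo estimator on a stand-in integrand: the directionally averaged transmissivity of an absorbing slab. The integrand, optical depth, and sample count are illustrative assumptions; the point is only that the same estimator accepts quasi-random points in place of pseudo-random ones.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence (the 1D building
    block of the Halton sequence) in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Stand-in integrand: transmissivity exp(-tau/mu) averaged over direction
# cosine mu in (0, 1] for a slab of optical depth tau (illustrative).
tau = 1.0
f = lambda mu: np.exp(-tau / np.maximum(mu, 1e-9))

n = 4096
mc_est = f(np.random.default_rng(0).random(n)).mean()   # pseudo-random MC
qmc_est = f(van_der_corput(n)).mean()                   # low-discrepancy QMC
```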
Procedia PDF Downloads 224
176 Liability of AI in Workplace: A Comparative Approach Between Shari'ah and Common Law
Authors: Barakat Adebisi Raji
Abstract:
In the workplace, artificial intelligence has in recent years emerged as a transformative technology that revolutionizes how organizations operate and perform tasks. It is a technology that has a significant impact on transportation, manufacturing, education, cyber security, robotics, agriculture, healthcare, and many other sectors. By harnessing AI technology, workplaces can enhance productivity, streamline processes, and make more informed decisions. Given the potential of AI to change the way we work and its impact on the labor market in years to come, employers understand that it entails legal challenges and risks despite its inherent advantages. Therefore, as AI continues to integrate into various aspects of the workplace, understanding the legal and ethical implications becomes paramount. Also central to this study is the question of who is held liable when AI commits any fault: the person (company) who created the AI, the person who programmed the AI algorithm, or the person who uses the AI? Thus, the aim of this paper is to provide a detailed overview of how AI-related liabilities are addressed under each legal tradition and to shed light on potential areas of accord and divergence between the two legal cultures. The objectives of this paper are to (i) examine the ability of common law and Islamic law to accommodate the issues and damage caused by AI in the workplace and the legality of compensation for such injury sustained; (ii) discuss the extent to which AI can be described as a legal personality to bear responsibility; and (iii) examine the similarities and disparities between common law and Islamic jurisprudence on the liability of AI in the workplace. The methodology adopted in this work was qualitative, and the method was a purely doctrinal research method in which information is gathered from primary and secondary sources of law, such as comprehensive materials found in journal articles, expert-authored books, and online news sources. A comparative legal method was also used to juxtapose the approaches of Islam and common law. The paper concludes that since AI, in its current legal state, is not recognized as a legal entity, operators or manufacturers of AI should be held liable for any damage that arises, and the determination of who bears responsibility should depend on the circumstances surrounding each scenario. The study recommends the granting of legal personality to AI systems, the establishment of legal rights and liabilities for AI, the establishment of a holistic Islamic virtue-based AI ethics framework, and the consideration of Islamic ethics.
Keywords: AI, health care, agriculture, cyber security, common law, Shari'ah
Procedia PDF Downloads 38
175 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations
Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer
Abstract:
In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of the oscillations can be improved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution for improving the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and the tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the use of the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term; the σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations over the entire operating range was investigated. The obtained results show improved damping over the entire operating area and an increase in power system stability. The results of the presented work will aid the development of the model reference power system stabilizer, which should be able to replace the conventional stabilizers in power systems.
Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control
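A minimal sketch of the proportional-plus-integral adaptive gain law with σ-modification described above, for a scalar MRAC loop; the first-order plant and reference model, the gain parameterization, and all constants are illustrative assumptions rather than the authors' stabilizer design.

```python
# Scalar MRAC with proportional + sigma-modified integral adaptive gains.
# Plant:      y' = -a*y + b*u         (a, b unknown to the controller)
# Reference:  ym' = -am*ym + bm*r     (desired closed-loop behavior)
# Control:    u = kr*r + ky*y, with each gain = proportional part + integral
# state; the integral state leaks with sigma to avoid gain divergence.
a, b = 1.0, 2.0
am, bm = 4.0, 4.0
g_i, g_p, sigma = 20.0, 2.0, 0.1        # adaptation gains, leakage (assumed)

dt, T = 1e-3, 10.0
y = ym = 0.0
kr_i = ky_i = 0.0                       # integral parts of the gains
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0         # square-wave reference
    e = y - ym                                   # tracking error
    kr = kr_i - g_p * e * r                      # proportional + integral
    ky = ky_i - g_p * e * y
    u = kr * r + ky * y
    # sigma-modified integral update: -gamma*e*phi - sigma*k
    kr_i += dt * (-g_i * e * r - sigma * kr_i)
    ky_i += dt * (-g_i * e * y - sigma * ky_i)
    y += dt * (-a * y + b * u)                   # plant step (Euler)
    ym += dt * (-am * ym + bm * r)               # reference model step
```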
Procedia PDF Downloads 138
174 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays
Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir
Abstract:
Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced performance of the MNs, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring throughout the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. Then, to remove any chip residues, the master molds are cleaned using ultrasonic cleaning. These fabricated master molds can be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo. To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis
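The following is a minimal sketch of the multi-objective setup described above, using the pymoo implementation of NSGA-II; the variable bounds and the closed-form proxies for insertion effort and maximum stress are assumptions standing in for the paper's finite element evaluations:

```python
# NSGA-II over microneedle geometry: width, tip apex angle, base fillet radius.
# The two objectives below are illustrative stand-ins for FEM evaluations.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class MNDesign(ElementwiseProblem):
    def __init__(self):
        # x = [width_um, apex_angle_deg, fillet_radius_um]  (assumed bounds)
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([100.0, 20.0, 5.0]),
                         xu=np.array([400.0, 60.0, 50.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        width, apex, fillet = x
        f_insertion = apex * width * 1e-3               # blunter/wider: harder to insert
        f_stress = 1.0 / (width * (1 + 0.02 * fillet))  # thinner/sharper: higher stress
        out["F"] = [f_insertion, f_stress]

res = minimize(MNDesign(), NSGA2(pop_size=40), ("n_gen", 60), seed=1, verbose=False)
print(res.F[:5])  # a slice of the Pareto front: insertion effort vs. stress proxy
```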
Procedia PDF Downloads 113
173 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods; some have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain, and from which they can get solutions comprehensible in the same terms. These proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems, but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to obtain the same results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and translate their mathematical model, originally posed for numerical optimization, into a proper fitness function. We analyzed the 100 well curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected into each well. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
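A toy sketch of a classical, simple genetic algorithm for gas allocation follows; the quadratic well-performance curves, the field gas limit, and all GA settings are invented for illustration and do not correspond to the reproduced study's well data:

```python
# Toy GA allocating a limited gas budget across wells to maximize total oil.
# Quadratic well curves and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_WELLS, GAS_MAX = 5, 10.0                      # assumed field constraint
a = rng.uniform(1.0, 3.0, N_WELLS)              # assumed per-well curve coefficients
b = rng.uniform(0.1, 0.3, N_WELLS)

def oil_rate(g):                                # diminishing returns per well
    return np.sum(a * g - b * g**2, axis=-1)

def fitness(pop):                               # penalize exceeding the gas budget
    excess = np.maximum(pop.sum(axis=1) - GAS_MAX, 0.0)
    return oil_rate(pop) - 100.0 * excess

pop = rng.uniform(0, GAS_MAX / N_WELLS, (60, N_WELLS))
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]          # truncation selection
    i, j = rng.integers(0, 30, (2, 60))
    alpha = rng.random((60, N_WELLS))
    pop = alpha * parents[i] + (1 - alpha) * parents[j]   # blend crossover
    pop += rng.normal(0, 0.05, pop.shape)       # Gaussian mutation
    pop = np.clip(pop, 0, GAS_MAX)

best = pop[np.argmax(fitness(pop))]
print("allocation:", best.round(2), "total gas:", best.sum().round(2))
```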
Procedia PDF Downloads 162
172 Isotope Effects on Inhibitors Binding to HIV Reverse Transcriptase
Authors: Agnieszka Krzemińska, Katarzyna Świderek, Vicente Molinier, Piotr Paneth
Abstract:
In order to understand in detail the interactions between ligands and the enzyme, isotope effects were studied for clinically used drugs that bind in the active site of Human Immunodeficiency Virus Reverse Transcriptase, HIV-1 RT, as well as for a triazole-based inhibitor that binds in the allosteric pocket of this enzyme. The magnitudes and origins of the resulting binding isotope effects were analyzed. Subsequently, the binding isotope effects of the same triazole-based inhibitor bound in the active site were analyzed and compared. Together, these results show the differences in binding origins between the two sites of the enzyme and make it possible to analyze the binding mode and binding site of newly synthesized inhibitors. A typical protocol is described below, using the triazole ligand in the allosteric pocket as an example. The triazole was docked into the allosteric cavity of HIV-1 RT with Glide, using the extra-precision mode as implemented in the Schroedinger software. The structure of HIV-1 RT was obtained from the Protein Data Bank (PDB ID 2RKI). The pKa values of the titratable amino acids were calculated using the PROPKA software, and in order to neutralize the system, 15 Cl- ions were added using the tLEaP package implemented in AMBERTools ver. 1.5. The N-terminals and C-terminals were also built using tLEaP. The system was placed in a 144x160x144 Å3 orthorhombic box of water molecules using the NAMD program. Missing parameters for the triazole were obtained at the AM1 level using the Antechamber software implemented in AMBERTools. The energy minimizations were carried out by means of a conjugate gradient algorithm using NAMD. The system was then heated from 0 to 300 K with a temperature increment of 0.001 K. Subsequently, a 2 ns Langevin-Verlet (NVT) MM MD simulation with the AMBER force field implemented in NAMD was carried out. Periodic boundary conditions and cut-offs for the nonbonding interactions, with a switching range from 14.5 to 16 Å, were used. After 2 ns of relaxation, 200 ps of QM/MM MD at 300 K were simulated. The triazole was treated quantum mechanically at the AM1 level, the protein was described using AMBER, and the water molecules were described using TIP3P, as implemented in the fDynamo library. Molecules more than 20 Å from the triazole were kept frozen, with the same cut-off range from 14.5 to 16 Å. In order to describe the interactions between the triazole and RT, the free energy of binding was computed using the Free Energy Perturbation method. The change in vibrational frequencies from the ligand in solution to the ligand bound in the enzyme was used to calculate the binding isotope effects.
Keywords: binding isotope effects, molecular dynamics, HIV, reverse transcriptase
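As a hedged sketch of the final step, the snippet below computes a binding isotope effect from the zero-point-energy shift between free and bound vibrational frequencies under the harmonic approximation; the four frequency lists are invented placeholders, not values computed in this study:

```python
# Zero-point-energy approximation to a binding isotope effect (BIE):
# BIE = K_light / K_heavy ~ exp(-[dZPE_bound - dZPE_free] / kT),
# where dZPE = ZPE(light) - ZPE(heavy). Frequencies below are invented
# placeholders in cm^-1, not results from the HIV-1 RT study.
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s
KB = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0               # K, matching the simulation temperature

def zpe(freqs_cm):      # harmonic zero-point energy, J
    return 0.5 * H * C * np.sum(freqs_cm)

free_light  = np.array([3010.0, 1450.0, 1100.0])   # placeholder modes
free_heavy  = np.array([2230.0, 1310.0, 1050.0])
bound_light = np.array([2990.0, 1462.0, 1115.0])
bound_heavy = np.array([2215.0, 1322.0, 1063.0])

d_zpe_free  = zpe(free_light)  - zpe(free_heavy)
d_zpe_bound = zpe(bound_light) - zpe(bound_heavy)
bie = np.exp(-(d_zpe_bound - d_zpe_free) / (KB * T))
print(f"BIE (ZPE approximation): {bie:.4f}")  # >1: lighter isotope binds tighter
```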
Procedia PDF Downloads 432
171 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. It has been observed that AI-based algorithms are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used such data as evidence in court to convict suspects. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework, with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets, with experiments conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost 5 percent of the time; the File Path Agent prescribed deleting 1,510 items, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
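As a minimal sketch of what one such specialist agent does at its core, the snippet below implements the hash-set check: hashing evidence files and matching them against a known-bad set (the hash list and directory path are placeholders; the real framework runs such logic inside JADE agents with case-based reasoning, which is not shown):

```python
# Core of a "Hash Set Agent": hash evidence files and match them against a
# known-bad set. Paths and hashes are placeholders, not MADIK internals.
import hashlib
from pathlib import Path

KNOWN_BAD = {  # placeholder SHA-256 digests of known contraband/malware
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):   # stream in 1 MiB chunks
            h.update(block)
    return h.hexdigest()

def scan(evidence_dir: str) -> list[Path]:
    root = Path(evidence_dir)
    hits = []
    if not root.is_dir():
        return hits
    for p in root.rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD:
            hits.append(p)              # would be reported to a coordinator agent
    return hits

print(scan("./evidence"))               # placeholder mount point of the image
```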
Procedia PDF Downloads 212
170 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, hence imprecise, and moreover too slow to compute efficiently. Such models may therefore not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the model must be adapted manually. This paper therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws, but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a broader sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or environmental perturbations can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g., time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations discovered previously unknown relationships. In summary, the proposed approach is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods are illustrated with different examples.
Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
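A minimal sketch of this kind of regression, multivariate and including product (interaction) terms, can be written with scikit-learn; the synthetic data and the degree-2 expansion are assumptions, and the paper's real-time adaptation is not shown:

```python
# Multivariate regression with product terms: y depends on x1*x2, which a
# single-variable linear fit cannot capture. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 3))                  # three measured quantities
y = 2.0 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.01, 500)

# Degree-2 expansion generates x1, x2, x3, x1^2, x1*x2, ... like a truncated
# series expansion of the unknown law.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)
names = model[0].get_feature_names_out(["x1", "x2", "x3"])
for n, c in zip(names, model[1].coef_):
    if abs(c) > 0.05:                             # keep only significant terms
        print(f"{n}: {c:+.3f}")                   # recovers x1*x2 and x3
```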
Procedia PDF Downloads 409
169 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer loss of property and life from flooding. There are three main causes of inland submergence. First, when high-intensity heavy rain occurs, inland water cannot be drained into rivers, owing to the increase in impervious surfaces from land development and to defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to incorporate the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the Coastal Inundation – Inland & River Flow Interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by the grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.
Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
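As a toy illustration of adding a source term to the shallow water equations, the snippet below advances a 1D Lax-Friedrichs scheme with a uniform rainfall source on a flat, frictionless channel; it does not reproduce the paper's 2D Petrov-Galerkin/flux-blocking discretization:

```python
# 1D shallow water step with a rainfall source term added to the mass
# equation: dh/dt + d(hu)/dx = R. Toy Lax-Friedrichs scheme, flat bed,
# no friction, illustrative only.
import numpy as np

G, NX, DX, DT = 9.81, 200, 1.0, 0.01
RAIN = 1e-3                         # assumed uniform rainfall source, m/s

h = np.full(NX, 0.5)                # water depth, m
hu = np.zeros(NX)                   # discharge per unit width, m^2/s

def flux(h, hu):
    u = hu / np.maximum(h, 1e-8)
    return np.array([hu, hu * u + 0.5 * G * h**2])

for _ in range(2000):
    U = np.array([h, hu])
    F = flux(h, hu)
    U_new = U.copy()                # Lax-Friedrichs update on interior cells
    U_new[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
                     - DT / (2 * DX) * (F[:, 2:] - F[:, :-2])
    U_new[0] += DT * RAIN           # source term: rain raises depth everywhere
    U_new[:, 0], U_new[:, -1] = U_new[:, 1], U_new[:, -2]  # open boundaries
    h, hu = U_new
print(f"mean depth after 20 s of rain: {h.mean():.3f} m")  # ~0.5 + 20*1e-3
```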
Procedia PDF Downloads 362
168 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention
Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang
Abstract:
Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms to robotics and real-world settings, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The collected trajectories are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested by the behavior cloning network and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still safely shaping the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
Keywords: human-robot interaction, intelligent robots, robot learning, semi-supervised learning, unmanned aerial vehicles
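A minimal sketch of the intrinsic-reward idea follows, scoring the reinforcement learning agent's action by its similarity to the action a behavior-cloned network would suggest; the network stand-in and the Gaussian similarity kernel are illustrative assumptions:

```python
# Intrinsic reward from agreement with a behavior-cloning (BC) policy:
# r_int = exp(-||a_rl - a_bc||^2 / sigma^2). Shapes and kernel are assumed.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (3, 8))          # stand-in for a trained BC network

def bc_policy(obs):                     # suggests roll/pitch/yaw references
    return np.tanh(W @ obs)

def intrinsic_reward(obs, a_rl, sigma=0.5):
    a_bc = bc_policy(obs)
    return float(np.exp(-np.sum((a_rl - a_bc) ** 2) / sigma**2))

obs = rng.normal(size=8)                # e.g., depth-sensing features
a_agree = bc_policy(obs)                # action identical to the BC suggestion
a_far = a_agree + 1.0                   # action far from the demonstration
print(intrinsic_reward(obs, a_agree))   # -> 1.0 (full agreement)
print(intrinsic_reward(obs, a_far))     # -> near 0 (penalized)
```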
Procedia PDF Downloads 259
167 Determination of Aquifer Geometry Using Geophysical Methods: A Case Study from Sidi Bouzid Basin, Central Tunisia
Authors: Dhekra Khazri, Hakim Gabtni
Abstract:
Because of the overexploitation of the Sidi Bouzid water table, this study aims at integrating geophysical methods to determine aquifer geometry, assessing the aquifers' geological situation and geophysical characteristics. However, in highly tectonized zones controlled by Atlassic structural features with major NE-SW directions (central Tunisia), the Bouguer gravimetric responses of some areas can be so dominated by the regional structural trend that they remain unidentified or are defectively interpreted, as in the case of the Sidi Bouzid basin. This issue required the elaboration of a residual gravity anomaly isolating the Sidi Bouzid basin gravity response, ranging between -8 and -14 mGal, which is crucial for characterizing its aquifer geometry. Several gravity techniques helped construct the Sidi Bouzid basin residual gravity anomaly, such as upward continuation, compared with polynomial regression trends, and power spectrum analysis detecting deep basement sources at 3 km, intermediate sources at 2 km, and shallow sources at 1 km. A 3D Euler deconvolution was also performed, detecting the deepest accidents, trending NE-SW, N-S, and E-W, with depths reaching 5,500 m, and delineating the main outcropping structures of the study area. Further gravity treatments highlighted the subsurface geometry and structural features of the Sidi Bouzid basin through horizontal and vertical gradients and filters based on them, such as the tilt angle and the source edge detector, which locate rooted edges or peaks in potential field data; these detected a new E-W lineament compartmentalizing the Sidi Bouzid gutter into two unequally subsiding residual anomaly domains. This subsurface morphology is also detected by the available 2D seismic reflection sections, which define the Sidi Bouzid basin as a deep gutter within a tectonic set of negative flower structures and collapsed, tilted blocks. Furthermore, these structural features were confirmed by a forward gravity modeling process applied to several modeled residual gravity profiles crossing the main area. The Sidi Bouzid basin (central Tunisia) is also of great interest because of the unknown total thickness and undefined substratum of its siliciclastic Tertiary package, and the unbounded subsurface structural features and deep accidents of its aquifers. A combination of geological, hydrogeological, and geophysical methods is therefore essential. The integration of geophysical methods, based on a gravity survey supporting the available seismic data through forward gravity modeling, enhanced the definition of the lateral and vertical extent of the basin's complex sedimentary fill via 3D gravity models, improved depth estimation through a depth-to-basement modeling approach, and provided a 3D isochron seismic mapping visualization of the basin's Tertiary complex, refining its geostructural schema. Subsurface basin geomorphology mapping, based on matching the basin's residual gravity map with the calculated theoretical signature map, was also displayed over the modeled residual gravity profiles. A complete multidisciplinary geophysical study of the Sidi Bouzid basin aquifers could be achieved with an aeromagnetic survey and 4D microgravity reservoir monitoring, offering temporal tracking of the target aquifer's subsurface fluid dynamics and rationalizing future groundwater exploitation in this arid area of central Tunisia.
Keywords: aquifer geometry, geophysics, 3D gravity modeling, improved depths, source edge detector
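As a minimal sketch of the tilt-angle filter mentioned above, the snippet below computes it on a synthetic gridded anomaly as the arctangent of the vertical gradient over the total horizontal gradient; the anomaly grid is invented, and the vertical derivative is obtained with a simple wavenumber-domain filter:

```python
# Tilt angle of a gridded anomaly: arctan(dF/dz / sqrt(Fx^2 + Fy^2)).
# Synthetic anomaly; vertical derivative via the |k| wavenumber filter.
import numpy as np

N, DXY = 128, 100.0                     # grid size and spacing (m), assumed
x = y = (np.arange(N) - N / 2) * DXY
X, Y = np.meshgrid(x, y)
F = (np.exp(-((X - 1e3)**2 + Y**2) / 2e6)
     - 0.5 * np.exp(-((X + 2e3)**2 + (Y - 1e3)**2) / 8e6))

Fy, Fx = np.gradient(F, DXY)            # horizontal gradients, finite differences

kx = np.fft.fftfreq(N, DXY) * 2 * np.pi # vertical gradient: dF/dz <-> |k|*F(kx,ky)
KX, KY = np.meshgrid(kx, kx)
Fz = np.real(np.fft.ifft2(np.fft.fft2(F) * np.sqrt(KX**2 + KY**2)))

tilt = np.arctan2(Fz, np.hypot(Fx, Fy)) # radians, bounded by +-pi/2
print("tilt range (deg):", np.degrees(tilt).min().round(1),
      np.degrees(tilt).max().round(1))  # zero crossings track source edges
```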
Procedia PDF Downloads 284
166 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood. Odor testing has been mysterious, and odor data fabled to most practitioners. The problem of odor recognition and classification is important to solve: the ability to smell and predict whether an artifact is still fit for consumption or has become undesirable is worth casting into a model. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color, and incorporating it into a machine would be highly constructive. For cataloging the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as set-ups where the variability in the range of possible input vectors is enormous. Generative models are integrated into machine learning either to model data directly or as an intermediate step to form an undetermined probability density function. The algorithms Linear Discriminant Analysis and the Naive Bayes classifier have been used for classifying the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in the form of the performance measures accuracy, precision, and recall. The experimental results proved that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
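A minimal sketch comparing the two classifiers on synthetic sensor vectors follows; the randomly generated blobs stand in for real electronic-nose measurements:

```python
# LDA vs. Gaussian Naive Bayes on synthetic "e-nose" sensor vectors.
# Random blobs stand in for real electronic-nose readings of cashew odors.
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_blobs(n_samples=600, centers=2, n_features=8,
                  cluster_std=3.0, random_state=0)  # 8 sensor channels, 2 classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, y_hat):.3f}",
          f"prec={precision_score(y_te, y_hat):.3f}",
          f"rec={recall_score(y_te, y_hat):.3f}")
```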
Procedia PDF Downloads 388
165 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game is based on thorough and deep analysis of the ongoing match; on the other hand, giant gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of soccer matches around the world, distinguishing our work from others by focusing on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of a soccer match and its odds, pre-match and in-play, are encoded as an image over time, including the halftime. Local Binary Patterns (LBP) is then employed to extract features from the image. Our analyses reveal strikingly interesting features and rules once a soccer match has reached sufficient stability. For example, our "8-minute rule" states that if 'Team A' scores a goal and maintains the result for at least 8 minutes, a stable match will end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We benefit from Gradient Boosting Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as betters' and punters' behavior and its statistical data, before issuing the prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can make over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market: top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
Keywords: soccer, analytics, machine learning, database
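A minimal sketch of the feature pipeline described above follows, extracting a uniform-LBP histogram from an image-encoded match timeline and feeding it to gradient boosting; the random images stand in for the real encoded odds and statistics:

```python
# LBP histogram features from image-encoded match timelines, classified with
# gradient boosting. Random images stand in for real odds/statistics encodings.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
P, R = 8, 1                                   # LBP neighbors and radius

def lbp_histogram(img):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# 200 fake "match images" (time x statistic grids), labeled stable/unstable
imgs = rng.random((200, 32, 32))
y = rng.integers(0, 2, 200)
X = np.array([lbp_histogram(im) for im in imgs])

clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random labels, as expected
```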
Procedia PDF Downloads 238
164 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach
Authors: Alvaro Figueira, Bruno Cabral
Abstract:
Nowadays, students are accustomed to being assessed through an online platform. Educators have moved on from a period in which they endured the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters the students' online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted, using an eLearning system for that purpose. However, although these activities can provide a lot of feedback to students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the fact that obtaining this information earlier in the semester may give students and educators an opportunity to resolve an eventual problem with the student's current online actions in the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files in the form of records that mark the beginning of each action performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student, and, finally, the online resource usage pattern. Then, using the grades assigned to students in previous years, we built a learning dataset that is used to feed a machine learning meta-classifier. The produced classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on her current situation with past years as a perspective. Our system can be applied to online courses that use an online platform storing user actions in a log file and that have access to other students' evaluations. The system is based on a data mining process over the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
Keywords: data mining, e-learning, grade prediction, machine learning, student learning path
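As a minimal sketch of the log-mining step, the snippet below derives time-on-activity features from Moodle-style event records and fits a classifier on past grades; the log rows, column names, and model choice are assumptions, not the authors' pipeline:

```python
# Derive time-spent features from Moodle-style logs (one timestamped event
# per row) and fit a grade classifier. Log rows and columns are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

log = pd.DataFrame({  # placeholder event log: start-of-action records
    "student": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "resource": ["quiz", "forum", "quiz", "forum", "quiz", "forum"],
    "ts": pd.to_datetime(["2023-03-01 10:00", "2023-03-01 10:20",
                          "2023-03-01 10:25", "2023-03-01 11:00",
                          "2023-03-01 11:05", "2023-03-01 11:50"]),
})

# time spent on an action = gap until the student's next logged action
log = log.sort_values(["student", "ts"])
log["spent_min"] = (log.groupby("student")["ts"].shift(-1) - log["ts"]) \
                       .dt.total_seconds().div(60).fillna(5.0)  # assumed last-action cap

# one feature per resource type: total minutes spent
X = log.pivot_table(index="student", columns="resource",
                    values="spent_min", aggfunc="sum", fill_value=0.0)
y = pd.Series({"s1": "pass", "s2": "fail"})     # placeholder past grades

clf = RandomForestClassifier(random_state=0).fit(X, y.loc[X.index])
print(clf.predict(X))                            # would score current-year paths
```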
Procedia PDF Downloads 122