Search results for: movement modeling
1204 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition
Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan
Abstract:
One of the biggest problems Qatar faces in processing its main natural resource, natural gas, is pipeline blockage caused by uncontrolled gas hydrate formation. Several million dollars are spent at process sites to safely clear these blockages using chemical inhibitors. We aim to establish a national database that addresses the physical conditions under which Qatari natural gas forms hydrates in pipelines. Moreover, we aim to design and test novel hydrate inhibitors suited to Qatari natural gas and its processing facilities. From these perspectives, we aim to enable more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project on the hydrate formation characteristics of Qatari-type gas, studied with both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatus: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions, were measured for multi-component systems representing Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and tested for their inhibition performance, and their DFT and molecular modeling simulation results were compared with the experimental results. Results showed significant performance of ionic liquids at concentrations up to 0.5 vol%, with up to 2 to 4 °C of inhibition at high pressures.
Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors
Procedia PDF Downloads 1323
1203 Third Party Logistics (3PL) Selection Criteria for an Indian Heavy Industry Using SEM
Authors: Nadama Kumar, P. Parthiban, T. Niranjan
Abstract:
In the present paper, we propose an integrated approach for 3PL supplier selection that suits the distinctive strategic needs of an outsourcing organization in the southern part of India. Four fundamental criteria are used, namely Performance, IT, Service, and Intangible; these are further subdivided into fifteen sub-criteria. The proposed method combines Structural Equation Modeling (SEM) with a non-additive fuzzy integral. The introduction of fuzziness handles the vagueness of human judgments. The SEM approach is used to validate the selection criteria for the proposed model, while the non-additive fuzzy integral uses the SEM output to compute a supplier selection score. The case organization has an exclusive vertically integrated assembly comprising several companies, each focusing on a narrow segment of the value chain. To ensure manufacturing and logistics proficiency, it relies heavily on 3PL suppliers to attain supply chain excellence. However, 3PL supplier selection is an intricate decision-making procedure involving multiple selection criteria. The goal of this work is to identify the crucial 3PL selection criteria using the non-additive fuzzy integral approach. Unlike traditional multi-criteria decision-making (MCDM) methods, which frequently assume independence among criteria and additive importance weights, the non-additive fuzzy integral is an effective way to handle dependency among criteria, vague information, and the inherent fuzziness of human judgment. In this work, we validate an empirical case that employs the non-additive fuzzy integral to assess the importance weights of the selection criteria and to indicate the most suitable 3PL supplier.
Keywords: 3PL, non-additive fuzzy integral approach, SEM, fuzzy
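To make the aggregation step concrete, the sketch below implements a discrete Choquet integral, the standard form of the non-additive fuzzy integral; the criterion ratings and fuzzy-measure values are hypothetical placeholders, not the paper's calibrated SEM outputs.

```python
# Minimal sketch of a discrete Choquet (non-additive fuzzy) integral.
def choquet(scores, mu):
    """scores: criterion -> rating in [0, 1]; mu: frozenset -> fuzzy measure."""
    items = sorted(scores, key=scores.get)       # ascending by score
    total, prev = 0.0, 0.0
    for k, c in enumerate(items):
        coalition = frozenset(items[k:])         # criteria rated >= scores[c]
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

# Hypothetical 3PL ratings and a non-additive measure expressing that
# Performance and Service interact (their joint weight < sum of singletons).
scores = {"Performance": 0.8, "IT": 0.6, "Service": 0.9, "Intangible": 0.5}
mu = {
    frozenset(scores): 1.0,
    frozenset({"IT", "Performance", "Service"}): 0.85,
    frozenset({"Performance", "Service"}): 0.65,
    frozenset({"Service"}): 0.35,
}
print(f"Choquet supplier score: {choquet(scores, mu):.3f}")
```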
Procedia PDF Downloads 282
1202 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN consists of a set of sensors deployed randomly underwater that communicate with each other over acoustic links, since RF communication does not work underwater and GPS is unavailable there. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward it to the AUVs over optical links when an AUV is in range; this reduces the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater; each attaches itself to the surface by a rod and can move only upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node determines its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes closest to it in terms of these coordinates. In the absence of GPS, multiple approaches have been proposed, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer vision-based navigation, but these systems have their own drawbacks: INS accumulates error over time, and vision techniques require prior information about the environment. We propose a method that uses the earth's magnetic field values for navigation and combines it with methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field that provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model in our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model relative to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
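As a rough illustration of the hop-coordinate idea (not the authors' MATLAB code), the sketch below assigns each node a tuple of hop distances to the surface landmarks and routes greedily toward a target coordinate; the graph construction and node choices are arbitrary assumptions.

```python
# Hop-distance "coordinates" from surface landmarks and greedy hop-by-hop
# movement toward a target node; illustrative sketch only.
import networkx as nx

def hop_coordinates(G, landmarks):
    """Each node's coordinate = tuple of hop distances to the surface nodes."""
    dists = [nx.single_source_shortest_path_length(G, s) for s in landmarks]
    return {v: tuple(d[v] for d in dists) for v in G}

def greedy_route(G, coords, start, target_coord):
    """Move hop by hop to the neighbor whose coordinate is closest to target."""
    dist = lambda c: sum(abs(a - b) for a, b in zip(c, target_coord))
    path, cur = [start], start
    while dist(coords[cur]) > 0:
        nxt = min(G[cur], key=lambda v: dist(coords[v]))
        if dist(coords[nxt]) >= dist(coords[cur]):   # stuck in a local minimum
            break
        path.append(nxt)
        cur = nxt
    return path

G = nx.connected_watts_strogatz_graph(40, 4, 0.3, seed=1)  # stand-in topology
landmarks = [0, 1, 2]                                      # surface nodes
coords = hop_coordinates(G, landmarks)
print(greedy_route(G, coords, start=10, target_coord=coords[25]))
```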
Procedia PDF Downloads 99
1201 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness
Authors: Lian Yang
Abstract:
Object-oriented programming (OOP) is the dominant programming paradigm in today's software industry, and it has enabled average software developers to build millions of commercial-strength applications during the Internet revolution of the past three decades. On the other hand, the lack of a strict mathematical model and of domain-constraint features at the language level has long perplexed the computer science academia and the OOP engineering community, resulting in inconsistent system quality and hard-to-understand designs in some OOP projects. The difficulty of fixing this situation is also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list five major usages of a class and propose new language constructs to separate them. Drawing on the well-established theories of sets and finite state machines (FSMs), we propose applying simple, generic, and yet effective constraints at the OOP language level in an attempt to find a possible solution to the above-mentioned issues. The goal is to make OOP more theoretically sound, to help programmers uncover warning signs of irregularities and domain-specific issues early in the development stage, and to catch semantic mistakes at runtime, improving the correctness verifiability of software programs. The aim of this paper is, however, more practical than theoretical.
Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)
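Mainstream languages lack the proposed constructs, so the following Python sketch merely emulates what a check-constraint (CC) on a user-defined value type might feel like; the `Checked` descriptor and the `Account` example are hypothetical illustrations, not the paper's proposed syntax.

```python
# Emulating a runtime-enforced check-constraint with a descriptor (sketch).
class Checked:
    """Descriptor enforcing a domain constraint on every assignment."""
    def __init__(self, predicate, message):
        self.predicate, self.message = predicate, message
    def __set_name__(self, owner, name):
        self.name = "_" + name
    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name)
    def __set__(self, obj, value):
        if not self.predicate(value):
            raise ValueError(self.message)   # semantic mistake caught at runtime
        setattr(obj, self.name, value)

class Account:
    balance = Checked(lambda v: v >= 0, "balance must be non-negative")
    def __init__(self, balance):
        self.balance = balance

acct = Account(100)
acct.balance = 40        # fine
# acct.balance = -5      # would raise ValueError at runtime
```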
Procedia PDF Downloads 241
1200 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience
Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi
Abstract:
Understanding the complexity of the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and recognizing this can be crucial to identifying sweet spots in mature fields. This study evaluates selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and of gaining an improved understanding of the selected reservoirs to increase the company's reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used to recognize flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid the simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties involved. The total original oil in place for the four reservoirs studied is 157 MMstb. Cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave an undeveloped reserve of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and to an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results so far are confirmatory and satisfying.
Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit
Procedia PDF Downloads 131
1199 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on delineating Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme in which all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest
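A minimal sketch of the described setup, with synthetic data standing in for the ADNI features, might look like the following; XGBoost handles the NaN entries natively, which is the property the paper exploits.

```python
# Sketch: 4-class XGBoost with ~28% missing values and 10-fold CV.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1631, 20))               # placeholder multimodal features
X[rng.random(X.shape) < 0.28] = np.nan        # ~28% missing, as in the study
y = rng.integers(0, 4, size=1631)             # CN / EMCI / LMCI / AD labels

clf = XGBClassifier(objective="multi:softprob", n_estimators=200,
                    max_depth=4, learning_rate=0.1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {acc.mean():.3f}")  # the real data reached 80.52%
```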
Procedia PDF Downloads 189
1198 Effect of Surface Treatments on the Cohesive Response of Nylon 6/silica Interfaces
Authors: S. Arabnejad, D. W. C. Cheong, H. Chaobin, V. P. W. Shim
Abstract:
Debonding is one of the fundamental damage mechanisms in particle-filled composites. This phenomenon gains importance in nanocomposites because of the extensive interfacial region present in these materials. Understanding the debonding mechanism accurately can help in understanding and predicting the response of nanocomposites as the interface deteriorates. The small length scale of the phenomenon makes experimental characterization complicated and its results far from the real physical behavior. In this study, the damage process at the nylon-6/silica interface is examined through Molecular Dynamics (MD) modeling and simulation. The silica is modeled with three surface conditions: without any surface treatment, with a 3-aminopropyltriethoxysilane (APTES) treatment, and with a hexamethyldisilazane (HMDZ) treatment. The APTES modification creates functional groups on the silica surface that react and form covalent bonds with nylon-6 chains, while the HMDZ treatment interacts with both particle and polymer only through non-bonded interactions. The MD model uses the PCFF force field. The atomic model is generated in a periodic box with a layer of vacuum on top of the polymer layer; this vacuum layer is large enough to ensure there is no interaction between particle and substrate after debonding. Results show that each of the three models exhibits a different traction-separation behavior, although all of them are approximately bilinear. The study also reveals a strong correlation between the length of the APTES surface treatment and the cohesive strength of the interface.
Keywords: debonding, surface treatment, cohesive response, separation behaviour
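For readers unfamiliar with the fitted form, a minimal bilinear traction-separation law is sketched below; the peak traction, separations, and units are illustrative assumptions, not the fitted MD results.

```python
# Bilinear traction-separation law: linear rise to peak traction at delta_0,
# then linear decay to zero at the final separation delta_f.
import numpy as np

def bilinear_traction(delta, t_max, delta_0, delta_f):
    up = t_max * delta / delta_0
    down = t_max * (delta_f - delta) / (delta_f - delta_0)
    return np.where(delta <= delta_0, up, np.clip(down, 0.0, None))

delta = np.linspace(0.0, 1.2, 7)                      # separation, nm (toy)
print(bilinear_traction(delta, t_max=120.0, delta_0=0.2, delta_f=1.0))  # MPa
print("fracture energy:", 0.5 * 120.0 * 1.0, "MPa*nm")  # area under the law
```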
Procedia PDF Downloads 460
1197 Prospects for the Development of e-Commerce in Georgia
Authors: Nino Damenia
Abstract:
E-commerce opens new horizons for business development, which is why its presence is a necessary condition for the formation, growth, and development of a country's economy. Worldwide, e-commerce turnover grows at a high rate every year, as the electronic environment provides great opportunities for product promotion. E-commerce in Georgia is developing at a fast pace, but it is still a relatively young part of the country's economy. Movement restrictions and other public health measures caused by the COVID-19 pandemic reduced economic activity in most sectors and countries, significantly affecting production, distribution, and consumption, and the pandemic accelerated digital transformation. Digital solutions enable people and businesses to continue part of their economic and social activities remotely, which has also driven the growth of e-commerce. According to the National Statistics Service of Georgia, the share of online trade is higher in cities (27.4%) than in rural areas (9.1%). The COVID-19 pandemic forced local businesses to expand their digital offerings: the size of the local market increased 3.2 times in 2020, to 138 million GEL, and between 2018 and 2020 the share of local e-commerce increased from 11% to 23%. In Georgia, the state actively promotes activities based on information technologies; many measures have been taken to this end, but compared to other countries the process is slow. The purpose of the study is to determine development prospects for the economy of Georgia based on an analysis of electronic commerce. The research draws on articles and works by Georgian and foreign scientists, reports of international organizations, proceedings of scientific conferences, and scientific electronic databases. The empirical base of the research comprises the data and annual reports of the National Statistics Service of Georgia, internet resources with world statistical materials, and others. While working on the article, the authors developed a questionnaire, on the basis of which an electronic survey of selected respondents was conducted. The survey examined how intensively Georgian citizens use online shopping, which age categories use electronic commerce, for what purposes, and how satisfied they are. Various theoretical and methodological research tools, as well as analysis, synthesis, comparison, and other methods, are used to achieve the stated goal. The research results and recommendations will contribute to the development of e-commerce in Georgia and to economic growth based on it.
Keywords: e-commerce, information technology, pandemic, digital transformation
Procedia PDF Downloads 77
1196 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques
Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa
Abstract:
This study considers the selection of in silico molecular descriptors and models for describing newly synthesized steroid derivatives and for their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and identified the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. According to the variance inflation factor (VIF) values, the MLR models were free of multicollinearity among the selected molecular descriptors. The molecular descriptors used were ranked with the generalized pair correlation method (GPCM); in this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were retained. Models were ranked using the sum of ranking differences (SRD) method, by which the most consistent QSRR model can be found and the similarity or dissimilarity between models assessed. In this study, SRD was performed using the average values of the experimentally observed data as a golden standard. The chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation of their potential biological activity and further synthesis. This article is based upon work from COST Action CM1105, supported by COST (European Cooperation in Science and Technology).
Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences
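A compact sketch of the multicollinearity screen described (a VIF check before fitting the MLR model) is shown below, with random data standing in for the real descriptors and retention values.

```python
# VIF screening followed by an OLS fit of the QSRR-style MLR model (sketch).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))                 # 4 candidate in silico descriptors
y = X @ [1.5, -0.8, 0.3, 0.0] + rng.normal(scale=0.2, size=30)  # retention

Xc = sm.add_constant(X)
vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
if max(vifs) < 10:                           # a common VIF cutoff
    model = sm.OLS(y, Xc).fit()
    print(f"R^2 = {model.rsquared:.3f}, VIFs = {np.round(vifs, 2)}")
```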
Procedia PDF Downloads 348
1195 A Systematic Review of Sensory Processing Patterns of Children with Autism Spectrum Disorders
Authors: Ala’a F. Jaber, Bara’ah A. Bsharat, Noor T. Ismael
Abstract:
Background: Sensory processing is a fundamental skill needed for the successful performance of daily living activities, and it is impaired as part of the neurodevelopmental issues seen in children with autism spectrum disorder (ASD). This systematic review aimed to summarize the evidence on the differences in sensory processing and motor characteristics between children with ASD and children with typical development (TD). Method: The review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Search terms included sensory-, motor-, condition-, and child-related terms or phrases. The electronic search used Academic Search Ultimate, CINAHL Plus with Full Text, ERIC, MEDLINE, MEDLINE Complete, Psychology and Behavioral Sciences Collection, and SocINDEX with Full Text. A hand search looked for potential studies in the references of related studies. Inclusion criteria were: studies published in English between 2009 and 2020; children aged 3-18 years with an ASD diagnosis confirmed according to DSM-5 criteria; a control group of typically developing children; outcome measures related to sensory processing and/or motor function; and availability in full text. The review of included studies followed the Oxford Centre for Evidence-Based Medicine guidelines, the Guidelines for Critical Review Form of Quantitative Studies, and the American Occupational Therapy Association's guidelines for conducting systematic reviews. Results: Eighty-eight full-text studies on the differences between children with ASD and children with TD in sensory processing and motor characteristics were reviewed, of which eighteen were included in the quantitative synthesis. The results reveal that children with ASD had more extreme sensory processing patterns than children with TD, such as hyper-responsiveness and hypo-responsiveness to sensory stimuli. Children with ASD also had limited gross and fine motor abilities and lower strength, endurance, balance, eye-hand coordination, movement velocity, cadence, and dexterity, with a higher rate of gait abnormalities, than children with TD. Conclusion: This systematic review provides preliminary evidence suggesting that motor functioning should be addressed in the evaluation of, and intervention for, children with ASD, and that sensory processing should be supported among children with TD. Future research should investigate how performance and engagement in daily life activities are affected by sensory processing and motor skills.
Keywords: sensory processing, occupational therapy, children, motor skills
Procedia PDF Downloads 129
1194 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice
Authors: Tarek Abdoun, Ricardo Dobry
Abstract:
This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand, while the SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows further progress in both. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing the threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts that separate liquefaction from no liquefaction, aided by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for an earthquake magnitude Mw ≈ 7. It is also concluded, more speculatively, that the curves at the upper end correspond to a variable, increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.
Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts
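For context, the chart ordinate behind the Simplified Procedure is the standard Seed-Idriss cyclic stress ratio; a one-function sketch with illustrative input values follows.

```python
# Seed-Idriss cyclic stress ratio, the demand side of the liquefaction charts.
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d):
    """CSR = 0.65 (a_max/g) (sigma_v / sigma_v') r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# Illustrative site: a_max = 0.25 g, total/effective stresses in kPa, shallow r_d.
csr = cyclic_stress_ratio(a_max_g=0.25, sigma_v=95.0, sigma_v_eff=55.0, r_d=0.96)
print(f"CSR = {csr:.3f}")   # compared against the chart boundary at the site's Vs1
```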
Procedia PDF Downloads 298
1193 Anonymity and Irreplaceability: Gross Anatomical Practices in Japanese Medical Education
Authors: Ayami Umemura
Abstract:
Without exception, all the bodies dissected in gross anatomical practice have lived irreplaceable lives, laughing and talking with family and friends. While medical education aims to cultivate medical knowledge that is universally applicable to all human bodies, it relies on unique, irreplaceable, singular beings. In this presentation, we explore the "irreplaceable relationship" cultivated between medical students and anonymous cadavers during gross anatomical practice, drawing on Emmanuel Levinas's "ethics of the face" and Martin Buber's discussion of "I-Thou". Through this, we aim to present "a different ethic" that emerges only in face-to-face relationships, distinct from the generalized, institutionalized, mass-produced ethics seen in so-called "ethics codes". Since the 1990s, there has been a movement around the world to use gross anatomical practice as an "educational tool" for medical professionalism and medical ethics, and some educational institutions have begun disclosing the actual names, occupations, and birthplaces of the donated bodies to medical students. These efforts have also been criticized because they lack medical calmness. In any case, the issue is that such information concerns a past the medical students can never know directly; the critical fact that medical students build relationships from scratch and spend precious time with the bodies, without any information about their lives before death, is overlooked. During gross anatomical practice, a medical student faces anonymous cadavers, touching and feeling them. In this presentation, we examine a collection of essays on gross anatomical practice written by medical students across the country and collected by the Japanese Association for Volunteer Body Donation since 1978. There, we see the students calling out to the cadaver and being called out to, being encouraged, superimposing the cadavers on their own immediate family, regretting parting, and shedding tears. Medical students can then be seen addressing the dead body in the second-person singular, "you". These behaviors reveal an irreplaceable relationship between the anonymous cadavers and the medical students. The moment they become involved in an irreplaceable relationship between "I and you", an accidental and anonymous encounter becomes inevitable. When medical students recognize themselves as the inevitable receivers of voluntary, addressless gifts, they pledge to become "good doctors" indebted to these anonymous persons. This presentation aims to present "a different ethic" based on uniqueness and irreplaceability, which comes from the faces of others embedded in each context and differs from "routine" and "institutionalized" ethics. That ethic can be realized only because of anonymity.
Keywords: anonymity, irreplaceability, uniqueness, singularity, Emmanuel Levinas, Martin Buber, Alain Badiou, medical education
Procedia PDF Downloads 64
1192 Bio-Guided Isolation of Active New Alkaloids from Alstonia brassii: Toxicity, Antitumour Activity, in Silico and Molecular Modeling
Authors: Mesbah Khaled, Bouraoui Ouissal, Benkiniouar Rachid, Belkhiri Lotfi
Abstract:
Alstonia species, tropical plants with a wide geographical distribution, have been divided into different sections by different authors based on previous studies of several species within the genus: Monachino divides Alstonia into five sections, while Pichon divides it into three. Several plants of this genus, such as Alstonia brassii, have been used in traditional folk medicine to treat ailments such as fever, malaria, and dysentery. Previous studies of the chemical composition of these plants have identified indole alkaloids with cytotoxic, anti-diabetic, and anti-inflammatory properties. The newly discovered monomers are structurally similar to the skeletons of picraline, affinisine, and macroline, while all recently isolated dimeric compounds contain a macroline moiety. In this study, a computational analysis was performed on a series of novel molecules, including both monomeric and dimeric compounds with different structural frameworks. This investigation represents the first computational study of these molecules using an in silico approach incorporating 2D-QSAR data. The analysis involved various computational techniques, including 2D-QSAR modeling, molecular docking, and subsequent validation by molecular dynamics simulation and assessment of ADMET properties. The chemical composition was identified by 1D and 2D NMR. Eight new alkaloids were isolated: five monomers and three dimers. In this section, we focus on the biological activity of four new alkaloids belonging to two different skeletons, the affinisine and macroline skeletons.
Keywords: affinisine, talcarpine, macroline, cytotoxicity, alkaloids
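As a hedged illustration of the 2D-descriptor generation that underlies 2D-QSAR modeling (not the authors' exact pipeline), RDKit can compute standard descriptors from a SMILES string; the indole core below is a placeholder, not one of the isolated alkaloids.

```python
# Illustrative 2D-QSAR feature generation with RDKit (placeholder molecule).
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("c1ccc2c(c1)cc[nH]2")   # indole core, placeholder
features = {
    "MolWt": Descriptors.MolWt(mol),
    "LogP": Descriptors.MolLogP(mol),
    "TPSA": Descriptors.TPSA(mol),
    "HBD": Descriptors.NumHDonors(mol),
    "HBA": Descriptors.NumHAcceptors(mol),
}
print(features)
```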
Procedia PDF Downloads 407
1191 Rt. Side Sleeping Position Prevents Sudden Infant Death Syndrome
Authors: Othman Salim Hussein Al-Fleesy
Abstract:
Background: Studies have shown that sudden infant death syndrome (SIDS) is associated with sleeping position, but to date no study has explained how it could be prevented. Objectives: 1. To determine which sleeping position is the certainly safe one for preventing SIDS. 2. To establish criteria for a suggested definition of, and diagnosis for, SIDS. 3. To discuss the controversy surrounding SUND, ALTE, and nightmares as compared to SIDS. Method: This literature review was built on previous literature; articles were obtained according to their availability to the author. For the purpose of this work, an overview of the SIDS topic was constructed after clarifying the misconceptions and misinterpretations in a number of controversial issues related to SIDS, such as asphyxia, sudden unexpected death among adults (Bangungut or Pokkuri), apparent life-threatening events (ALTE), and nightmares, and the findings were compared with the results of the literature review. This approach yielded a clue for the prevention of SIDS. Results: The review revealed that no previous study has examined the right-side sleeping position at all. The author identifies the right side as the only safe sleeping position for preventing SIDS, suggests a new definition for SIDS, and postulates a right-side-position hypothesis (Alfleesy hypothesis), a testable hypothesis open to all researchers for further study. Conclusion: Our results contradict all previous studies and recommendations. We strongly recommend the right-side position for sleeping to prevent SIDS; a new definition is suggested, and a new hypothesis is postulated.
Keywords: SIDS, ALTE, nightmare, forensic sciences
Procedia PDF Downloads 451
1190 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis
Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed
Abstract:
This study deals with modeling and performance enhancement of a gas-turbine combined cycle power plant. Clean and safe energy is among the greatest challenges of meeting the requirements of a green environment; these requirements have ended the long-governing authority of the steam turbine (ST) in world power generation, and the gas turbine (GT) is replacing it. It is therefore necessary to predict the characteristics of GT systems and to optimize their operating strategy by developing a simulation system. An integrated model and simulation code for evaluating the performance of gas turbine power plants was developed in MATLAB. The performance code for heavy-duty GT and CCGT power plants was validated against the real Baiji GT and MARAFIQ CCGT plants, with satisfactory results. A new correlation technique was considered for all types of simulation data, with a coefficient of determination (R2) of 0.9825. Some recently published correlations were checked against the Baiji GT plant, and an error analysis was applied. GT performance was judged by particular parameters selected from the simulation model, and the Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technology, was also employed. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and ambient temperature strongly influence the overall performance of the GT, with optimum efficiency and power found at higher turbine inlet temperatures. The developed models are powerful tools for estimating the overall performance of GT plants.
Keywords: gas turbine, optimization, ANFIS, performance, operating conditions
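The plant model itself belongs to the study, but a textbook air-standard Brayton-cycle estimate reproduces the reported trend of efficiency rising with turbine inlet temperature (the ~56% figure refers to the combined cycle, which this simple-cycle sketch does not model); all parameter values are illustrative.

```python
# Simple-cycle Brayton efficiency with compressor/turbine isentropic efficiencies.
def brayton_efficiency(pr, t1, t3, gamma=1.4, eta_c=0.85, eta_t=0.88):
    k = (gamma - 1) / gamma
    t2s = t1 * pr**k                      # isentropic compressor exit temperature
    t2 = t1 + (t2s - t1) / eta_c
    t4s = t3 / pr**k                      # isentropic turbine exit temperature
    t4 = t3 - eta_t * (t3 - t4s)
    w_net = (t3 - t4) - (t2 - t1)         # net specific work per unit cp
    q_in = t3 - t2
    return w_net / q_in

for tit in (1200.0, 1400.0, 1600.0):      # turbine inlet temperature, K
    print(tit, round(brayton_efficiency(pr=14, t1=300.0, t3=tit), 3))
```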
Procedia PDF Downloads 427
1189 Progressive Collapse of Cooling Towers
Authors: Esmaeil Asadzadeh, Mehtab Alam
Abstract:
Well-documented records of past structural failures reveal that progressive collapse is one of the major causes of dramatic human loss and economic consequences. Progressive collapse is the failure mechanism in which a structure fails gradually due to the sudden removal of structural elements; the removal of some elements places excessive redistributed loads on the others. This sudden removal may be caused by sudden loading resulting from local explosions, impact loading, or terrorist attacks. Hyperbolic thin-walled concrete shell structures, an important part of nuclear and thermal power plants, are always prone to such attacks. In concrete structures, the gradual failure takes place through the generation of initial cracks and their propagation in the supporting columns and the tower shell, leading to collapse of the entire structure. In this study, the mechanism of progressive collapse for such high-rise towers is simulated using the finite element method. The aim is to provide clear, conceptual, step-by-step descriptions of the procedures for progressive collapse analysis using commercially available finite element structural analysis software, written so that they are readily understandable and usable by practicing engineers. The study is carried out in the following steps: 1. explanation of the modeling, simulation, and analysis procedures, including input screen snapshots; 2. interpretation of the results and discussion; 3. conclusions and recommendations.
Keywords: progressive collapse, cooling towers, finite element analysis, crack generation, reinforced concrete
Procedia PDF Downloads 481
1188 Modeling of Cold Tube Drawing with a Fixed Plug by Finite Element Method and Determination of Optimum Drawing Parameters
Authors: E. Yarar, E. A. Guven, S. Karabay
Abstract:
In this study, a comprehensive simulation was performed of cold tube drawing with a fixed plug. The cold tube drawing process is preferred for its high surface quality and the resulting high mechanical properties; in drawing processes applied to materials with low plastic deformability, however, cracks can occur on the surfaces and process efficiency decreases. The aim of this work is to investigate the effects of different drawing parameters on drawing forces and stresses. In the simulations, optimum conditions were investigated for four different materials: Ti6Al4V, AA5052, AISI 4140, and C365. One of the most important parameters for the cold drawing process is the die angle, so three dies were designed with semi-die angles of 5°, 10°, and 15°, and three different values were used for the friction coefficient between die and material. The reduction of area and the drawing speed were kept constant, and drawing was done in one pass. According to the simulation results, the highest drawing forces were obtained for Ti6Al4V. As the semi-die angle increases, the drawing forces decrease, and this change was most pronounced for Ti6Al4V. Increasing the coefficient of friction increases both the drawing forces and the drawing stresses. The increase in die angle also increased the drawing stress distribution for the three materials other than C365. According to the analysis, the designed drawing die is suitable for drawing; the lowest drawing stress distribution and drawing forces were obtained for AA5052. Drawing die parameters have a direct effect on the results, and the lubricants used for drawing have a significant effect on the drawing forces.
Keywords: cold tube drawing, drawing force, drawing stress, semi die angle
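As a rough analytical counterpart to the simulations, the classic slab-method drawing-stress estimate below (the wire-drawing form, often adapted to fixed-plug tube drawing by lumping die and plug friction into mu) reproduces the reported trend of drawing force falling as the semi-die angle grows; all parameter values are illustrative, not the paper's inputs.

```python
# Slab-method drawing-stress estimate as a function of semi-die angle.
import math

def drawing_stress(sigma_y, r, alpha_deg, mu):
    """sigma_d = sigma_y (1+B)/B [1 - (1-r)^B], with B = mu * cot(alpha)."""
    B = mu / math.tan(math.radians(alpha_deg))
    return sigma_y * (1 + B) / B * (1 - (1 - r) ** B)

# Illustrative: Ti-alloy-like flow stress (MPa), 25% area reduction, mu = 0.08.
for alpha in (5, 10, 15):                      # semi-die angle, degrees
    s = drawing_stress(sigma_y=900.0, r=0.25, alpha_deg=alpha, mu=0.08)
    print(f"alpha = {alpha:2d} deg -> sigma_d ~ {s:.0f} MPa")
```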
Procedia PDF Downloads 168
1187 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. This complexity and dynamism also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment, and traditional resource scheduling algorithms are inadequate for complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job-flow scheduling problem and applying a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. Introducing reinforcement learning allows resource allocation strategies to be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that the algorithm has significant advantages in path-planning length, system delay, and network load balancing, and that it effectively solves the complex resource scheduling problem in cryptographic services.
Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
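A minimal PyTorch sketch of the MADDPG core named in the title (decentralized actors, centralized critics) is given below; the dimensions, batch contents, and hyperparameters are toy placeholders, not the paper's scheduling environment.

```python
# Two-agent MADDPG update step: each critic sees all observations and actions.
import copy
import torch
import torch.nn as nn

OBS, ACT, N_AGENTS, BATCH, GAMMA = 8, 2, 2, 32, 0.95

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(),
                                 nn.Linear(64, ACT), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * (OBS + ACT), 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [Critic() for _ in range(N_AGENTS)]
t_actors, t_critics = copy.deepcopy(actors), copy.deepcopy(critics)
a_opt = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
c_opt = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

# Fake replay-buffer batch standing in for scheduling transitions.
obs = torch.randn(BATCH, N_AGENTS, OBS)
act = torch.randn(BATCH, N_AGENTS, ACT)
rew = torch.randn(BATCH, N_AGENTS)
nxt = torch.randn(BATCH, N_AGENTS, OBS)

flat = lambda x: x.reshape(BATCH, -1)
for i in range(N_AGENTS):
    # Critic update: TD target uses the target actors' next joint action.
    with torch.no_grad():
        nxt_act = torch.stack([t_actors[j](nxt[:, j]) for j in range(N_AGENTS)], 1)
        y = rew[:, i:i + 1] + GAMMA * t_critics[i](flat(nxt), flat(nxt_act))
    q = critics[i](flat(obs), flat(act))
    c_loss = nn.functional.mse_loss(q, y)
    c_opt[i].zero_grad(); c_loss.backward(); c_opt[i].step()

    # Actor update: ascend the centralized Q w.r.t. agent i's own action.
    cur = act.clone()
    cur[:, i] = actors[i](obs[:, i])
    a_loss = -critics[i](flat(obs), flat(cur)).mean()
    a_opt[i].zero_grad(); a_loss.backward(); a_opt[i].step()
```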
Procedia PDF Downloads 27
1186 Imbalance on the Croatian Housing Market in the Aftermath of an Economic Crisis
Authors: Tamara Slišković, Tomislav Sekur
Abstract:
This manuscript examines the factors that affect demand and supply on the housing market in Croatia. The period from the beginning of this century until 2008 was characterized by strong expansion of construction, housing, and the real estate market in general. Demand for residential units was expanding, supported by favorable bank lending conditions, and supply-side indicators such as the number of newly built houses and the construction volume index were also increasing. Rapid demand growth, along with somewhat slower supply growth, led to a situation in which new apartments were sold before residential buildings were completed. The result was rising housing prices, indicating a clear link between prices and supply and demand on the housing market. After 2008, however, general economic conditions in Croatia worsened and housing demand fell dramatically, while supply descended at a much slower pace. Given this gap between supply and demand, it can be concluded that the housing market in Croatia is in imbalance, yet the trend has been accompanied by a relatively small decrease in housing prices. The net result is a large number of unsold housing units at relatively high price levels. For this reason, it can be argued that housing prices are sticky and that, consequently, the post-crisis price level does not correspond to the discrepancy between supply and demand on the Croatian housing market. The degree of price rigidity can be determined by including the housing price as an explanatory variable in the housing demand function, alongside a demographic variable (e.g., the number of households), the interest rate on housing loans, households' disposable income, and rent. The equilibrium price is reached when housing demand equals supply, and the speed of adjustment of actual prices to equilibrium prices reveals the extent to which prices are rigid; capturing this requires including lagged housing prices as an independent variable when estimating the demand function. We also examine the supply side of the market, to explain to what extent housing prices drive new construction activity, and we test whether new construction on the Croatian market depends on current prices or on lagged prices. The number of dwellings approximates new construction (a flow variable), while housing prices (current or lagged), the stock of dwellings in the previous period, and a series of costs related to new construction are the independent variables. We conclude that the key reason for the imbalance on the Croatian housing market should be sought in the relative price elasticities of supply and demand.
Keywords: Croatian housing market, economic crisis, housing prices, supply imbalance, demand imbalance
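A sketch of the price-stickiness test described, regressing demand on current and lagged prices alongside the other determinants, might look like the following; the variable names and synthetic data are illustrative only.

```python
# Demand function with current and lagged prices (synthetic stand-in data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 60                                       # e.g., quarterly observations
df = pd.DataFrame({
    "income": rng.normal(100, 10, n),        # disposable income index
    "rate": rng.normal(5, 1, n),             # housing-loan interest rate, %
    "price": rng.normal(1600, 120, n),       # price level, illustrative units
})
df["price_lag"] = df["price"].shift(1)
df["demand"] = (0.5 * df["income"] - 3.0 * df["rate"]
                - 0.02 * df["price_lag"] + rng.normal(0, 2, n))

# The lagged-price coefficient carries the speed-of-adjustment information.
model = smf.ols("demand ~ income + rate + price + price_lag",
                data=df.dropna()).fit()
print(model.params)
```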
Procedia PDF Downloads 277
1185 Optimization Aluminium Design for the Facade Second Skin toward Visual Comfort: Case Studies & Dialux Daylighting Simulation Model
Authors: Yaseri Dahlia Apritasari
Abstract:
Visual comfort is an important need for building occupants, and it can be fulfilled through natural lighting (daylighting) and artificial lighting. One strategy for optimizing natural lighting is the design of a facade second skin, which can reduce glare and help meet visual comfort needs. However, this strategy can also fail to achieve the light intensity required for visual comfort, because the material, design, and opening percentage of the second skin block sunlight. This paper discusses aluminum as a material for facade second skin design that can deliver optimal visual comfort, with the Multi Media Tower building as a case study. The research methodology combines quantitative and qualitative methods through field observation, lighting measurement, and a visual comfort questionnaire, followed by simulation modeling (DIALux 4.13, 2016) of three facade second skin design models. The steps were: (1) measuring the visual comfort factors, i.e., indoor and outdoor light intensity; (2) collecting visual comfort data from building occupants; (3) building models with different facade second skin designs; and (4) simulating and analyzing the light intensity values for each model against the occupants' visual comfort standard of 350 lux (Indonesian National Standard, 2010). The results show that optimizing aluminum for the facade second skin design can meet the occupants' optimal visual comfort, and they yield a recommended opening percentage for the aluminum second skin that achieves it.
Keywords: aluminium material, facade, second skin, visual comfort
Procedia PDF Downloads 353
1184 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study
Authors: Samantha Meyerholz
Abstract:
MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immuno Histology (IHC) Lab are the backbone for carrying out this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that a pathologist reads to diagnose and dictate a patient's treatment course. The lab processes 200 requests daily, generating approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining, and sorting stations, managed by multiple technicians throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly linked to patients receiving rapid treatment and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted 48 hours. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatment and potentially better outcomes. Additional project goals included streamlining technician and material workflow while reducing waste and increasing efficiency. The project followed a DMAIC structure, with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S, and spaghetti diagramming. Reducing lab technician movement and improving body position at each workstation were top concerns of pathology leadership. With new equipment being brought into the lab to support the workflow improvements, screen and tool placement was discussed with the technicians in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and technician motion. The IHC lab plans to continue using these techniques to further reduce the time gap between tissue analysis and cancer care.
Keywords: engineering, ergonomics, healthcare, lean
Procedia PDF Downloads 223
1183 Electron-Ion Recombination for Photoionized and Collisionally Ionized Plasmas
Authors: Shahin A. Abdel-Naby, Asad T. Hassan
Abstract:
Astrophysical plasma environments can be classified into collisionally ionized plasmas (CP) and photoionized plasmas (PP). In PP, ionization is caused by an external radiation field, while in CP it is caused by electron collisions. Accurate and reliable laboratory astrophysical data for electron-ion recombination are needed for plasma modeling at low and high temperatures. Dielectronic recombination (DR) is the dominant recombination process in CP for most ions. When a free electron is captured by an ion with simultaneous excitation of its core, a doubly excited intermediate state may be formed. The doubly excited state relaxes either by electron emission (autoionization) or by radiative decay (photon emission); DR takes place when the relaxation occurs to a bound state by photon emission. DR calculations at low temperatures are problematic and challenging, since small uncertainties in the low-energy DR resonance positions can produce huge uncertainties in the DR rate coefficients. DR rate coefficients for N²⁺ and O³⁺ ions are calculated using the state-of-the-art multi-configuration Breit-Pauli atomic structure AUTOSTRUCTURE collisional package within the generalized collisional-radiative framework. Level-resolved calculations of RR and DR rate coefficients from the ground and metastable initial states are produced in an intermediate coupling scheme for Δn = 0 and Δn = 1 core excitations. The DR cross sections for these ions are convoluted with the experimental electron-cooler temperatures to produce DR rate coefficients. Good agreement is found between these rate coefficients and the experimental measurements performed at the CRYRING heavy-ion storage ring for both ions.
Keywords: atomic data, atomic process, electron-ion collision, plasmas
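Comparing with storage-ring data requires turning cross sections into Maxwellian-averaged rate coefficients; a toy sketch of that convolution step is shown below, with an illustrative Lorentzian resonance standing in for the AUTOSTRUCTURE output.

```python
# Maxwellian-averaged rate coefficient alpha(T) = <v sigma> from sigma(E).
import numpy as np
from scipy.integrate import trapezoid

KB = 8.617333e-5                      # Boltzmann constant, eV/K
ME = 9.109e-31                        # electron mass, kg
EV = 1.602e-19                        # J per eV

def rate_coefficient(E, sigma, T):
    kT = KB * T
    f = 2.0 * np.sqrt(E / np.pi) * kT**-1.5 * np.exp(-E / kT)  # per eV
    v = np.sqrt(2.0 * E * EV / ME) * 100.0                     # cm/s
    return trapezoid(v * sigma * f, E)

E = np.linspace(1e-4, 10.0, 20000)                  # electron energy, eV
sigma = 1e-18 * 0.01**2 / ((E - 0.5)**2 + 0.01**2)  # toy resonance, cm^2
for T in (1e3, 1e4, 1e5):
    print(f"T = {T:.0e} K  alpha = {rate_coefficient(E, sigma, T):.2e} cm^3/s")
```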
Procedia PDF Downloads 99
1182 Simulation of Complex-Shaped Particle Breakage with a Bonded Particle Model Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to failure, and that often only break locally instead of fracturing completely, some of these principles do not give realistic results. The reason is that the methods in question, such as the Particle Replacement Method (PRM) or Voronoi fracture, replace the initial particle (the one intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy; such methods are therefore commonly used for materials that fracture completely rather than break locally. When simulating local failure, it is consequently advisable to pre-build the initial particle from sub-particles that are bonded together, with the dimensions of the sub-particles defining the minimum size of the fracture fragments. This structure of bonded sub-particles enables the initial particle to break at the locations of the highest local loads, through failure of the bonds in those areas, leaving several sub-particle clusters that can in turn break locally. In this project, different methods for generating and calibrating complex-shaped particle conglomerates with bonded particle modeling (BPM) were evaluated on the example of filter cake, with the goal of depicting more realistic fracture behavior. The method that proved suitable for this purpose, and that furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles at industrial simulation scales, is presented in this paper.
Keywords: bonded particle model, DEM, filter cake, particle breakage
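A minimal sketch of the BPM failure check at the heart of this approach, a bond breaking when its normal or shear stress exceeds a calibrated strength, could look like the following; the strengths, area, and forces are illustrative assumptions.

```python
# Per-bond failure check of a bonded particle model (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Bond:
    area: float          # bond cross-section, m^2
    sigma_max: float     # normal strength, Pa
    tau_max: float       # shear strength, Pa
    intact: bool = True

def update_bond(bond, f_normal, f_shear):
    """Break the bond locally when either strength is exceeded."""
    if not bond.intact:
        return False
    sigma = f_normal / bond.area
    tau = f_shear / bond.area
    if sigma > bond.sigma_max or tau > bond.tau_max:
        bond.intact = False          # local failure -> sub-particle clusters
    return bond.intact

b = Bond(area=1e-6, sigma_max=2e6, tau_max=1e6)
print(update_bond(b, f_normal=1.5, f_shear=2.5))  # forces in N; shear breaks it
```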
Procedia PDF Downloads 211
1181 Induced Pulsation Attack Against Kalman Filter Driven Brushless DC Motor Control System
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
Using modeling and simulation tools, we introduce a novel bias injection attack, named the 'Induced Pulsation Attack', which targets cyber-physical systems with a closed-loop-controlled brushless DC (BLDC) motor and a Kalman filter in the feedback loop. The attack engages a linear function with a constant gradient to distort the coefficient of the injected bias, which falsifies the Kalman filter's estimates of the rotor's angular speed. As a result, this manipulation inside the control system causes periodic pulsations, in the form of a high-magnitude asymmetric sine wave, in both the current and the voltage of the circuit windings. It is shown that by varying the gradient of the linear function, one can control both the frequency and the structure of the induced pulsations. It is also demonstrated that terminating the attack at any point triggers additional compensating effort from the controller to restore the speed to its equilibrium value. This compensation produces an exponentially decaying wave, which we call the 'attack withdrawal syndrome' wave. The conditions for maximizing or minimizing the impact of the attack withdrawal syndrome are determined. Linking the termination of the attack to the end of a full period of the induced pulsation wave is shown to nullify the attack withdrawal syndrome wave, thereby improving the attack's covertness.
Keywords: cyber-attack, induced pulsation, bias injection, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, saw-function, cyber-physical system
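A toy scalar reproduction of the mechanism (not the authors' simulation) is sketched below: a Kalman filter tracks rotor speed while a bias whose coefficient grows with a constant gradient is injected into the measurement, and a simple P-controller chasing the falsified estimate makes the true speed pulsate; all gains and noise levels are assumptions.

```python
# Scalar Kalman filter under a gradient-modulated bias injection (toy model).
import numpy as np

steps, dt = 2000, 1e-3
q, r = 1e-4, 1e-2                      # process / measurement noise variances
x_true, x_hat, p = 100.0, 100.0, 1.0   # rotor speed (rad/s) and KF state
grad = 0.05                            # attacker's constant gradient
rng = np.random.default_rng(3)
log = []
for k in range(steps):
    z = x_true + rng.normal(0.0, np.sqrt(r))
    z += grad * k * dt * np.sin(2 * np.pi * 5 * k * dt)  # injected bias term
    p += q
    K = p / (p + r)
    x_hat += K * (z - x_hat)
    p *= (1.0 - K)
    # P-controller chasing the falsified estimate drives the true speed.
    x_true += 0.5 * (100.0 - x_hat) * dt
    log.append((x_true, x_hat))
print(log[-1])
```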
Procedia PDF Downloads 73
1180 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Since technological advances focus on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work studies the fluid-structure interaction of an idealized wind turbine. The blade is treated as a beam attached to a cylindrical hub whose rotation axis points into the air flow passing through the rotor. Using the calculus of variations and the finite difference method, the blade is represented by a discrete number of nodes at which the aerodynamic forces are evaluated. The study was implemented in MATLAB and performs a numerical simulation of a simplified windmill model containing a hub and three blades modeled as Euler-Bernoulli beams under small strains and a constant, uniform wind. The mathematical formulation follows Hamilton's extended principle, with aerodynamic loads applied at the nodes according to the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Because a wind turbine blade operates over a wide range of angles of attack, the airfoil used in the model was the NREL SERI S809, for which equations for Cl and Cd as functions of the angle of attack were obtained based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results show the dynamic response of the system in terms of displacement and rotational speed as the turbine reaches its final speed. Although the results were not compared to real windmills or more complete models, the resulting values are consistent with the size of the system and the wind speed.
Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
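The per-node aerodynamic load evaluation described can be sketched as follows; the Cl/Cd fits here are crude stand-ins for the NASA-based S809 polynomials, and all dimensions are illustrative.

```python
# Local relative wind, angle of attack, and lift/drag at each blade node.
import numpy as np

RHO, WIND, OMEGA, CHORD = 1.225, 10.0, 3.0, 0.8   # SI units, illustrative

def cl_cd(alpha):
    """Placeholder S809-like fits, valid only for small angles of attack."""
    return 2.0 * np.pi * alpha * 0.8, 0.01 + 0.5 * alpha**2

def node_loads(radii, twist):
    u_tan = OMEGA * radii                  # tangential speed at each node
    w = np.hypot(WIND, u_tan)              # local relative wind speed
    phi = np.arctan2(WIND, u_tan)          # inflow angle
    alpha = phi - twist                    # angle of attack
    cl, cd = cl_cd(alpha)
    q = 0.5 * RHO * w**2 * CHORD           # dynamic pressure times chord
    return q * cl, q * cd                  # lift and drag per unit span, N/m

radii = np.linspace(1.0, 20.0, 10)         # node positions along the blade, m
twist = np.deg2rad(np.linspace(15, 2, 10))
lift, drag = node_loads(radii, twist)
print(lift.round(1))
```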
Procedia PDF Downloads 269
1179 Modeling and Simulation of Vibratory Behavior of Hybrid Smart Composite Plate
Authors: Salah Aguib, Noureddine Chikh, Abdelmalek Khabli, Abdelkader Nour, Toufik Djedid, Lallia Kobzili
Abstract:
This study presents the behavior of a hybrid smart sandwich plate with a magnetorheological elastomer (MRE) core. In order to improve the vibrational behavior of the plate, the pseudo-fibers formed by the effect of the magnetic field on the elastomer filled with ferromagnetic particles are oriented at 45° with respect to the 0° direction of the magnetic field. Ritz's approach is used to solve the physical problem, and, in order to verify and compare the results, an analysis using the finite element method was also carried out. The rheological properties of the MRE material at 0° and 45° were determined experimentally. The studied elastomer is prepared from a mixture of silicone oil, RTV141A polymer, and iron particles at 30% of the total mixture; the mixture is stirred for about 15 minutes to obtain an elastomer paste with good homogenization. To develop the magnetorheological elastomer, this paste is injected into an aluminum mold and subjected to a magnetic field. In our work, we chose a filling percentage of 30% to obtain the best MRE characteristics. The mechanical characteristics obtained with a dynamic mechanical viscoanalyzer (DMA) are used in both numerical approaches. The natural frequencies and modal damping of the sandwich plate are calculated and discussed for various magnetic field intensities, and the results obtained by the two methods are compared. Such off-axis anisotropic MRE structures could open up new opportunities in various fields of aeronautics, aerospace, mechanical engineering, and civil engineering.
Keywords: hybrid smart sandwich plate, vibratory behavior, FEM, Ritz approach, MRE
Procedia PDF Downloads 68
1178 Additive Manufacturing's Impact on Product Design and Development: An Industrial Case Study
Authors: Ahmed Abdelsalam, Daniel Roozbahani, Marjan Alizadeh, Heikki Handroos
Abstract:
The aim of this study was to redesign a pressing air nozzle with lower weight and improved efficiency using Selective Laser Melting (SLM) technology, following Design for Additive Manufacturing (DfAM) methods. The original pressing air nozzle was modified in SolidWorks 3D CAD, and two design concepts were introduced in which the air channels were reworked according to the DfAM approach. 3D models of the original pressing air nozzle and the proposed designs were created, and their flow characteristics were obtained using Ansys software. The CFD results for the original and the two proposed designs were extracted, compared, and analyzed to demonstrate the impact of design on the development of a more efficient pressing air nozzle for the AM process. Improved airflow was achieved by optimizing the nozzle's internal channel in both design concepts, which yielded 30% and 50.6% lower pressure drops than the original design. In addition, the proposed designs attained 48.3% and 70.3% reductions in product weight compared to the original design. A pressing air nozzle with enhanced efficiency and lower weight was therefore produced using the DfAM-driven designs developed in this study. The main contribution of this study is to investigate the additional design possibilities that the SLM process opens up for modern parts. The approach presented here can be applied to almost any similar industrial application.
Keywords: additive manufacturing, design for additive manufacturing, design methods, product design, pressing air nozzle
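For reference, the reported percentages follow the usual definition of relative reduction. A minimal sketch, with invented baseline values chosen only so that the abstract's figures are reproduced:

```python
# Minimal sketch: how the reported reductions are computed. The baseline
# pressure drop and mass are invented placeholders; only the percentage
# definition reflects the abstract.

def pct_reduction(original, redesigned):
    """Percent reduction of `redesigned` relative to `original`."""
    return 100.0 * (original - redesigned) / original

dp_orig = 1000.0   # hypothetical baseline pressure drop [Pa]
m_orig = 0.50      # hypothetical baseline nozzle mass [kg]

for name, dp, m in (("Design 1", 700.0, 0.2585),
                    ("Design 2", 494.0, 0.1485)):
    print(name,
          round(pct_reduction(dp_orig, dp), 1), "% less pressure drop,",
          round(pct_reduction(m_orig, m), 1), "% less mass")
```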
Procedia PDF Downloads 178
1177 Reliability Based Analysis of Multi-Lane Reinforced Concrete Slab Bridges
Authors: Ali Mahmoud, Shadi Najjar, Mounir Mabsout, Kassim Tarhini
Abstract:
Empirical expressions for estimating the wheel load distribution and live-load bending moment are typically specified in highway bridge codes such as the AASHTO procedures. The purpose of this paper is to analyze the reliability levels inherent in reinforced concrete slab bridges designed according to the simplified empirical live-load equations in the AASHTO LRFD procedures. To achieve this objective, multi-lane bridges (three and four lanes) with different spans are modeled using finite-element analysis (FEA) under HS20 truck loading, tandem loading, and standard lane loading per the AASHTO LRFD procedures. The FEA results are compared with the AASHTO LRFD moments in order to quantify the biases that might result from the simplifying assumptions adopted in AASHTO. A reliability analysis is then conducted to quantify the reliability index of bridges designed using the AASHTO procedures. To reach a consistent level of safety for three- and four-lane bridges, following up on a previous study restricted to one- and two-lane bridges, the live load factor in the AASHTO LRFD design equation will be assessed and, if needed, revised by adjusting it for these lane counts. The results will provide structural engineers with more consistent provisions to design concrete slab bridges and to evaluate the load-carrying capacity of existing bridges.
Keywords: reliability analysis of concrete bridges, finite element modeling, reliability analysis, reinforced concrete bridge design, load carrying capacity
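As a hedged sketch of the kind of reliability index computation the abstract refers to, the snippet below evaluates a first-order index for a normal resistance-minus-load margin. The bias factor and all statistics are illustrative placeholders, not calibrated AASHTO values or the paper's results.

```python
import numpy as np

# Minimal sketch: first-order reliability index for the margin g = R - Q
# with independent normal resistance R and load effect Q. All numbers are
# illustrative placeholders.

def beta_normal(mu_r, sigma_r, mu_q, sigma_q):
    """Reliability index for g = R - Q, with R and Q independent normals."""
    return (mu_r - mu_q) / np.hypot(sigma_r, sigma_q)

# Example: a nominal design moment scaled by a hypothetical FEA/AASHTO bias.
mn_aashto = 1000.0                 # AASHTO LRFD design moment [kN*m]
bias = 0.92                        # hypothetical FEA-to-AASHTO moment ratio
mu_q, cov_q = bias * mn_aashto, 0.18
mu_r, cov_r = 1.15 * 1.75 * mn_aashto, 0.12  # hypothetical resistance stats

beta = beta_normal(mu_r, cov_r * mu_r, mu_q, cov_q * mu_q)
print(round(beta, 2))  # target indexes near 3.5 are typical for bridges
```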
Procedia PDF Downloads 343
1176 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil Water Assessment Tool Model for Watershed Sustainability
Authors: Warda L. Panondi, Norihiro Izumi
Abstract:
Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversions and plantations, with critical erosion rates beyond the tolerable limit of 10 ton/ha/yr in all of its sub-basins. Given this, prediction of runoff volume and sediment yield is essential for realistically evaluating the country's soil conservation techniques. In this research, the Pulangi watershed was modeled using the Soil and Water Assessment Tool (SWAT) to predict the watershed basin's annual runoff and sediment yield. SWAT-CUP was utilized for the calibration and validation of the model. The model was calibrated with monthly discharge data for 1990-1993 and validated for 1994-1997; the sediment yield was calibrated for 2014 and validated for 2015 because of limited observed datasets. Uncertainty analysis and calculation of efficiency indexes were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and PBIAS, the streamflow simulation shows good performance for both the calibration and validation periods, while the sediment yield shows satisfactory performance for both calibration and validation. This study was thereby able to identify the most critical sub-basin and the most severe needs for soil conservation. Furthermore, it will provide baseline information to prevent floods and landslides and serve as a useful reference for land-use policies and for watershed management and sustainability in the Pulangi watershed.
Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model
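The efficiency indexes named above have standard closed forms. A minimal sketch (textbook formulas, not SWAT-CUP code) with made-up discharge series:

```python
import numpy as np

# Minimal sketch: goodness-of-fit indexes computed from observed (obs) and
# simulated (sim) series. Standard formulas; the data below are invented.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, bias, and variability ratios."""
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()      # bias ratio
    alpha = sim.std() / obs.std()       # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

def pbias(obs, sim):
    """Percent bias: positive means the simulation underestimates."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([3.1, 4.8, 6.2, 5.0, 2.9, 4.1])   # observed monthly discharge
sim = np.array([3.4, 4.5, 5.8, 5.3, 3.1, 3.9])   # simulated discharge
print(round(nse(obs, sim), 3),
      round(kge(obs, sim), 3),
      round(pbias(obs, sim), 2))
```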
Procedia PDF Downloads 211
1175 Governance in the Age of Artificial Intelligence and E-Government
Authors: Mernoosh Abouzari, Shahrokh Sahraei
Abstract:
Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities for participation in democratic processes and institutions. It makes information technology easy to use for delivering government services to the citizen around the clock, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services and their movement toward intelligent systems can re-establish the relationship between the government and citizens and among the elements and components of government. Electronic government is the result of the use of information and communication technology (ICT); implementing it at the government level brings tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, which in turn brings widespread public satisfaction. The main tier of electronic government services has now become tangible with the presence of artificial intelligence systems; recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and data classification. With deep learning tools, artificial intelligence can bring a significant improvement in the delivery of services to citizens and uplift the work of public service professionals, while also inspiring a new generation of technocrats to enter government. This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize everything, the world order will come to depend on rich multinational companies, and algorithmic systems will in effect become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been started by engineers, large economic companies, and scientists who are rarely aware of the political complexities of their decisions and certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other ideology wants to organize the world of 2050, it must not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes caused by artificial intelligence in the political and economic order will thus lead to a major change in the way all countries deal with the phenomenon of digital globalization. In this paper, while discussing the role and performance of e-government, we examine the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in concepts of governance.
Keywords: electronic government, artificial intelligence, information and communication technology, system
Procedia PDF Downloads 96