Search results for: real gas model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20011


9631 Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities

Authors: Veniamin Boiarkin, Bruno Bogaz Zarpelão, Muttukrishnan Rajarajan

Abstract:

The manufacturing sector is a vital component of most economies, which makes it an attractive target for cyberattacks; disruption of operations can have significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantage, and steal intellectual property by obtaining unauthorised access to sensitive data. Access to sensitive data helps organisations enhance their production and management processes. However, most existing data-sharing mechanisms are either susceptible to cyberattacks or carry a heavy computational overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer privacy adjustment mechanism is proposed to ensure that end-users retain control over their privacy, as required by recent government regulations such as the General Data Protection Regulation. Second, a local differential privacy-based mechanism is proposed to protect end-user privacy by hiding real data according to end-user preferences. The proposed scheme can be applied to different industrial control systems; in this study, it is validated for energy utility use cases consisting of smart devices. The results show that the proposed scheme can guarantee the required level of privacy with an expected relative error in utility.
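The abstract does not detail the local differential privacy mechanism itself; as a hedged illustration of the general idea, here is a minimal binary randomized-response sketch in Python (the function names and the ε value are illustrative, not taken from the paper):

```python
import math
import random

def randomized_response(true_bit, epsilon):
    """Report the true bit with probability p = e^eps / (e^eps + 1); flip it otherwise."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p else 1 - true_bit

def estimate_mean(reports, epsilon):
    """Debias the perturbed reports to recover an unbiased estimate of the true mean."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    # E[report] = mu*(2p-1) + (1-p), so invert that affine map:
    return (observed + p - 1) / (2 * p - 1)
```

A utility can aggregate such perturbed readings while each end-user's true value stays hidden; a smaller ε gives stronger privacy at the cost of a larger relative error in the aggregate, matching the privacy-utility trade-off the abstract describes.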

Keywords: data-sharing, local differential privacy, manufacturing, privacy-preserving mechanism, smart utility

Procedia PDF Downloads 58
9630 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand

Authors: Saeed Poormoaied

Abstract:

Perishability and the design of intelligent control policies for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy that takes both the stock level and the aging process of items into account. The vendor operates under a warehouse framework in which its lot size is determined with respect to the batch size of the buyer. The model assumes a positive, fixed lead time for the buyer and zero lead time for the vendor. Demand follows a Poisson process, and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system using the renewal reward theorem. Items have a fixed lifetime, after which they become unusable and are disposed of from the buyer's system. The aging of items starts when they are unpacked and ready for consumption at the buyer; while items are held by the vendor, there is no aging and hence no perishing at the vendor's site. The model is developed under a centralized framework that takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under a service level constraint at the buyer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain, and the efficiency of the proposed age-based policy is evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
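The paper derives exact renewal-reward expressions that the abstract does not reproduce; as a rough sketch of the lost-sales Poisson demand setting it describes, here is a small Monte Carlo estimate of the fill rate for a given stock level (all parameter values hypothetical):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw a Poisson variate by Knuth's product method (fine for small lambda)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_fill_rate(stock, demand_rate, horizon, n_runs=10000, seed=1):
    """Estimate the fraction of demand served when unmet demand is lost."""
    rng = random.Random(seed)
    served = total = 0
    for _ in range(n_runs):
        d = poisson_sample(demand_rate * horizon, rng)
        served += min(d, stock)  # lost-sales: serve at most the stock on hand
        total += d
    return served / total if total else 1.0
```

Such a simulation is only a sanity check on the service-level constraint; the paper itself works with exact analytic expressions rather than simulation.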

Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem

Procedia PDF Downloads 130
9629 The Effect of Naringenin on the Apoptosis in T47D Cell Line of Breast Cancer

Authors: AliAkbar Hafezi, Jahanbakhsh Asadi, Majid Shahbazi, Alijan Tabarraei, Nader Mansour Samaei, Hamed Sheibak, Roghaye Gharaei

Abstract:

Background: Breast cancer is the most common cancer in women. In most cancer cells, apoptosis is blocked. Given the importance of apoptosis in cancer cell death and the role of different genes in its induction or inhibition, the search for compounds that can initiate apoptosis in tumor cells is regarded as a new strategy in anticancer drug discovery. The aim of this study was to investigate the effect of Naringenin (NGEN) on apoptosis in the T47D breast cancer cell line. Materials and Methods: In this in vitro experimental study, the T47D breast cancer cell line was used. The cells were treated with 20, 200, and 1000 µM Naringenin for 24, 48, and 72 hours. Transcription levels of genes involved in apoptosis, including Bcl-2, Bax, Caspase 3, Caspase 8, Caspase 9, P53, PARP-1, and FAS, were then assessed using real-time PCR. The collected data were analyzed using IBM SPSS Statistics 24.0. Results: Naringenin at 20, 200, and 1000 µM and at all three time points (24, 48, and 72 hours) increased the expression of Caspase 3, P53, PARP-1, and FAS, reduced the expression of Bcl-2, and increased the Bax/Bcl-2 ratio; however, at none of the studied doses and times did it have a significant effect on the expression of Bax, Caspase 8, or Caspase 9. Conclusion: This study indicates that Naringenin can reduce the growth of some cancer cells and cause their death through increased apoptosis and decreased expression of the anti-apoptotic Bcl-2 gene, resulting in the induction of apoptosis via both intrinsic and extrinsic pathways.
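The abstract reports expression changes from real-time PCR; assuming the standard 2^-ΔΔCt (Livak) quantification, which the abstract does not name explicitly, the fold-change calculation looks like this:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    Ct values are qPCR cycle thresholds; 'ref' is the housekeeping gene.
    """
    d_treated = ct_target_treated - ct_ref_treated   # dCt in treated cells
    d_control = ct_target_control - ct_ref_control   # dCt in control cells
    ddct = d_treated - d_control
    return 2 ** (-ddct)
```

A fold change above 1 corresponds to the up-regulation reported for Caspase 3, P53, PARP-1, and FAS; a value below 1 corresponds to the down-regulation of Bcl-2.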

Keywords: apoptosis, breast cancer, naringenin, T47D cell line

Procedia PDF Downloads 37
9628 Numerical Prediction of Crack Width in Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been used to predict cracking in concrete structures under loading, and finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, since cracks that exceed the allowable widths are unacceptable in environments that are aggressive to reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end specimens were constructed and tested in the Structures and Materials Laboratory of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built in the software package ANSYS v. 16.2. The concrete was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression, with a Drucker-Prager yield surface to include plastic deformation. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy, that is, the energy required to break apart the interface surfaces. This technique is the CZM.
The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. The crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the dapped end. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also yielded load-crack width curves. In both cases, the proposed model predicts the maximum crack width with an error of ± 30%, and the orientation of the crack proved fundamental to the prediction of crack width. From a practical point of view, the results regarding crack width, the load-displacement curves, and the predicted crack locations can be considered good.
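The bilinear Mode I cohesive law behind the contact elements can be sketched as a simple traction-separation function; the parameter names below are generic illustrations, not ANSYS input labels or values from the paper:

```python
def bilinear_traction(delta, sigma_max, delta0, delta_f):
    """Mode I bilinear cohesive law: linear rise to sigma_max at delta0,
    then linear softening to zero traction at delta_f (full debonding)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return sigma_max * delta / delta0          # elastic (undamaged) branch
    if delta <= delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                     # fully debonded

def fracture_energy(sigma_max, delta_f):
    """Area under the bilinear curve = critical energy release rate G_c."""
    return 0.5 * sigma_max * delta_f
```

The area under this curve is the critical fracture energy mentioned in the abstract; once the opening reaches delta_f the interface carries no traction, i.e., the crack face is fully formed.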

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 148
9627 How Message Framing and Temporal Distance Affect Word of Mouth

Authors: Camille Lacan, Pierre Desmet

Abstract:

In the crowdfunding (CF) model, a campaign succeeds by collecting the required funds within a predefined duration. The success of a CF campaign depends both on the capacity to attract members of the relevant online communities and on those members' involvement in online word-of-mouth recommendations. To maximize the campaign's probability of success, project creators (i.e., organizations appealing for financial resources) send messages asking contributors to spread word of mouth. Internet users relay information about projects through word of mouth, defined as "a critical tool for facilitating information diffusion throughout online communities". The effectiveness of these messages depends on the message framing and on the time at which they are sent to contributors (i.e., at the start of the campaign or close to the deadline). This article addresses the following question: what are the effects of message framing and temporal distance on the willingness to share word of mouth? Drawing on Prospect Theory and Construal Level Theory, this study examines the interplay between message framing (gains vs. losses) and temporal distance (near vs. distant deadline) on the intention to share word of mouth. A between-subjects experimental design is conducted to test the research model. Results show significant differences between a loss-framed message (lack of benefits if the campaign fails) associated with a short deadline (ending tomorrow) and a gain-framed message (benefits if the campaign succeeds) associated with a distant deadline (ending in three months). However, this effect is moderated by the anticipated regret of a campaign failure and by temporal orientation. These moderating effects help specify the boundary conditions of the framing effect. Managing message framing and temporal distance are thus key decisions for influencing the willingness to share word of mouth.

Keywords: construal levels, crowdfunding, message framing, word of mouth

Procedia PDF Downloads 236
9626 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many ways to control a humanoid robot, but the use of electromyogram (EMG) electrodes has its own importance in setting up the control system: an EMG-based control system helps to control robotic devices with greater fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition and real-time control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal produced by the nerve impulses in the moving muscles. The analog signals picked up from the muscles are fed to a differential muscle sensor, which processes them into a form suitable for the microcontroller. The microcontroller digitizes the signal with its ADC and sends its decision to the CM-530 humanoid robot controller over a Zigbee wireless interface. The output decision of the CM-530 processor drives a motor driver that moves the servo motors in the required direction for human-like actions. This method of controlling a humanoid robot could be used to perform actions with greater accuracy and ease. In addition, a study was conducted to investigate the controllability and ease of use of the interface and the employed gestures.
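The signal chain the abstract describes (EMG electrode → differential sensor → ADC → decision) reduces, at its simplest, to mapping a digitized amplitude to a gesture command; the amplitude bands and gesture labels below are hypothetical 10-bit ADC values, not those of the actual system:

```python
def classify_gesture(adc_value, bands):
    """Map a smoothed EMG ADC reading to a gesture label via amplitude bands.

    bands: ascending list of (upper_bound, label) pairs.
    """
    for upper, label in bands:
        if adc_value <= upper:
            return label
    return "unknown"  # out-of-range reading: issue no command

# Hypothetical 10-bit thresholds; a real system would calibrate these per user.
bands = [(100, "rest"), (400, "walk"), (1023, "wave")]
```

In the actual system the decision is then transmitted over Zigbee to the CM-530 controller; in practice the raw EMG would also be rectified and smoothed before thresholding.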

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 394
9625 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them; e.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena that can overcome the difficulties or drawbacks of the current process, or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, so screening is carried out to discard the meaningless ones.
For example, phase change phenomena require the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute; a combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process, creating a series of options for carrying out each function. Combining these options across the functions in the process leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined, and its generic nature makes it applicable to any chemical or biochemical process.
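The combine-then-screen step can be sketched with itertools; the phenomena list is a simplified placeholder, and the single screening rule (phase change requires co-present energy transfer) is the one example given in the text:

```python
from itertools import combinations

# Hypothetical phenomena list; the paper builds this per process function.
phenomena = ["mixing", "reaction", "phase_change", "energy_transfer", "vl_equilibrium"]

def feasible(combo):
    """Screening rule from the text: phase change needs energy transfer."""
    return "phase_change" not in combo or "energy_transfer" in combo

# Enumerate all non-empty phenomena combinations, keeping only feasible ones.
options = [
    set(c)
    for r in range(1, len(phenomena) + 1)
    for c in combinations(phenomena, r)
    if feasible(c)
]
```

Each surviving combination would then be assigned to the function(s) it can execute and encoded as a binary vector for the model generation algorithm; a real implementation would carry many more screening rules than this one.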

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 220
9624 Looking beyond Corporate Social Responsibility to Sustainable Development: Conceptualisation and Theoretical Exploration

Authors: Mercy E. Makpor

Abstract:

The traditional idea of Corporate Social Responsibility (CSR) has moved beyond merely ensuring safe environments, caring about global warming, and securing good living standards and conditions for society at large. The paradigm shift is towards a focus on strategic objectives and long-term value creation for both businesses and society, with a view to a realistic future. As an important approach to solving social and environmental issues, CSR has been accepted globally, yet it is expected to go beyond where it currently stands: much is expected from businesses and governments at every level, globally and locally. This leads back to the original idea of the concept, that is, how it originated and how it has been perceived over the years. Small wonder that many definitions surround the concept without a single globally accepted one; for the purpose of this paper, the European Commission's definition of CSR is adopted. Sustainable Development (SD), on the other hand, has been viewed in recent years as an ethical concept explained in the UN report "Our Common Future", also known as the Brundtland Report, which summarises the need for development that meets the needs of the present without compromising the future. The recent 21st-century framework on sustainability known as the Triple Bottom Line (TBL) has added its voice to the concepts of CSR and sustainable development. The TBL model holds that businesses should report not only on their financial performance but also on their social and environmental performance, highlighting that CSR has gone beyond a "material-impact" approach towards a "future-oriented" approach (sustainability). In this paper, the concept of CSR is revisited by exploring the various theories therein.
The discourse on sustainable development and its frameworks is also presented, showing how CSR can benefit businesses and their stakeholders as well as society as a whole, not just in the present but for the future. The paper does this by exploring the importance of both concepts (CSR and SD) and concludes by making recommendations for more empirical research in the near future.

Keywords: corporate social responsibility, sustainable development, sustainability, triple bottom line model

Procedia PDF Downloads 234
9623 A Dual Channel Optical Sensor for Norepinephrine via In Situ Generated Silver Nanoparticles

Authors: Shalini Menon, K. Girish Kumar

Abstract:

Norepinephrine (NE) is a naturally occurring catecholamine that acts both as a neurotransmitter and as a hormone. Catecholamine levels are used for the diagnosis and monitoring of phaeochromocytoma, a neuroendocrine tumor of the adrenal medulla. The development of simple, rapid, and cost-effective sensors for NE remains a great challenge. Herein, a dual-channel sensor for the colorimetric as well as fluorimetric determination of NE has been developed. A mixture of AgNO₃, NaOH, NH₃·H₂O, and cetrimonium bromide at appropriate concentrations was used as the working solution. To the thoroughly vortexed mixture, an appropriate volume of NE solution was added; after a fixed time, the fluorescence and absorbance were measured, with fluorescence excited at 400 nm. The metal-enhanced fluorescence of the in situ generated silver nanoparticles forms the basis of the fluorimetric detection, whereas the appearance of a brown color in the presence of NE enables the colorimetric detection. Wide linear ranges and sub-micromolar detection limits were obtained with both techniques. Moreover, the colorimetric approach was applied to the determination of NE in synthetic blood serum, and the results were compared with the classic high-performance liquid chromatography (HPLC) method. Recoveries between 97% and 104% were obtained with the proposed method. Based on five replicate measurements, the relative standard deviation (RSD) for NE determination in the synthetic blood serum was 2.3%, indicating the reliability of the proposed sensor for real sample analysis.
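The sub-micromolar detection limits quoted here are typically computed with the LOD = 3·sd(blank)/slope convention (the S/N = 3 criterion); a minimal sketch, assuming that convention and illustrative numbers:

```python
def detection_limit(blank_signals, slope):
    """LOD = 3 * sd(blank) / calibration slope (the S/N = 3 convention).

    blank_signals: replicate instrument readings of the blank.
    slope: slope of the linear calibration curve (signal per unit concentration).
    """
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sd = (sum((x - mean) ** 2 for x in blank_signals) / (n - 1)) ** 0.5
    return 3 * sd / slope
```

The same calibration slope also fixes the linear range over which concentrations can be read back from absorbance or fluorescence intensity.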

Keywords: norepinephrine, colorimetry, fluorescence, silver nanoparticles

Procedia PDF Downloads 97
9622 Evolution of Relations among Multiple Institutional Logics: A Case Study from a Higher Education Institution

Authors: Ye Jiang

Abstract:

To examine how the relationships among multiple institutional logics vary over time, and the factors that shape this process, we conducted a 15-year in-depth longitudinal case study of a higher education institution's exploration of college student management. Employing constructivist grounded theory, we developed a four-stage process model, comprising separation, formalization, selective bridging, and embeddedness, that shows how two contradictory logics become complementary and finally merge into a new hybridized logic. We argue that selective bridging is an important step in changing inter-logic relations, and we find that ambidextrous leadership and situational sensemaking are two key factors driving the process. Our contribution to the literature is threefold. First, we enhance the literature on the changing relationships among multiple institutional logics and advance the understanding of these relationships through a dynamic view. While most studies have tended to assume that the relationship among logics is static and persistently contentious, we contend that the relationships among multiple institutional logics can change over time: competing logics can become complementary, and a new hybridized logic can emerge from them. The four-stage process model offers insight into logic hybridization, which is underexplored in the literature. Second, our research reveals that selective bridging is important in making conflicting logics compatible and thus constitutes a key step in creating a new hybridized logic. Our findings suggest that the relations between multiple logics are manageable and can thus be harnessed for organizational innovation. Finally, the factors influencing the variation in inter-logic relations enrich the understanding of the antecedents of these dynamics.

Keywords: institutional theory, institutional logics, ambidextrous leadership, situational sensemaking

Procedia PDF Downloads 131
9621 Differences in Vitamin D Status in Caucasian and Asian Women Following Ultraviolet Radiation (UVR) Exposure

Authors: O. Hakim, K. Hart, P. McCabe, J. Berry, L. E. Rhodes, N. Spyrou, A. Alfuraih, S. Lanham-New

Abstract:

It is known that skin pigmentation reduces the penetration of ultraviolet radiation (UVR) and thus the photosynthesis of 25(OH)D. However, ethnic differences in 25(OH)D production remain to be fully elucidated. This study aimed to investigate differences in vitamin D production between Asian and Caucasian postmenopausal women in response to a defined, controlled UVB exposure. Seventeen women participated in the study, acting as their own controls: nine white Caucasian (skin phototypes II and III) and eight South Asian (skin phototypes IV and V). Three blood samples were taken for measurement of 25(OH)D during the run-in period (nine days, no sunbed exposure), after which all subjects underwent an identical UVR exposure protocol irrespective of skin colour (nine days; three sunbed sessions of 6, 8, and 8 minutes, with approximately 80% of the body surface exposed). Skin tone was measured four times during the study. Both groups showed a gradual increase in 25(OH)D, with final levels significantly higher than baseline (p<0.01): mean 25(OH)D concentration rose from 43.58±19.65 to 57.80±17.11 nmol/l among the Caucasian women and from 27.03±23.92 to 44.73±17.74 nmol/l among the Asian women. Baseline vitamin D status was classified as deficient among the Asian women and insufficient among the Caucasian women. The percentage increase in vitamin D3 was 39.86% (21.02) among Caucasian and 207.78% (286.02) among Asian subjects, the greater response to UVR exposure reflecting the lower baseline levels of the Asian subjects. A mixed linear model analysis identified a significant effect of duration of UVR exposure on the production of 25(OH)D, but no significant effect of ethnicity or skin tone.
These novel findings indicate that people of Asian ethnicity are fully capable of producing amounts of vitamin D similar to those of the Caucasian group; the initial vitamin D concentration influences the amount of UVB needed to reach equal serum concentrations.

Keywords: ethnicity, Caucasian, South Asian, vitamin D, ultraviolet radiation, UVR

Procedia PDF Downloads 521
9620 Role of Activated Partial Thromboplastin Time (APTT) in Assessing the Need for Platelet Transfusion in Dengue

Authors: Kalyan Koganti

Abstract:

Background: In India, platelet transfusions are given to a large number of patients suffering from dengue for fear of bleeding, especially when platelet counts are low. Although many patients do not bleed when the platelet count falls below 20,000, certain patients bleed even with counts above 20,000 and without any prior comorbid condition (such as a gastrointestinal ulcer). This fear has led to large numbers of unnecessary platelet transfusions, which impose a significant economic burden on low- and middle-income countries like India and sometimes result in transfusion-related adverse reactions. Objective: To assess the role of Activated Partial Thromboplastin Time (APTT), in comparison with thrombocytopenia, as an indicator of the real need for platelet transfusion. Method: A prospective study was conducted at a hospital in South India on 176 admitted cases of dengue confirmed by immunochromatography. APTT was performed in all patients along with the platelet count. Cut-off values of > 60 seconds for APTT and < 20,000 for platelet count were used to assess bleeding manifestations. Results: Among the 176 patients, 56 had bleeding manifestations such as melena, hematuria, and bleeding gums. APTT > 60 seconds had a sensitivity of 93% and a specificity of 90% in identifying bleeding manifestations, whereas a platelet count of < 20,000 had a sensitivity of 64% and a specificity of 73%. Conclusion: Elevated APTT can be considered an indicator of the need for platelet transfusion in dengue. As bleeding varies significantly among patients with respect to platelet count, APTT can be used to avoid unnecessary transfusions.
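The reported sensitivity and specificity follow directly from a 2×2 confusion table; the counts below are hypothetical, chosen only to be consistent with the 176 patients (56 bleeders) and the roughly 93%/90% figures quoted for APTT:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of the 56 bleeders and 120 non-bleeders by the APTT cut-off:
sens, spec = sens_spec(tp=52, fn=4, tn=108, fp=12)
```

With the platelet-count cut-off the same formula yields the lower 64%/73% figures, which is the comparison the study uses to argue for APTT.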

Keywords: activated partial thromboplastin time, dengue, platelet transfusion, thrombocytopenia

Procedia PDF Downloads 201
9619 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan

Authors: Muhammed Saeed Rasheed

Abstract:

Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers' fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and pH. Values (mg kg⁻¹) ranged between 403-3740 (PT), 42.0-203 (ZnT), 2.13-28.1 (POlsen), and 0.14-5.23 (ZnDTPA); pH was in the range 7.46-8.67. The Zn concentration, PA/Zn molar ratio, and estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption AMAX = 0.09, (ii) equilibrium dissociation constant of the zinc-receptor binding reaction KP = 0.680, and (iii) equilibrium dissociation constant of the Zn-PA binding reaction KR = 0.033. In the model, total daily absorbed Zn (TAZ, mg d⁻¹) was estimated as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹), assuming an average wheat flour consumption of 300 g d⁻¹ in the region. Consideration of the PA and Zn intakes suggests that only 21.5±2.9% of grain Zn is bioavailable, so the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. Overall, the results suggest that available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the large concentrations of PA in wheat grains. A crop breeding programme combined with enhanced agronomic management is needed to improve both Zn uptake and bioavailability in the grains of cultivated wheat varieties.
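The PA/Zn molar ratio and the effective daily Zn intake can be reproduced from standard molar masses and the 300 g d⁻¹ flour assumption; a minimal sketch (the 21.5% bioavailable fraction is the figure from the text, the example grain Zn concentration is illustrative):

```python
# Approximate molar masses: phytic acid ~660.04 g/mol, zinc 65.38 g/mol.
PA_MM, ZN_MM = 660.04, 65.38

def pa_zn_molar_ratio(pa_mg_per_kg, zn_mg_per_kg):
    """Molar PA/Zn ratio of grain from mass concentrations in mg/kg."""
    return (pa_mg_per_kg / PA_MM) / (zn_mg_per_kg / ZN_MM)

def daily_bioavailable_zn(zn_mg_per_kg, flour_g_per_day=300, fraction=0.215):
    """Zn intake (mg/d) from flour, scaled by the bioavailable fraction."""
    return zn_mg_per_kg * flour_g_per_day / 1000 * fraction
```

For a hypothetical grain Zn of 30 mg kg⁻¹, this gives about 1.9 mg d⁻¹ of bioavailable Zn, within the 1.84-2.63 mg d⁻¹ range the study reports and well below the 11 mg d⁻¹ recommendation.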

Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability

Procedia PDF Downloads 113
9618 A Rational Strategy to Maximize the Value-Added Products by Selectively Converting Components of Inferior Heavy Oil

Authors: Kashan Bashir, Salah Naji Ahmed Sufyan, Mirza Umar Baig

Abstract:

In this study, n-dodecane, tetralin, decalin, and tetramethylbenzene (TMBE) were used as model compounds for the alkanes, naphthenic aromatics, cycloalkanes, and alkyl-benzenes present in hydro-diesel. The catalytic cracking behavior of the four model compounds over a Y zeolite catalyst (Y-Cat.) and a ZSM-5 zeolite catalyst (ZSM-5-Cat.) was probed. The experimental results revealed that high conversion of the macromolecular paraffins and naphthenic aromatics was achieved over Y-Cat., whereas its low cracking activity toward the intermediate small-molecule paraffins and olefins, together with its high hydride-transfer activity, works against the production of value-added products (light olefins and gasoline). In contrast, although the hydride-transfer reaction was greatly inhibited over ZSM-5-Cat., low conversion of the macromolecules was observed, attributed to diffusion limitations. Interestingly, a mixed catalyst compensates for the shortcomings of the two individual catalysts, and a "relay reaction" between Y-Cat. and ZSM-5-Cat. is proposed. Specifically, the added Y-Cat. acts as a "pre-cracking booster site" and promotes macromolecule conversion, while the addition of ZSM-5-Cat. not only significantly suppresses the hydride-transfer reaction but also contributes to cracking the intermediate paraffins and olefins into ethylene and propylene. The result is a high yield of alkyl-benzenes (gasoline), ethylene, and propylene with low yields of naphthalenes (LCO) and coke. Catalytic cracking experiments on mixed hydro-LCO were also performed to further verify the "relay reaction", showing the highest yields of LPG and gasoline over the mixed catalyst. The results indicate that Y-Cat. and ZSM-5-Cat. have a synergistic effect on the conversion of hydro-diesel, the corresponding value-added product yields, and the selective coke yield.

Keywords: synergistic effect, hydro-diesel cracking, FCC, zeolite catalyst, ethylene and propylene

Procedia PDF Downloads 51
9617 Electrochemical Sensor Based on Poly(Pyrogallol) for the Simultaneous Detection of Phenolic Compounds and Nitrite in Wastewater

Authors: Majid Farsadrooh, Najmeh Sabbaghi, Seyed Mohammad Mostashari, Abolhasan Moradi

Abstract:

Phenolic compounds are major environmental contaminants on account of their hazardous and toxic effects on human health. The preparation of sensitive and potent chemosensors to monitor emerging pollutants in water and effluent samples has received great consideration. A novel and versatile nanocomposite sensor based on poly(pyrogallol) is presented for the first time in this study, and its electrochemical behavior in the simultaneous detection of hydroquinone (HQ), catechol (CT), and resorcinol (RS) in the presence of nitrite is evaluated. The physicochemical characteristics of the fabricated nanocomposite were investigated by field-emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), and Brunauer-Emmett-Teller (BET) analysis. The electrochemical response of the proposed sensor in the detection of HQ, CT, RS, and nitrite was studied using cyclic voltammetry (CV), chronoamperometry (CA), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). Kinetic characterization of the prepared sensor showed that both adsorption and diffusion processes can control the reactions at the electrode. Under optimized conditions, the new chemosensor provides wide linear ranges of 0.5-236.3, 0.8-236.3, 0.9-236.3, and 1.2-236.3 μM with low limits of detection of 21.1, 51.4, 98.9, and 110.8 nM (S/N = 3) for HQ, CT, RS, and nitrite, respectively. Remarkably, the electrochemical sensor has outstanding selectivity, repeatability, and stability and was successfully employed for the detection of RS, CT, HQ, and nitrite in real water samples with recoveries of 96.2%–102.4%, 97.8%–102.6%, 98.0%–102.4%, and 98.4%–103.2% for RS, CT, HQ, and nitrite, respectively. These outcomes illustrate that poly(pyrogallol) is a promising candidate for the effective electrochemical detection of dihydroxybenzene isomers in the presence of nitrite.
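The S/N = 3 criterion quoted for the detection limits follows the usual convention LOD = 3·σ/slope, where σ is the baseline noise and the slope is the sensor's calibration sensitivity. A minimal sketch of that calculation is below; the noise and slope values are invented for illustration and are not the paper's data.

```python
# Hedged sketch: limit of detection (LOD) from the S/N = 3 criterion.
# noise_sd and slope below are hypothetical, not taken from the abstract.

def limit_of_detection(noise_sd, slope, k=3.0):
    """LOD = k * (standard deviation of blank signal) / calibration slope."""
    return k * noise_sd / slope

slope = 0.95        # uA per uM, hypothetical calibration sensitivity
noise_sd = 6.7e-3   # uA, hypothetical baseline noise
lod_um = limit_of_detection(noise_sd, slope)   # result in uM
print(round(lod_um * 1000, 1), "nM")           # express in nM
```

The same formula, applied per analyte with each analyte's own calibration slope, yields the per-species LODs reported in the abstract.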

Keywords: electrochemical sensor, poly pyrogallol, phenolic compounds, simultaneous determination

Procedia PDF Downloads 53
9616 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by an electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants, and other contaminants from water, offering an alternative to the use of metal salts or polymer and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical, and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic processes are fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distributions in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single-concentric-annular-cross-section reactor configurations and one multiple-concentric-annular-cross-section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that flow is steady, uniform, and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented in the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors to be integrated into new or existing water treatment plants.
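The comparison above rests on friction head-loss formulas of the Darcy-Weisbach type applied with the hydraulic diameter of the annulus, D_h = D_outer - D_inner. The sketch below is purely illustrative and is not the set of equations evaluated in the paper: the laminar 64/Re law and the Swamee-Jain explicit turbulent formula are common textbook stand-ins, and the geometry and flow values are invented.

```python
import math

def annulus_head_loss(q, d_outer, d_inner, length, nu=1.0e-6, g=9.81, eps=1.5e-6):
    """Darcy-Weisbach head loss (m) for flow in a concentric annulus,
    using the hydraulic diameter D_h = D_outer - D_inner.
    q in m^3/s, diameters and length in m, nu = kinematic viscosity."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)   # annular flow area
    v = q / area                                       # mean velocity
    d_h = d_outer - d_inner                            # hydraulic diameter
    re = v * d_h / nu                                  # Reynolds number
    if re < 2300:                                      # laminar regime
        f = 64.0 / re
    else:                                              # turbulent (Swamee-Jain)
        f = 0.25 / math.log10(eps / (3.7 * d_h) + 5.74 / re**0.9) ** 2
    return f * (length / d_h) * v**2 / (2.0 * g)       # Darcy-Weisbach

# Illustrative runs: a laminar and a turbulent flow rate through the same annulus
hf_lam = annulus_head_loss(q=1e-5, d_outer=0.05, d_inner=0.03, length=1.0)
hf_turb = annulus_head_loss(q=2e-3, d_outer=0.05, d_inner=0.03, length=1.0)
```

Annulus-specific corrections to the laminar friction factor (as in the specialized equations the paper tests) would replace the plain 64/Re term.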

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 209
9615 Microbial Contaminants in Drinking Water Collected from Different Regions of Kuwait

Authors: Abu Salim Mustafa

Abstract:

Water plays a major role in maintaining life on earth, but it can also serve as a matrix for pathogenic organisms, posing substantial health threats to humans. Although outbreaks of diseases attributable to drinking water may not be common in industrialized countries, they still occur and can lead to serious acute, chronic, or sometimes fatal health consequences. In this study, drinking water samples from different regions of Kuwait were analyzed for bacterial and viral contamination. Drinking tap water samples were collected from 15 locations across the six Kuwait governorates. All samples were analyzed by confocal microscopy for the presence of bacteria. The samples were cultured in vitro to detect cultivable organisms. DNA was isolated from the cultured organisms, and the identity of the bacteria was determined by sequencing the bacterial 16S rRNA genes, followed by BLAST analysis against the NCBI (USA) database. RNA was extracted from the water samples and analyzed by real-time PCR for the detection of viruses with potential health risks, i.e., Astrovirus, Enterovirus, Norovirus, Rotavirus, and Hepatitis A virus. Confocal microscopy showed the presence of bacteria in some water samples. The 16S rRNA gene sequencing of culture-grown organisms, followed by BLAST analysis, identified several non-pathogenic bacterial species. However, one sample contained Acinetobacter baumannii, which often causes opportunistic infections in immunocompromised people. None of the studied viruses could be detected in the drinking water samples analyzed. The results indicate that the drinking water samples analyzed from various locations in Kuwait are relatively safe for drinking and do not contain many harmful pathogens.

Keywords: drinking water, microbial contaminant, 16S rDNA, Kuwait

Procedia PDF Downloads 138
9614 Knowledge, Attitude, and Practice among Medical Students Regarding Basic Life Support

Authors: Sumia Fatima, Tayyaba Idrees

Abstract:

Cardiac arrest and heart failure are important causes of mortality in developed and developing countries, and every second spent without cardiopulmonary resuscitation (CPR) increases the risk of mortality. Young doctors are expected to take part in CPR from their first day, and if they are not taught basic life support (BLS) skills during their studies, they have next to no opportunity to learn them in clinical settings. The objectives were to determine the level of knowledge of basic life support among medical students and to compare the degree of knowledge between 1st- and 2nd-year medical students of RMU (Rawalpindi Medical University), using self-structured questionnaires. A cross-sectional primary study was conducted in March 2020 in order to analyse the theoretical and practical knowledge of basic life support among medical students of 1st- and 2nd-year MBBS. Self-structured questionnaires were distributed among 300 students, 150 from 1st year and 150 from 2nd year. Data were analysed using SPSS v22, and the chi-square test was employed. The results showed that only 13 (4%) students had received formal BLS training, and 129 (42%) students had encountered accidents in real life but had not known how to react. The majority (189 students) responded that basic life support should be made part of the medical college curriculum, and 194 participants (64%) had moderate knowledge of both the theoretical and practical aspects of BLS. 75-80% of students of both 1st and 2nd year had only moderate knowledge, which must be improved for them to be better healthcare providers in the future. It was also found that male students had more practical knowledge than females, but both had almost the same proficiency in theoretical knowledge. The study concluded that the level of knowledge of BLS among the students was not up to the mark, and there is a dire need to include BLS training in the medical colleges’ curriculum.

Keywords: basic cardiac life support, cardiac arrest, awareness, medical students

Procedia PDF Downloads 81
9613 MiRNA Regulation of CXCL12β during Inflammation

Authors: Raju Ranjha, Surbhi Aggarwal

Abstract:

Background: Inflammation plays an important role in infectious and non-infectious diseases. MiRNAs are also reported to play a role in inflammation and associated cancers. The chemokine CXCL12 is likewise known to play a role in inflammation and various cancers. The CXCL12/CXCR4 chemokine axis is involved in the pathogenesis of inflammatory bowel disease (IBD), especially ulcerative colitis (UC). Supplementation with CXCL12 induces homing of dendritic cells to the spleen and enhances control of the Plasmodium parasite in BALB/c mice. We looked at the regulation of CXCL12β by miRNA in UC. Prolonged inflammation of the colon in UC patients increases the risk of developing colorectal cancer, so we examined the expression differences of CXCL12β and its targeting miRNA in the cancer-susceptible area of the colon of UC patients. Aim: The aim of this study was to determine the regulation of CXCL12β expression by miRNA in inflammation. Materials and Methods: Biopsy and blood samples were collected from UC patients and non-IBD controls. mRNA expression was analyzed using microarray and real-time PCR. miRNAs targeting CXCL12β were identified using online target-prediction tools. Expression of CXCL12β in blood samples and cell-line supernatant was analyzed using ELISA. The miRNA target was validated using a dual-luciferase assay. Results and conclusion: We found that miR-200a regulates the expression of CXCL12β in UC. Expression of CXCL12β was increased in the cancer-susceptible part of the colon, while expression of its targeting miRNA was decreased in the same part. miR-200a regulates CXCL12β expression in inflammation and may be an important therapeutic target in inflammation-associated cancer.

Keywords: inflammation, miRNA, regulation, CXCL12

Procedia PDF Downloads 253
9612 Revisiting Ryan v Lennon to Make the Case against Judicial Supremacy

Authors: Tom Hickey

Abstract:

It is difficult to conceive of a case that might more starkly bring the arguments concerning judicial review to the fore than State (Ryan) v Lennon. Small wonder that it has attracted so much scholarly attention, although the fact that almost all of it has been in an Irish setting is perhaps surprising, given the illustrative value of the case in respect of a philosophical quandary that continues to command attention in all developed constitutional democracies. Should judges have power to invalidate legislation? This article revisits Ryan v Lennon with an eye on the importance of the idea of “democracy” in the case. It assesses the meaning of democracy: what its purpose might be and what practical implications might follow, specifically in respect of judicial review. Based on this assessment, it argues for a particular institutional model for the vindication of constitutional rights. In the context of calls for the drafting of a new constitution for Ireland, however forlorn these calls might be for the moment, it makes a broad and general case for the abandonment of judicial supremacy and for the taking up of a model in which judges have a constrained rights reviewing role that informs a more robust role that legislators would play, thereby enhancing the quality of the control that citizens have over their own laws. The article is in three parts. Part I assesses the exercise of judicial power over legislation in Ireland, with the primary emphasis on Ryan v Lennon. It considers the role played by the idea of democracy in that case and relates it to certain apparently intractable dilemmas that emerged in later Irish constitutional jurisprudence. Part II considers the concept of democracy more generally, with an eye on overall implications for judicial power. It argues for an account of democracy based on the idea of equally shared popular control over government. 
Part III assesses how this understanding might inform a new constitutional arrangement in the Irish setting for the vindication of fundamental rights.

Keywords: constitutional rights, democracy as popular control, Ireland, judicial power, republican theory, Ryan v Lennon

Procedia PDF Downloads 523
9611 Reflections of Nocturnal Librarian: Attaining a Work-Life Balance in a Mega-City of Lagos State Nigeria

Authors: Oluwole Durodolu

Abstract:

The rationale for this study is to explore the adaptive strategies that librarians adopt in performing night shifts in a mega-city like Lagos state. Maslach Burnout Theory would be used to measure the three dimensions of burnout, namely emotional exhaustion, depersonalisation, and reduced personal accomplishment, to scrutinise job-related burnout syndrome allied with longstanding, unresolved stress. A qualitative methodology guided by a phenomenological research paradigm, an approach that focuses on the commonality of lived experience in a particular group, would be used, with focus group discussion adopted as the method of data collection from library staff who are involved in the night shift. Participants for the focus group discussion would be selected using a convenience sampling technique, in which staff at the cataloguing unit would be included in the sample because of the representative characteristics of that unit. This would be done to enable readers to understand the phenomenon as it is lived rather than from a remote perspective. The exploratory interviews, conducted as focus groups, would shed light on issues relating to security, housing, transportation, budgeting, energy supply, employee duties, time management, information access, and sustaining professional levels of service, and how all these variables affect the productivity of the 149 library staff and their work-life balance.

Keywords: nightshift, work-life balance, mega-city, academic library, Maslach Burnout Theory, Lagos State, University of Lagos

Procedia PDF Downloads 109
9610 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold start is a notoriously difficult problem that can occur in recommendation systems, arising when there is insufficient information to draw inferences about users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented and systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates, at one point, an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
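The switching idea can be sketched in code. The sketch below is emphatically not the authors' FAB-COST implementation: EP is replaced by a full Laplace (Newton) refit on all stored data, and ADF by a one-step online Newton update, as simpler stand-ins that exhibit the same cost trade-off (accurate-but-growing versus cheap-but-approximate), with a Gaussian posterior over logistic-regression weights and Thompson sampling via `sample()`. The threshold and prior are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SwitchingPosterior:
    """Gaussian approximation to a Bayesian logistic-regression posterior.
    For small n it refits on all stored data (stand-in for the accurate but
    increasingly slow EP pass); past `switch_at` it applies a cheap
    constant-cost streaming update per observation (stand-in for ADF)."""

    def __init__(self, dim, switch_at=200):
        self.mu = np.zeros(dim)
        self.prec = np.eye(dim)      # precision matrix of the Gaussian
        self.switch_at = switch_at
        self.X, self.y = [], []

    def update(self, x, y):
        self.X.append(x); self.y.append(y)
        if len(self.y) <= self.switch_at:
            self._batch_refit()          # cost grows with n
        else:
            self._streaming_update(x, y) # constant cost per update

    def _batch_refit(self, iters=20):
        X = np.array(self.X); y = np.array(self.y)
        w = self.mu.copy()
        for _ in range(iters):           # Newton iterations (Laplace fit)
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) + w     # unit Gaussian prior on weights
            H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(len(w))
            w -= np.linalg.solve(H, grad)
        self.mu, self.prec = w, H

    def _streaming_update(self, x, y):
        p = sigmoid(x @ self.mu)
        self.prec += np.outer(x, x) * p * (1 - p)   # add curvature of new point
        self.mu += np.linalg.solve(self.prec, (y - p) * x)

    def sample(self):
        """Draw weights for a Thompson-sampling decision."""
        return rng.multivariate_normal(self.mu, np.linalg.inv(self.prec))
```

A bandit loop would keep one such posterior per arm (or a shared one with arm features), sample weights, and show the impression whose sampled click probability is highest.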

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 97
9609 Determining the Threshold for Protective Effects of Aerobic Exercise on Aortic Structure in a Mouse Model of Marfan Syndrome Associated Aortic Aneurysm

Authors: Christine P. Gibson, Ramona Alex, Michael Farney, Johana Vallejo-Elias, Mitra Esfandiarei

Abstract:

Aortic aneurysm is the leading cause of death in Marfan syndrome (MFS), a connective tissue disorder caused by mutations in the fibrillin-1 gene (FBN1). MFS aneurysm is characterized by weakening of the aortic wall due to elastin fiber fragmentation and disorganization. Their above-average height and distinct physical features make young adults with MFS desirable candidates for competitive sports, but little is known about the exercise limit at which they will be at risk for aortic rupture. On the other hand, aerobic cardiovascular exercise has been shown to have protective effects on the heart and aorta. We have previously reported that mild aerobic exercise can delay the formation of aortic aneurysm in a mouse model of MFS. In this study, we aimed to investigate the effects of various levels of exercise intensity on the progression of aortic aneurysm in the mouse model. Starting at 4 weeks of age, we subjected control and MFS mice to different levels of exercise intensity (8 m/min, 10 m/min, 15 m/min, and 20 m/min, corresponding to 55%, 65%, 75%, and 85% of VO2 max, respectively) on a treadmill for 30 minutes per day, five days a week for the duration of the study. At 24 weeks of age, aortic tissues were isolated and subjected to structural and functional studies using histology and wire myography in order to evaluate the effects of the different exercise routines on elastin fragmentation and organization and on aortic wall elasticity/stiffness. Our data show that exercise training at intensity levels between 55% and 75% significantly reduces elastin fragmentation and disorganization, with less recovery observed in the 85% MFS group. Elasticity was also significantly restored in MFS mice subjected to 55%-75% intensity; however, the recovery was less pronounced in MFS mice subjected to 85% intensity.
Furthermore, our data show that smooth muscle cell (SMC) contraction in response to the vasoconstrictor agent phenylephrine (100 nM) is significantly reduced in MFS aorta (54.84 ± 1.63 mN/mm2) as compared to control (95.85 ± 3.04 mN/mm2). At 55% intensity, exercise did not rescue SMC contraction (63.45 ± 1.70 mN/mm2), while at higher intensity levels, SMC contraction in response to phenylephrine was restored to levels similar to control aorta [65% (81.88 ± 4.57 mN/mm2), 75% (86.22 ± 3.84 mN/mm2), and 85% (83.91 ± 5.42 mN/mm2)]. This study provides the first evidence that high-intensity exercise (e.g., 85%) may not provide the most beneficial effects on aortic function (vasoconstriction) and structure (elastin fragmentation, aortic wall elasticity) during the progression of aortic aneurysm in MFS mice. On the other hand, based on our observations, medium-intensity exercise (e.g., 65%) seems to provide the utmost protective effects on aortic structure and function in MFS mice. These findings provide new insights into the potential capacity in which MFS patients could participate in various aerobic exercise routines, especially young adults affected by cardiovascular complications, particularly aortic aneurysm. This work was funded by the Midwestern University Research Fund.

Keywords: aerobic exercise, aortic aneurysm, aortic wall elasticity, elastin fragmentation, Marfan syndrome

Procedia PDF Downloads 367
9608 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
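The pooling step that any MI routine applies across imputed data sets, dynr.mi() included, follows Rubin's rules: the point estimates are averaged, and the total variance combines within- and between-imputation variance. A minimal illustration (not dynr.mi's code; the per-imputation numbers are invented):

```python
# Hedged sketch of Rubin's rules for pooling one parameter across m
# imputed data sets. Estimates and variances below are illustrative only.
import math

def pool_estimates(estimates, variances):
    """Return (pooled estimate, total variance, pooled standard error)."""
    m = len(estimates)
    qbar = sum(estimates) / m                                # pooled point estimate
    ubar = sum(variances) / m                                # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between-imputation variance
    t = ubar + (1 + 1 / m) * b                               # total variance
    return qbar, t, math.sqrt(t)

est = [0.52, 0.48, 0.55, 0.50, 0.47]      # parameter estimate per imputed data set
var = [0.010, 0.012, 0.011, 0.009, 0.013] # its sampling variance per data set
qbar, t, se = pool_estimates(est, var)
```

The between-imputation term b is what listwise deletion discards entirely, which is one reason the two approaches can disagree on statistical significance, as the abstract reports.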

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 154
9607 Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors

Authors: Keerthana E., Lohithya S., Harshavardhini K. S., Saranya G., Suganthi S.

Abstract:

In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a distinct solution that leverages a combination of MEMS (Micro-Electro-Mechanical Systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer sensor integrated with an Arduino Mega, which communicates with a PC via a USB cable. The ADXL335 provides real-time acceleration data, which is processed by the Arduino to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. In the context of SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection. The PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. The use of MEMS sensors, Arduino, and serial communication serves as a promising foundation for expanding the capabilities of gesture-based control systems.
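The PC-side logic described, parsing acceleration readings sent over the serial link and mapping them to gestures, can be sketched as follows. This is a hedged illustration, not the study's code: a real deployment would read these lines from the Arduino's serial port (e.g., with a serial library), the axis conventions and thresholds are assumptions, and the values are canned strings formatted as the Arduino might print them.

```python
# Hedged sketch: classifying tilt gestures from ADXL335-style readings
# received as comma-separated "ax,ay,az" lines over USB serial.

def classify(ax, ay, az, thresh=0.5):
    """Map a (roughly gravity-normalised) acceleration triple to a gesture.
    az (the vertical axis) is unused in this simple tilt scheme."""
    if ax > thresh:
        return "right"
    if ax < -thresh:
        return "left"
    if ay > thresh:
        return "up"
    if ay < -thresh:
        return "down"
    return "rest"

def handle_line(line):
    """Parse one serial line and return the detected gesture."""
    ax, ay, az = (float(v) for v in line.strip().split(","))
    return classify(ax, ay, az)

# Canned lines standing in for the Arduino's serial output:
for msg in ["0.80,0.02,0.98", "-0.71,0.05,0.99", "0.01,0.03,1.00"]:
    print(handle_line(msg))
```

An SMS-notification gesture would simply be one more branch in `classify`, with the PC application reacting to that label on its monitored serial port.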

Keywords: hand gestures, multiple cables, serial communication, sms notification

Procedia PDF Downloads 34
9606 The Effect of Power of Isolation Transformer on the Lamps in Airfield Ground Lighting Systems

Authors: Hossein Edrisi

Abstract:

The aim was to study the impact of the power rating of the isolation transformer on the lamps in airfield ground lighting (AGL) systems. A test was conducted at Persian Gulf International Airport. This airport is situated in the south of Iran and is one of the most cutting-edge airports, owning modern devices; it uses materials and auxiliary equipment made by the ADB Company of Belgium. AGL systems are responsible for providing visual guidance to aircraft and helicopters on the runways. In an AGL system, a great number of lamps are connected in series circuits, and each ring has its own constant current regulator (CCR), through which it is supplied with energy. Control of the lamps is crucial for maintenance and operation of AGL systems, and a programmable logic controller (PLC), a cutting-edge technology, helps connect the elements from the substations and the ATC tower. For this purpose, a test under real airport conditions was done for all elements used at the airport, such as isolation transformers of different power capacities, at different consumed powers and lamp brightnesses. The data were collected with a lux meter and a multimeter. The results showed that an increase in transformer power caused a significant increase in brightness. According to Ohm’s law and voltage division, the voltage across the lamp cannot be changed without changing the characteristics of the bulb; instead, the rating of the transformer to which the lamp connects must be changed. When the voltage increases, the current through the bulb increases as well, because of Ohm’s law, I = V/R: if V increases, so does I. The output voltage of the constant current regulator is divided between the lamps and the transformers.
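The Ohm's-law argument can be made concrete with a small calculation. The sketch below is illustrative only: the lamp resistance and secondary voltages are invented values, and it simplifies away the constant-current behaviour of the CCR to show the bare I = V/R and P = V²/R relationship the abstract invokes.

```python
# Hedged illustration: with a fixed lamp resistance, a higher secondary
# voltage from a larger isolation transformer means higher current and
# higher power (hence brightness). Values are hypothetical.

def lamp_power(v_secondary, r_lamp):
    """I = V/R, so P = V * I = V**2 / R."""
    i = v_secondary / r_lamp
    return v_secondary * i

r_lamp = 30.0                 # ohms, hypothetical hot-filament resistance
for v in (6.0, 6.6, 8.0):     # secondary voltages of increasingly rated transformers (assumed)
    p = lamp_power(v, r_lamp)
    print(f"V={v:.1f} V -> I={v / r_lamp:.3f} A, P={p:.2f} W")
```

The monotone rise of P with V mirrors the measured brightness increase with transformer power.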

Keywords: AGL, CCR, lamps, transformer, Ohm’s law

Procedia PDF Downloads 229
9605 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations

Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili

Abstract:

Reinforced concrete shear walls and vertical plate-like elements play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls featuring different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced responses: force-based and displacement-based EDPs. Furthermore, as the shear wall-to-frame ratio increases, there is a concurrent increase in force-based EDPs and a decrease in displacement-based ones. Examining the distribution of shear walls from both force and displacement perspectives, model G with the highest stiffness ratio, concentrating stiffness at the building's center, intensifies induced forces. This configuration necessitates additional reinforcements, leading to a conservative design approach. Conversely, model C, with the lowest stiffness ratio, distributes stiffness towards the periphery, resulting in minimized induced shear forces and bending moments, representing an optimal scenario with maximal performance and minimal strength requirements.
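Of the Engineering Demand Parameters listed, the inter-story drift ratio is the one with a simple closed form: the difference between consecutive floor displacements divided by the story height. A minimal sketch, with invented displacements rather than the paper's pushover results:

```python
# Hedged sketch: computing inter-story drift ratios, one of the
# displacement-based EDPs named in the abstract. Values are illustrative.

def interstory_drift_ratios(displacements, story_height):
    """Drift ratio of story i = (d_i - d_(i-1)) / h, with d_0 = 0 at grade.
    `displacements` are lateral floor displacements from bottom to top."""
    d = [0.0] + list(displacements)
    return [(d[i] - d[i - 1]) / story_height for i in range(1, len(d))]

floor_disp_mm = [12.0, 28.0, 41.0, 50.0]   # lateral displacement per floor, mm
h_mm = 3000.0                              # uniform story height, mm
drifts = interstory_drift_ratios(floor_disp_mm, h_mm)
print([round(x * 100, 2) for x in drifts]) # percent drift per story
```

Profiles of this quantity along the building height, together with shear forces and bending moments, are what distinguish the force-based from the displacement-based response categories in the study.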

Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance

Procedia PDF Downloads 40
9604 Analysis of Magnetic Anomaly Data for Identification Structure in Subsurface of Geothermal Manifestation at Candi Umbul Area, Magelang, Central Java Province, Indonesia

Authors: N. A. Kharisa, I. Wulandari, R. Narendratama, M. I. Faisal, K. Kirana, R. Zipora, I. Arfiansah, I. Suyanto

Abstract:

A geophysical survey using the magnetic method was carried out over the geothermal manifestation at Candi Umbul, Grabag, Magelang, Central Java Province on 10-12 May 2013. The objective of this research is to interpret the geological structures that control the geothermal system in the Candi Umbul area. The survey covered an area of 1.5 km x 2 km with a station spacing of 150 m and a line spacing of 150 m, using a Geometrics model G-856 proton precession magnetometer (PPM). Data processing started with the IGRF and diurnal variation corrections to obtain the total magnetic field anomaly. Further processing included reduction to the pole, upward continuation, and residual anomaly separation, and the results were used for qualitative interpretation. The largest source of the low anomaly is located in the center of the survey area and is associated with the hot spring manifestation and a demagnetization zone, indicating heat source activity. Modeling of the anomaly map was then used for quantitative interpretation. The result of the modeling is a model of the rock layers and geological structure that informs the understanding of the geothermal system, from which the lithology susceptibilities can be interpreted: andesite as heat source (k = 0.00014 emu), basalt as alteration rock (k = 0.0016 emu), volcanic breccia as reservoir rock (k = 0.0026 emu), porphyritic andesite as cap rock (k = 0.004 emu), andesite lava (k = 0.003 emu), and alluvium (k = 0.0007 emu). The hot spring manifestation is controlled by a normal fault, which constitutes a weak zone easily passed by hot water rising from the geothermal reservoir.
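The first processing steps named above, removing the IGRF main field and the diurnal variation from each observed reading, can be sketched directly. The readings, base-station values, and IGRF level below are invented for illustration; a base-station diurnal correction is one common convention and may differ from the authors' exact procedure.

```python
# Hedged sketch: total magnetic field anomaly = observed - IGRF - diurnal,
# with the diurnal correction taken as the departure of a simultaneous
# base-station reading from its datum value. All numbers are illustrative.

def total_field_anomaly(observed_nt, igrf_nt, base_reading_nt, base_datum_nt):
    diurnal = base_reading_nt - base_datum_nt    # drift since the datum reading
    return observed_nt - igrf_nt - diurnal

obs   = [45110.0, 45060.0, 44950.0]   # field readings along one line, nT
base  = [45002.0, 45005.0, 44998.0]   # simultaneous base-station readings, nT
datum = 45000.0                       # base-station datum value, nT
igrf  = 44900.0                       # IGRF value for the area/epoch (assumed)

anom = [total_field_anomaly(o, igrf, b, datum) for o, b in zip(obs, base)]
print(anom)
```

Reduction to the pole and upward continuation would then be applied to the gridded anomaly, typically as wavenumber-domain filters, before residual separation and modeling.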

Keywords: geological structure, geothermal system, magnetic, susceptibility

Procedia PDF Downloads 373
9603 Innovative In-Service Training Approach to Strengthen Health Care Human Resources and Scale-Up Detection of Mycobacterium tuberculosis

Authors: Tsegahun Manyazewal, Francesco Marinucci, Getachew Belay, Abraham Tesfaye, Gonfa Ayana, Amaha Kebede, Yewondwossen Tadesse, Susan Lehman, Zelalem Temesgen

Abstract:

In-service health trainings in Sub-Saharan Africa are mostly content-centered and largely disconnected from real practice in the facility. This study intended to evaluate an in-service training approach aimed at strengthening health care human resources. A combined web-based and face-to-face training was designed and piloted in Ethiopia around the diagnosis of tuberculosis. During the first part, which lasted 43 days, trainees accessed the web-based material and studied it without leaving their work; the second part comprised a one-day hands-on evaluation. Trainees' competency was measured using multiple-choice questions, written assignments, exercises, and the hands-on evaluation. Of the 108 participants invited, 81 (75%) attended the course and 71 (88%) of them successfully completed it. Of those, 73 (90%) scored a grade from A to C. The approach was effective in transferring knowledge and turning it into practical skills. In-service health training should transform from a passive one-time event into a continuous behavioral change of the participants and improvements in their actual work.

Keywords: Ethiopia, health care, Mycobacterium tuberculosis, training

Procedia PDF Downloads 484
9602 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control. The latest advances in computer vision are a powerful tool for facing this problem. The analysis of RGB (red, green, blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreements higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
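Behind the mAP metric, each detection is scored as correct or not by its intersection-over-union (IoU) with a ground-truth box; precision-recall curves per class are then averaged. A minimal sketch of the IoU building block, with invented box coordinates and the common (but here assumed) 0.5 threshold:

```python
# Hedged sketch: IoU, the overlap criterion underpinning mAP in object
# detection. Boxes are (x1, y1, x2, y2) in pixels; values are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt   = (10, 10, 50, 50)   # ground-truth weed seedling box
pred = (15, 12, 55, 48)   # predicted box from the detector
overlap = iou(gt, pred)
print(round(overlap, 3), overlap >= 0.5)   # matched at the 0.5 threshold?
```

Matched detections count as true positives per class and confidence level, from which the averaged precision values that make up mAP are computed.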

Keywords: deep learning, object detection, cnn, tomato, weeds

Procedia PDF Downloads 89