Search results for: mathematical transmission modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7074

1074 Behavior of Common Philippine-Made Concrete Hollow Block Structures Subjected to Seismic Load Using Rigid Body Spring-Discrete Element Method

Authors: Arwin Malabanan, Carl Chester Ragudo, Jerome Tadiosa, John Dee Mangoba, Eric Augustus Tingatinga, Romeo Eliezer Longalong

Abstract:

Concrete hollow blocks (CHB) are the most commonly used masonry blocks for walls in residential houses, school buildings, and public buildings in the Philippines. The 2013 Bohol earthquake (Mw 7.2) demonstrated that CHB walls are highly vulnerable to severe external actions such as strong ground motion. In this paper, a numerical model of CHB structures is proposed, and the seismic behavior of CHB houses is presented. In the model, the Rigid Body Spring-Discrete Element Method (RBS-DEM) is used, wherein masonry blocks are discretized into rigid elements connected by nonlinear springs at preselected contact points. The shear and normal stiffness of the springs are derived from the material properties of the CHB unit, incorporating the grout and mortar fillings through a volumetric transformation of the dimensions using the material ratio. Numerical models of reinforced and unreinforced walls are first subjected to linearly increasing in-plane loading to observe the different failure mechanisms. These wall models are then assembled into typical model masonry houses and subjected to the El Centro and Pacoima earthquake records. Numerical simulations show that the elastic, failure, and collapse behavior of the model houses agree well with shaking table test results. The effectiveness of the method in replicating failure patterns will serve as a basis for improving the design and for strengthening the structure.
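
The contact-spring idea described above can be sketched with textbook stiffness formulas. This is a minimal illustration assuming the simple relations k_n = E·A/h and k_s = G·A/h; the paper's actual volumetric-transformation procedure and material values are not reproduced here.

```python
# Illustrative spring-stiffness calculation for one RBS-DEM contact point.
# The formulas and input values are a generic sketch, not the paper's method.

def spring_stiffness(E, nu, area, h):
    """Return (normal, shear) stiffness of a contact spring.

    E    : Young's modulus of the composite CHB/mortar material (Pa)
    nu   : Poisson's ratio
    area : tributary contact area assigned to the spring (m^2)
    h    : distance between the two rigid-element centroids (m)
    """
    G = E / (2.0 * (1.0 + nu))   # shear modulus from isotropic elasticity
    k_n = E * area / h           # normal (axial) stiffness
    k_s = G * area / h           # shear stiffness
    return k_n, k_s

# hypothetical masonry-composite properties
k_n, k_s = spring_stiffness(E=6.5e9, nu=0.2, area=0.01, h=0.1)
```

The nonlinear behavior in the paper would enter as force-displacement laws on top of these elastic stiffnesses.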

Keywords: concrete hollow blocks, discrete element method, earthquake, rigid body spring model

Procedia PDF Downloads 361
1073 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors

Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev

Abstract:

Introduction. Viral hepatitis B (HBV) occupies a vital place in the global health system. The risk of HBV transmission through blood transfusion is associated with blood taken from infected individuals during the "serological window" period or from patients with latent HBV infection, whose marker is anti-HBcore. In the absence of information about other hepatitis B markers, the presence of anti-HBcore suggests that a person may be actively infected or may have had hepatitis B in the past and acquired immunity. Aim. To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. Samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (Abbott). A special questionnaire was developed for the blood donors' socio-demographic characteristics. Statistical analysis was conducted in R (version 4.1.1, USA, 2021). Results. 5,709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively. The average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, after adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, each one-year increase in age (adjOR=1.06, 95%CI:1.05-1.07) raised the odds of an anti-HBcore-positive result by 6%. Participants of Russian ethnicity (adjOR=0.65, 95%CI:0.46-0.93) and of other ethnic groups (adjOR=0.56, 95%CI:0.37-0.85) had lower odds of anti-HBcore positivity than Kazakhs when controlling for the other covariates. Among singles, the odds of a positive anti-HBcore result were 29% lower (adjOR=0.71, 95%CI:0.57-0.89) than among married participants when adjusting for the other variables. Conclusions. Kazakhstan is a country of medium HBV endemicity (prevalence 2%-7%). The results demonstrate that a profile of risk factors (age, ethnicity, marital status) can be formed. Given these data, it is recommended to pay closer attention to donor questionnaires by adding targeted questions and to strengthen HBV prevention measures. Funding. This research was supported by a grant from Abbott Laboratories.
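
An adjusted odds ratio such as adjOR = 1.06 is the exponential of a fitted logistic regression coefficient. The sketch below shows that conversion; the coefficient and standard error are hypothetical values chosen to reproduce the reported OR and CI, since the abstract gives only the ratios.

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic coefficient and its standard error to an OR with 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),   # lower CI bound
            math.exp(beta + z * se))   # upper CI bound

# hypothetical age coefficient: exp(0.0583) is about 1.06, i.e. each extra
# year of age multiplies the odds of anti-HBcore positivity by ~1.06
or_age, ci_lo, ci_hi = odds_ratio(beta=0.0583, se=0.0048)
```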

Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis

Procedia PDF Downloads 100
1072 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation

Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick

Abstract:

Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent a pasture-based beef cattle finishing system in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agent consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repeatedly and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truck load) had an average live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand exceeded the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder before their third winter. The model finished a higher number of bulls than heifers and steers, although this was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system. Jersey cattle accounted for less than 5% of total processed beef cattle. Further studies including retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
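
The rearer-finisher interaction described above can be sketched with two minimal agent classes: a rearer accumulates calves, rears them to 100 kg, and sells in lots of 60 (one truck load) to a finisher. The purchase weight and daily gain below are hypothetical, not the model's calibrated parameters.

```python
class Rearer:
    LOT_SIZE = 60          # calves per full truck load
    SALE_WEIGHT = 100.0    # target live weight at sale, kg

    def __init__(self):
        self.pen = []      # live weights of calves currently on hand

    def buy_calf(self, weight=30.0):
        self.pen.append(weight)

    def grow(self, gain_per_day=0.7, days=100):
        """Apply a flat daily live-weight gain to every calf (illustrative)."""
        self.pen = [w + gain_per_day * days for w in self.pen]

    def sell_lot(self):
        """Sell one truck load once 60 calves have reached sale weight."""
        ready = sorted(w for w in self.pen if w >= self.SALE_WEIGHT)
        if len(ready) < self.LOT_SIZE:
            return []
        lot = ready[:self.LOT_SIZE]
        for w in lot:
            self.pen.remove(w)
        return lot

class Finisher:
    def __init__(self):
        self.herd = []

    def buy_weaners(self, lot):
        self.herd.extend(lot)

rearer, finisher = Rearer(), Finisher()
for _ in range(70):        # one spring intake of four-day-old calves
    rearer.buy_calf()
rearer.grow()              # 30 kg + 0.7 kg/day * 100 days = 100 kg
finisher.buy_weaners(rearer.sell_lot())
```

The full model would layer breed composition, demand-driven direct purchases from dairy farmers, and slaughter scheduling on top of this exchange loop.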

Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers

Procedia PDF Downloads 91
1071 Accurately Measuring Stress Using Latest Breathing Technology and Its Relationship with Academic Performance

Authors: Farshid Marbouti, Jale Ulas, Julia Thompson

Abstract:

The main sources of stress among college students include changes in sleeping and eating habits, new responsibilities, financial difficulties, exams, meeting new people, career decisions, fear of failure, pressure from parents, the transition to university (especially when it requires leaving home), working with unfamiliar people, trouble with parents, and romantic relationships. Students use a variety of stress-coping strategies, including talking to family and friends, leisure activities, and exercise. The Yerkes-Dodson law indicates that while a moderate amount of stress may be beneficial for performance, excessive stress results in weak performance. In other words, students who are too stressed are likely to have low academic performance. In a preliminary study conducted in 2017 with engineering students enrolled in three high-failure-rate classes, the majority of students stated that they had high levels of stress, mainly for academic, financial, or family-related reasons. As the second stage of the study, the main purpose of this research is to investigate students' levels of stress, sources of stress, their relationship with student demographic background, students' coping strategies, and academic performance. A device is being developed to gather data on students' breathing patterns and measure their stress levels. In addition, all participants are asked to fill out a survey. The survey under development has the following categories: exam stressors, study-related stressors, financial pressures, transition to university, family-related stress, student response to stress, and stress management. After data collection, Structural Equation Modeling (SEM) analysis will be conducted to identify the relationships among students' levels of stress, coping strategies, and academic performance.
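
The Yerkes-Dodson law cited above is commonly sketched as an inverted-U relationship between stress (arousal) and performance. The quadratic form and all coefficients below are purely illustrative, not fitted to this study's data.

```python
def performance(stress, optimum=5.0, peak=100.0, width=25.0):
    """Inverted-U performance curve on a hypothetical 0-10 stress scale.

    Performance is maximal at `optimum` stress and falls off quadratically
    on either side; `width` controls how quickly it falls.
    """
    return peak - (stress - optimum) ** 2 * (peak / width)

low, moderate, high = performance(1.0), performance(5.0), performance(9.0)
# moderate stress outperforms both under- and over-arousal
```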

Keywords: college student stress, coping strategies, academic performance, measuring stress

Procedia PDF Downloads 102
1070 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions

Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla

Abstract:

With ongoing urbanization, cities face increasing environmental challenges that impact human well-being. To tackle these issues, data-driven approaches to urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning (ML) techniques enables researchers to analyze and predict complex environmental phenomena such as Urban Heat Island (UHI) occurrences in urban areas. This paper demonstrates the implementation of a data-driven approach with interpretable ML algorithms and interpretability techniques to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models. Logistic regression emerges as the best-performing model on the evaluation metrics and is used to derive a mathematical equation separating areas with and without UHI effects, providing insight into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
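
The "derived equation" idea above can be sketched as follows: a fitted logistic model over building features yields a linear score whose logistic transform gives the probability that a location is UHI-affected. All coefficients, the intercept, and the feature values below are hypothetical stand-ins, not the paper's fitted formula.

```python
import math

# hypothetical standardized-feature coefficients for the four features the
# abstract names as important
COEF = {"volume": 0.8, "height": 0.5, "area": 0.3, "shape_length": 0.4}
INTERCEPT = -1.2

def uhi_probability(features):
    """P(UHI) via the logistic function of a linear score over features."""
    score = INTERCEPT + sum(COEF[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-score))

dense_block = {"volume": 1.5, "height": 1.0, "area": 0.8, "shape_length": 0.9}
open_space = {"volume": -1.0, "height": -0.8, "area": -0.5, "shape_length": -0.4}
# probability > 0.5 labels the location as UHI-affected
```

Interpretability here is direct: each coefficient states how much one standardized unit of that building feature shifts the log-odds of UHI occurrence.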

Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect

Procedia PDF Downloads 30
1069 Finite Element Modeling of a Lower Limb Based on the East Asian Body Characteristics for Pedestrian Protection

Authors: Xianping Du, Runlu Miao, Guanjun Zhang, Libo Cao, Feng Zhu

Abstract:

Current vehicle safety standards and human body injury criteria were established based on the biomechanical response of the Euro-American human body, without considering differences in body anthropometry and injury characteristics among races, particularly East Asian people with smaller body sizes. The absence of such race-specific design considerations negatively influences the protective performance of safety products for these populations and weakens the accuracy of the derived injury thresholds. To resolve these issues, this study develops a race-specific finite element model to simulate the impact response of the lower extremity of a 50th-percentile East Asian (Chinese) male. The model was built from medical images of the leg of an average-size Chinese male and slightly adjusted based on statistical data. It includes detailed anatomic features and is able to simulate active muscle force. Thirteen biomechanical tests available in the literature were used to validate its biofidelity. Using the validated model, a pedestrian-car impact accident that took place in China was reconstructed computationally. The results show that the newly developed lower leg model performs well in predicting dynamic response and tibia fracture pattern. An additional comparison of the fracture tolerance of East Asian and Euro-American lower limbs suggests that the current injury criterion underestimates the degree of injury to the East Asian human body.

Keywords: lower limb, East Asian body characteristics, traffic accident reconstruction, finite element analysis, injury tolerance

Procedia PDF Downloads 283
1068 Topology Enhancement of a Straight Fin Using a Porous Media Computational Fluid Dynamics Simulation Approach

Authors: S. Wakim, M. Nemer, B. Zeghondy, B. Ghannam, C. Bouallou

Abstract:

Designing the optimal heat exchanger remains an essential objective. Parametric optimization evaluates the heat exchanger dimensions to find those that best satisfy certain objectives; this method contributes an enhanced design rather than an optimized one. By contrast, topology optimization finds the optimal structure that satisfies the design objectives. Rapid development in metal additive manufacturing has allowed topology optimization to find its way into engineering applications, especially in the aerospace field, to optimize metal structures. Using topology optimization in 3D heat and mass transfer problems requires huge computational time, which coupling with CFD simulations can reduce. However, existing CFD models cannot be coupled with topology optimization: the CFD model must allow a uniform mesh despite the complexity of the initial geometry and must also allow swapping cells from fluid to solid and vice versa. In this paper, a porous media approach compatible with topology optimization criteria is developed. It consists of modeling the fluid region of the heat exchanger as porous media of high porosity and, similarly, the solid region as porous media of low porosity. The switching from fluid to solid cells required by topology optimization is done simply by changing each cell's porosity using a user-defined function. This model is tested on a plate-and-fin heat exchanger and validated by comparing its results to experimental data and simulation results. Furthermore, the model is used to perform a material reallocation based on local criteria to optimize a plate-and-fin heat exchanger under a constant heat duty constraint. The optimized fin uses 20% less material than the original, while the pressure drop is reduced by about 13%.
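
The porosity-switching idea above is often realized as a Brinkman-type momentum sink: each cell carries a porosity, and a penalization factor interpolated between the fluid and solid limits is added to the momentum equation so that low-porosity cells behave as solid. The RAMP-style interpolation and all parameter values below are an illustrative sketch, not the paper's user-defined function.

```python
ALPHA_SOLID = 1e7   # large momentum sink: velocity driven toward zero (solid cell)
ALPHA_FLUID = 0.0   # no sink: ordinary fluid cell

def momentum_sink(porosity, q=0.1):
    """RAMP-style interpolation of a Brinkman penalization factor.

    porosity near 1 -> factor near ALPHA_FLUID (cell behaves as fluid);
    porosity near 0 -> factor near ALPHA_SOLID (cell behaves as solid);
    q controls the sharpness of the fluid/solid transition.
    """
    ramp = porosity * (1.0 + q) / (porosity + q)
    return ALPHA_SOLID + (ALPHA_FLUID - ALPHA_SOLID) * ramp

def set_cell(porosities, idx, solid):
    """The topology-optimization 'swap': retag a cell by changing its porosity."""
    porosities[idx] = 0.01 if solid else 0.99
```

Because the mesh never changes, only the per-cell porosity field, the uniform-mesh requirement stated above is satisfied automatically.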

Keywords: computational methods, finite element method, heat exchanger, porous media, topology optimization

Procedia PDF Downloads 150
1067 Role of Climatic Conditions on Pacific Bluefin Tuna Thunnus orientalis Stock Structure

Authors: Ashneel Ajay Singh, Kazumi Sakuramoto, Naoki Suzuki, Kalla Alok, Nath Paras

Abstract:

Bluefin tuna (Thunnus orientalis) is one of the most economically valuable tuna species in the world. In recent years the stock has been observed to decline, and it is suspected that the stock-recruitment relationship and population structure are influenced by environmental and climatic variables. This study investigated the influence of environmental and climatic conditions on the trajectories of the different life stages of the North Pacific bluefin tuna. Exploratory analysis was performed relating the North Pacific sea surface temperature (SST) and Pacific Decadal Oscillation (PDO) to the time series of the bluefin tuna cohorts (age-0, 1, 2, ..., 9, 10+). Generalized Additive Modeling (GAM) was used to reconstruct the recruitment (R) trajectory. The spatial movement of the SST was also monitored from 1953 to 2012 across the distribution area of the bluefin tuna. Exploratory analysis showed a significant influence of SST and PDO on the time series of the age-0 group; the other age-group (1, 2, ..., 9, 10+) time series did not exhibit significant correlations. PDO showed the most significant relationship in the months of October to December. Although the stock-recruitment relationship is of biological significance, the recruits (age-0) showed poor correlation with the spawning stock biomass (SSB); indeed, the most significant model incorporated the SSB, SST, and PDO. The results show that the stock-recruitment relationship of the North Pacific bluefin tuna is multi-dimensional and cannot be adequately explained by the SSB alone. SST and PDO forcing of the population structure is of significant importance and needs to be accounted for when making harvesting plans for bluefin tuna in the North Pacific.
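
The exploratory analysis described above amounts to correlating each cohort's time series against the SST and PDO series. A pure-Python Pearson correlation shows the mechanics; the short series below are synthetic stand-ins, not the study's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sst  = [0.2, 0.5, 0.1, 0.8, 0.4]   # synthetic SST anomalies
age0 = [1.1, 1.7, 0.9, 2.3, 1.5]   # synthetic age-0 index tracking SST
age5 = [1.5, 1.4, 1.6, 1.5, 1.4]   # synthetic older cohort, weak relationship
```

In the study this screening step flags age-0 as the only cohort responsive to SST and PDO, motivating the GAM with SSB, SST, and PDO as covariates.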

Keywords: pacific bluefin tuna, Thunnus orientalis, cohorts, recruitment, spawning stock biomass, sea surface temperature, pacific decadal oscillation, general additive model

Procedia PDF Downloads 232
1066 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves

Authors: Aymen Laadhari

Abstract:

During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects of the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and the numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative for readily modeling large deformations and handling contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh while easily modeling the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced that guarantees third-order convergence. High-fidelity computational geometries are built, and simulations are performed under physiological conditions. We address in detail the main features of the proposed method and report several experiments illustrating its accuracy and efficiency.
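
The abstract does not specify which third-order Newton variant is used. As an illustration of what cubic convergence means for a Newton-type iteration, the sketch below uses Halley's method, a classical third-order example, on a scalar equation; this is not a claim about the paper's actual scheme, which operates on the full nonlinear FSI system.

```python
def halley(f, df, d2f, x, tol=1e-12, max_iter=20):
    """Halley iteration: x_{k+1} = x_k - 2*f*f' / (2*f'^2 - f*f'').

    Converges cubically near a simple root, versus quadratically for
    classical Newton-Raphson.
    """
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x
    return x

# solve x^2 - 2 = 0, i.e. compute sqrt(2)
root = halley(lambda x: x * x - 2.0,
              lambda x: 2.0 * x,
              lambda x: 2.0,
              x=1.0)
```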

Keywords: eulerian, level set, newton, valve

Procedia PDF Downloads 275
1065 Cement Bond Characteristics of Artificially Fabricated Sandstones

Authors: Ashirgul Kozhagulova, Ainash Shabdirova, Galym Tokazhanov, Minh Nguyen

Abstract:

Synthetic rocks are advantageous over natural rocks in terms of availability and consistency when studying the impact of a particular parameter. Artificial rocks can be fabricated using a variety of techniques, such as mixing sand with Portland cement or gypsum, firing a mixture of sand and fine borosilicate glass powder, or in-situ precipitation of a calcite solution. In this study, sodium silicate solution was used as the cementing agent for quartz sand. The molded soft cylindrical sandstone samples are placed in a gas-tight pressure vessel, where the material hardens as the chemical reaction between carbon dioxide and the silicate solution progresses. The vessel allows uniform dispersion of carbon dioxide and control over the ambient gas pressure. This paper shows how the bonding material is initially distributed in the intergranular space and on the surface of the sand particles, using electron microscopy and energy-dispersive spectroscopy. The strength of the cement bond is observed as a function of temperature, and the impact of the cementing agent dosage on the micro- and macro-characteristics of the sandstone is investigated. Analysis of the cement bond at the micro level helps to trace changes in particle-bonding damage after potential yielding. Shearing behavior and compressional response have been examined, resulting in estimates of the shearing resistance and cohesion of the sandstone. These are main input values for mathematical models predicting sand production from weak clastic oil reservoir formations.
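
The shearing resistance and cohesion mentioned above are typically the Mohr-Coulomb parameters fitted from triaxial results, tau = c + sigma_n·tan(phi). The sketch below evaluates that criterion; the cohesion and friction angle are illustrative values, not the measured properties of these sandstones.

```python
import math

def shear_strength(sigma_n, cohesion, friction_angle_deg):
    """Mohr-Coulomb shear strength (same units as sigma_n and cohesion).

    sigma_n            : effective normal stress on the failure plane
    cohesion           : intercept c of the failure envelope
    friction_angle_deg : internal friction angle phi, in degrees
    """
    return cohesion + sigma_n * math.tan(math.radians(friction_angle_deg))

# hypothetical weak-sandstone parameters, stresses in MPa
tau = shear_strength(sigma_n=2.0, cohesion=0.5, friction_angle_deg=30.0)
```

Fitting c and phi to a set of triaxial (sigma_n, tau) pairs is what turns the lab program above into inputs for sand-production prediction models.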

Keywords: artificial sandstone, cement bond, microstructure, SEM, triaxial shearing

Procedia PDF Downloads 163
1064 A Hard Day's Night: Persistent Within-Individual Effects of Job Demands and the Role of Recovery Processes

Authors: Helen Pluut, Remus Ilies, Nikos Dimotakis, Maral Darouei

Abstract:

This study examines recovery from work as an important daily activity with implications for workplace behavior. Building on affective events theory and the stressor-detachment model, this paper proposes and tests a comprehensive within-individual model that uncovers the role of recovery processes at home in linking workplace demands (e.g., workload) and stressors (e.g., workplace incivility) to next-day organizational citizenship behaviors (OCBs). Our sample consisted of 126 full-time employees at a large Midwestern university. For a period of 16 working days, these employees were asked to fill out three electronic surveys while at work. The first survey (sent out in the morning) measured self-reported sleep quality, recovery experiences the previous day at home, and momentary affect. The second survey (sent out close to the end of the workday) measured job demands and stressors as well as OCBs, while the third survey, in the evening, assessed job strain. Data were analyzed using Hierarchical Linear Modeling (HLM). Results indicated that job demands and stressors at work made it difficult to unwind properly at home and have a good night's sleep, which had repercussions for next-day morning affect, which, in turn, influenced OCBs. It can be concluded that recovery processes are vital to an individual's daily effective functioning and behavior at work, but recovery may become impaired after a hard day's work. Thus, our study sheds light on the potentially persistent nature of strain experienced as a result of work and points to the importance of recovery processes in enabling individuals to avoid such cross-day spillover. Our paper discusses implications for theory and practice as well as directions for future research.
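
A within-individual (Level-1) analysis like the HLM above hinges on separating each person's daily fluctuations from their own average. A minimal person-mean centering sketch with made-up diary scores is shown below; HLM software performs this centering plus the multilevel estimation on top of it.

```python
def person_mean_center(diary):
    """Return {person: [score - that person's own mean, ...]}.

    The centered scores carry only within-individual (day-to-day) variation;
    between-individual differences are removed with each person's mean.
    """
    centered = {}
    for person, scores in diary.items():
        m = sum(scores) / len(scores)
        centered[person] = [s - m for s in scores]
    return centered

# hypothetical 3-day workload diary for two employees
diary = {"emp1": [3.0, 5.0, 4.0], "emp2": [6.0, 6.0, 9.0]}
within = person_mean_center(diary)
```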

Keywords: affect, job demands, organizational citizenship behavior, recovery, strain

Procedia PDF Downloads 136
1063 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters

Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens

Abstract:

Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as acute aortic syndrome (AAS). Evidence suggests that, unlike aortic dissection, some intramural haematomas may regress with medical management; however, intramural haematomas have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to identify those that may not need high-risk emergency intervention. A computational aortic model was used in this study to try to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature identify 11 mm as the radial clot thickness associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests that finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma.

Keywords: intramural haematoma, acute aortic syndrome, finite element analysis

Procedia PDF Downloads 429
1062 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a subfield of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has produced a particular non-linear dynamic: digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At present, US federal policy, like that of virtually all other countries, maintains clearly defined boundaries at the national level between various epistemological agencies, that is, agencies that in one way or another mediate the functional use of knowledge. These agencies take the form of patent and trademark offices, national library and archive systems, departments of education, agencies such as the FTC, university systems and regulations, military research organizations such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding bodies for scientific research, and legislative committees and subcommittees that attempt to alter the laws governing epistemological functions. All of these agencies are constantly creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions: they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions: in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States, and to analyze how such a high-level framework would affect the epistemological work of the multitude of national agencies within the United States. This paper is an exploratory study of such a framework. The author's primary hypothesis is that such a function is possible but would require extensive reframing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the Department of Homeland Security (DHS) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 70
1061 A Structuring and Classification Method for Assigning Application Areas to Suitable Digital Factory Models

Authors: R. Hellmuth

Abstract:

The practice of factory planning has changed considerably, especially with regard to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, and a VUCA world (volatility, uncertainty, complexity, and ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and has become an indispensable tool. Furthermore, digital building models are increasingly used in factories to support facility management and manufacturing processes. The main research question of this paper is therefore: what kind of digital factory model is suitable for the different areas of application during the operation of a factory? First, different types of digital factory models are investigated, and their properties and usability for particular use cases are analysed. The investigation covers point cloud models, building information models, photogrammetry models, and each of these enriched with sensor data; it examines which digital models allow simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the appropriate digital factory model. It is shown how a completely digitalized maintenance process can be supported by a digital factory model through the provision of information; among other purposes, the digital factory model is used for indoor navigation, information provision, and the display of sensor data. In summary, the paper presents a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented, and the systematic selection of digital factory models with the corresponding application cases is evaluated.
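
The assignment step described above, mapping each application area to the most suitable digital factory model, can be sketched as a simple lookup. The assignments below are illustrative examples in the spirit of the survey, not the paper's actual results.

```python
# hypothetical area-to-model assignments; the paper derives the real mapping
# from its survey of application areas
MODEL_FOR_AREA = {
    "indoor navigation": "building information model",
    "as-built geometry check": "point cloud model",
    "visual documentation": "photogrammetry model",
    "condition monitoring": "building information model + sensor data",
}

def suitable_model(application_area):
    """Return the digital factory model assigned to an application area."""
    return MODEL_FOR_AREA.get(application_area, "no suitable model catalogued")
```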

Keywords: building information modeling, digital factory model, factory planning, maintenance

Procedia PDF Downloads 103
1060 Modeling Vegetation Phenological Characteristics of Terrestrial Ecosystems

Authors: Zongyao Sha

Abstract:

Green vegetation plays a vital role in energy flows and matter cycles in terrestrial ecosystems, and vegetation phenology may not only be influenced by but also impose active feedback on climate changes. The phenological events of vegetation, such as the start of the season (SOS), end of the season (EOS), and length of the season (LOS), can respond to climate changes and affect gross primary productivity (GPP). Here we coupled satellite remote sensing imagery with FLUXNET observations to systematically map the shift of SOS, EOS, and LOS in global vegetated areas and explored their response to climate fluctuations and feedback on GPP during the last two decades. Results indicated that SOS advanced significantly, at an average rate of 0.19 days/year at a global scale, particularly in the northern hemisphere above the middle latitude (≥30°N) and that EOS was slightly delayed during the past two decades, resulting in prolonged LOS in 72.5% of the vegetated area. The climate factors, including seasonal temperature and precipitation, are attributed to the shifts in vegetation phenology but with a high spatial and temporal difference. The study revealed interactions between vegetation phenology and climate changes. Both temperature and precipitation affect vegetation phenology. Higher temperature as a direct consequence of global warming advanced vegetation green-up date. On the other hand, 75.9% and 20.2% of the vegetated area showed a positive correlation and significant positive correlation between annual GPP and length of vegetation growing season (LOS), likely indicating an enhancing effect on vegetation productivity and thus increased carbon uptake from the shifted vegetation phenology. Our study highlights a comprehensive view of the vegetation phenology changes of the global terrestrial ecosystems during the last two decades. 
The interactions between the shifted vegetation phenology and climate changes may provide useful information for better understanding the future trajectory of global climate changes. The feedback on GPP from the shifted vegetation phenology may serve as an adaptation mechanism for terrestrial ecosystems to mitigate global warming through improved carbon uptake from the atmosphere.
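The reported trend and correlation statistics can be sketched with minimal code; all series below are hypothetical illustrations constructed for the example, not the study's satellite or FLUXNET data.

```python
import numpy as np

def trend_days_per_year(years, doy):
    """Least-squares slope of a phenology date (day of year) vs. year."""
    slope, _intercept = np.polyfit(years, doy, 1)
    return slope

# Hypothetical 20-year start-of-season series advancing 0.19 days/year
years = np.arange(2001, 2021)
sos = 120.0 - 0.19 * (years - years[0])
print(round(trend_days_per_year(years, sos), 2))  # -0.19 (an advance)

# Correlation between growing-season length (LOS, days) and annual GPP
los = np.array([180, 182, 181, 185, 187, 186, 190, 189, 192, 195])
gpp = np.array([1.10, 1.12, 1.11, 1.15, 1.16, 1.15, 1.20, 1.18, 1.22, 1.25])
r = np.corrcoef(los, gpp)[0, 1]
print(r > 0)  # True: longer seasons coincide with higher productivity
```

In practice the per-pixel SOS/EOS dates would come from smoothed vegetation-index time series, but the trend and correlation bookkeeping is the same.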

Keywords: vegetation phenology, growing season, NPP, correlation analysis

Procedia PDF Downloads 97
1059 Reliability Assessment and Failure Detection in a Complex Human-Machine System Using Agent-Based and Human Decision-Making Modeling

Authors: Sanjal Gavande, Thomas Mazzuchi, Shahram Sarkani

Abstract:

In a complex aerospace operational environment, identifying failures in a procedure involving multiple human-machine interactions is difficult. These failures could lead to accidents causing loss of hardware or human life. The likelihood of failure further increases if operational procedures are tested for a novel system with multiple human-machine interfaces and with no prior performance data. The existing approach in the literature of reviewing complex operational tasks in a flowchart or tabular form doesn't provide any insight into potential system failures due to human decision-making ability. To address these challenges, this research explores an agent-based simulation approach for reliability assessment and fault detection in complex human-machine systems while utilizing a human decision-making model. The simulation will predict the emergent behavior of the system due to the interaction between humans and their decision-making capability with the varying states of the machine, and vice versa. Overall system reliability will be evaluated based on a defined set of success-criteria conditions and the number of recorded failures over an assigned limit of Monte Carlo runs. The study also aims at identifying high-likelihood failure locations for the system. The research concludes that system reliability and failures can be effectively calculated when individual human and machine agent states are clearly defined. This research is limited to the operations phase of a system lifecycle process in an aerospace environment only. Further exploration of the proposed agent-based and human decision-making model will be required to allow for a greater understanding of this topic for application outside of the operations domain.
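The reliability-evaluation scheme described (success fraction over a fixed budget of Monte Carlo runs, plus a tally of high-likelihood failure locations) can be sketched as follows. The per-step error probabilities, the step count, and the run budget are illustrative placeholders, not values from the study.

```python
import random

def run_procedure(rng, p_human_err=0.02, p_machine_fault=0.01, steps=10):
    """One Monte Carlo run of a multi-step human-machine procedure.
    Returns (success, step_that_failed_or_None)."""
    for step in range(steps):
        # Each step fails if either the human agent or the machine agent
        # leaves its success state (probabilities are placeholders).
        if rng.random() < p_human_err or rng.random() < p_machine_fault:
            return False, step
    return True, None

def estimate_reliability(runs=10_000, seed=42):
    rng = random.Random(seed)
    successes = 0
    failures_by_step = {}  # tally of where failures occur ("hot spots")
    for _ in range(runs):
        ok, step = run_procedure(rng)
        if ok:
            successes += 1
        else:
            failures_by_step[step] = failures_by_step.get(step, 0) + 1
    return successes / runs, failures_by_step

reliability, hotspots = estimate_reliability()
# Per-step survival is (1-0.02)*(1-0.01), so reliability ≈ 0.9702**10 ≈ 0.74
print(0.70 < reliability < 0.78)  # True
```

A richer agent model would replace the flat probabilities with state-dependent human decision rules, but the success-criteria accounting stays the same.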

Keywords: agent-based model, complex human-machine system, human decision-making model, system reliability assessment

Procedia PDF Downloads 164
1058 Towards Designing of a Potential New HIV-1 Protease Inhibitor Using Quantitative Structure-Activity Relationship Study in Combination with Molecular Docking and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Hatim Soufi, Sanchaita RajkhowaI, Ashwani Sharma, Subrata Sinha, Said Belaaouad

Abstract:

Human Immunodeficiency Virus type 1 protease (HIV-1 PR) is one of the most challenging targets of antiretroviral therapy used in the treatment of AIDS-infected people. The performance of protease inhibitors (PIs) is limited by the development of protease mutations that can promote resistance to the treatment. The current study was carried out using statistics and bioinformatics tools. A series of thirty-three compounds with known enzymatic inhibitory activities against HIV-1 protease was used in this paper to build a mathematical model relating the structure to the biological activity. These compounds were designed by software; their descriptors were computed using various tools, such as Gaussian, Chem3D, ChemSketch and MarvinSketch. Computational methods generated the best model based on its statistical parameters. The model’s applicability domain (AD) was elaborated. Furthermore, one compound has been proposed as efficient against HIV-1 protease with comparable biological activity to the existing ones; this drug candidate was evaluated using ADMET properties and Lipinski’s rule. Molecular Docking performed on Wild Type and Mutant Type HIV-1 proteases allowed the investigation of the interaction types displayed between the proteases and the ligands, Darunavir (DRV) and the new drug (ND). Molecular dynamics simulation was also used in order to investigate the complexes’ stability, allowing a comparative study of the performance of both ligands (DRV & ND). Our study suggested that the new molecule showed comparable results to that of Darunavir and may be used for further experimental studies. Our study may also be used as a pipeline to search and design new potential inhibitors of HIV-1 proteases.

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 29
1057 Two Dimensional Steady State Modeling of Temperature Profile and Heat Transfer of Electrohydrodynamically Enhanced Micro Heat Pipe

Authors: H. Shokouhmand, M. Tajerian

Abstract:

A numerical investigation of laminar forced convection flow through a square cross-section micro heat pipe under an applied electrohydrodynamic (EHD) field has been studied. In the present study, pentane is selected as the working fluid. Temperature and velocity profiles and heat transfer enhancement in the micro heat pipe under the EHD field, for two-dimensional, single-phase fluid flow in the steady state regime, have been numerically calculated. In this model, only the Coulomb force is considered. The study has been carried out for Reynolds numbers from 10 to 100 and EHD force fields up to 8 kV. The coupled, non-linear equations governing the model (continuity, momentum, and energy equations) have been solved simultaneously by CFD numerical methods. The steady state behavior of the affecting parameters, e.g. friction factor, average temperature, Nusselt number, and heat transfer enhancement criteria, has been evaluated. It has been observed that by increasing the Reynolds number, the effect of the EHD force becomes more significant, and for smaller Reynolds numbers the heat transfer enhancement criterion increases at a higher rate. By obtaining and plotting the mentioned parameters, it has been shown that the EHD field enhances the heat transfer process. The numerical results show that by increasing the EHD force field, the absolute values of the Nusselt number and friction factor increase and the average temperature of the fluid flow decreases. However, the rate of increase of the Nusselt number is greater than that of the friction factor, which makes applying an EHD force field for heat transfer enhancement in micro heat pipes acceptable and applicable. The numerical results of the model are in good agreement with the experimental results available in the literature.

Keywords: micro heat pipe, electrohydrodynamic force, Nusselt number, average temperature, friction factor

Procedia PDF Downloads 263
1056 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain's subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 119
1055 Poly(propylene fumarate) Copolymers with Phosphonic Acid-based Monomers Designed as Bone Tissue Engineering Scaffolds

Authors: Görkem Cemali̇, Avram Aruh, Gamze Torun Köse, Erde Can ŞAfak

Abstract:

In order to heal bone disorders, the conventional methods, which involve the use of autologous and allogenous bone grafts or permanent implants, have certain disadvantages such as limited supply, disease transmission, or adverse immune response. A biodegradable material that acts as structural support to the damaged bone area and serves as a scaffold that enhances bone regeneration and guides bone formation is one desirable solution. Poly(propylene fumarate) (PPF), an unsaturated polyester that can be copolymerized with appropriate vinyl monomers to give biodegradable network structures, is a promising candidate polymer for preparing bone tissue engineering scaffolds. In this study, hydroxyl-terminated PPF was synthesized and thermally cured with vinyl phosphonic acid (VPA) and diethyl vinyl phosphonate (VPES) in the presence of the radical initiator benzoyl peroxide (BP), with changing co-monomer weight ratios (10-40 wt%). In addition, the synthesized PPF was cured with the VPES comonomer at body temperature (37 °C) in the presence of the BP initiator, an N,N-dimethyl-para-toluidine catalyst, and varying amounts of beta-tricalcium phosphate (0-20 wt% β-TCP) as filler via radical polymerization, to prepare composite materials that can be used in injectable forms. The thermomechanical properties, compressive properties, hydrophilicity, and biodegradability of the PPF/VPA and PPF/VPES copolymers were determined and analyzed with respect to the copolymer composition. Biocompatibility of the resulting polymers and their composites was determined by the MTS assay, osteoblast activity was explored with von Kossa, alkaline phosphatase, and osteocalcin activity analyses, and the effects of VPA and VPES comonomer composition on these properties were investigated. Thermally cured PPF/VPA and PPF/VPES copolymers with different compositions exhibited compressive modulus and strength values in the wide ranges of 10–836 MPa and 14–119 MPa, respectively.
MTS assay studies showed that the majority of the tested compositions were biocompatible, and the overall results indicated that PPF/VPA and PPF/VPES network polymers show significant potential for applications as bone tissue engineering scaffolds, where varying the PPF and co-monomer ratio provides adjustable and controllable properties of the end product. The body-temperature-cured PPF/VPES/β-TCP composites exhibited significantly lower compressive modulus and strength values than the thermally cured PPF/VPES copolymers and were therefore found to be useful as scaffolds for cartilage tissue engineering applications.

Keywords: biodegradable, bone tissue, copolymer, poly(propylene fumarate), scaffold

Procedia PDF Downloads 163
1054 Development and Characterization of Topical 5-Fluorouracil Solid Lipid Nanoparticles for the Effective Treatment of Non-Melanoma Skin Cancer

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Background: The topical and systemic toxicity associated with present nonmelanoma skin cancer (NMSC) treatment therapy using 5-Fluorouracil (5-FU) makes it necessary to develop a novel delivery system having lesser toxicity and better control over drug release. Solid lipid nanoparticles offer many advantages, such as controlled and localized release of entrapped actives, nontoxicity, and better tolerance. Aim: To investigate the safety and efficacy of 5-FU loaded solid lipid nanoparticles as a topical delivery system for the treatment of nonmelanoma skin cancer. Method: Topical solid lipid nanoparticles of 5-FU were prepared using Compritol 888 ATO (Glyceryl behenate) as the lipid component and Pluronic F68 (Poloxamer 188), Tween 80 (Polysorbate 80), and Tyloxapol (4-(1,1,3,3-Tetramethylbutyl) phenol polymer with formaldehyde and oxirane) as surfactants. The SLNs were prepared with the emulsification method. Different formulation parameters, viz. type and ratio of surfactant, ratio of lipid, and ratio of surfactant:lipid, were investigated for their effect on particle size and drug entrapment efficiency. Results: Characterization of the SLNs, including Transmission Electron Microscopy (TEM), Differential Scanning Calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), particle size determination, polydispersity index, entrapment efficiency, drug loading, ex vivo skin permeation and skin retention studies, and skin irritation and histopathology studies, was performed. TEM results showed that the shape of the SLNs was spherical, with a size range of 200-500 nm. Higher encapsulation efficiency was obtained for batches having higher concentrations of surfactant and lipid; the maximum, 64.3%, was found for the SLN-6 batch, with a size of 400.1±9.22 nm and a PDI of 0.221±0.031. Optimized SLN batches and the marketed 5-FU cream were compared for flux across rat skin and skin drug retention.
Lower flux and higher skin retention were obtained for the SLN formulation in comparison to the topical 5-FU cream, which ensures less systemic toxicity and better control of drug release across the skin. Chronic skin irritation studies showed no serious erythema or inflammation, and histopathology studies showed no significant change in the physiology of the epidermal layers of rat skin. These studies suggest that the optimized SLN formulation is more efficient than the marketed cream and safer for long-term NMSC treatment regimens. Conclusion: The topical and systemic toxicity associated with long-term use of 5-FU in the treatment of NMSC can be minimized through its controlled release, with significant drug retention and minimal flux across the skin. The study may provide a better alternative for effective NMSC treatment.
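Entrapment efficiency, one of the characterization metrics above, is commonly computed indirectly from the free (unentrapped) drug measured in the supernatant. A minimal sketch, with hypothetical masses chosen only to reproduce the 64.3% reported for the SLN-6 batch:

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Percent of drug entrapped in the nanoparticles (indirect method)."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Hypothetical example: 10 mg of 5-FU added, 3.57 mg recovered free
print(round(entrapment_efficiency(10.0, 3.57), 1))  # 64.3
```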

Keywords: 5-FU, topical formulation, solid lipid nanoparticles, non melanoma skin cancer

Procedia PDF Downloads 511
1053 Transcriptional Evidence for the Involvement of MyD88 in Flagellin Recognition: Genomic Identification of Rock Bream MyD88 and Comparative Analysis

Authors: N. Umasuthan, S. D. N. K. Bathige, W. S. Thulasitha, I. Whang, J. Lee

Abstract:

The MyD88 is an evolutionarily conserved host-expressed adaptor protein that is essential for proper TLR/IL1R immune-response signaling. A previously identified complete cDNA (1626 bp) of OfMyD88 comprised an ORF of 867 bp encoding a protein of 288 amino acids (32.9 kDa). The gDNA (3761 bp) of OfMyD88 revealed a quinquepartite genome organization composed of 5 exons (with sizes of 310, 132, 178, 92 and 155 bp) separated by 4 introns. All the introns displayed splice signals consistent with the consensus GT/AG rule. A bipartite domain structure with two domains, namely a death domain (24-103) coded by the 1st exon and a TIR domain (151-288) coded by the last 3 exons, was identified through in silico analysis. Moreover, homology modeling of these two domains revealed a similar quaternary folding nature between the human and rock bream homologs. A comprehensive comparison of vertebrate MyD88 genes showed that they possess a 5-exonic structure. In this structure, the last three exons were strongly conserved, suggesting that a rigid structure has been maintained during vertebrate evolution. A cluster of TATA box-like sequences was found 0.25 kb upstream of the cDNA starting position. In addition, the putative 5'-flanking region of OfMyD88 was predicted to have TFBSs implicated in TLR signaling, including copies of NF-κB1, APRF/STAT3, Sp1, IRF1 and 2, and Stat1/2. Using the qPCR technique, ubiquitous mRNA expression was detected, including in liver and blood. Furthermore, significantly up-regulated transcriptional expression of OfMyD88 was detected in head kidney (12-24 h; >2-fold), spleen (6 h; 1.5-fold), liver (3 h; 1.9-fold) and intestine (24 h; ~2-fold) post-flagellin (Fla) challenge. These data suggest a crucial role for MyD88 in the antibacterial immunity of teleosts.

Keywords: MyD88, innate immunity, flagellin, genomic analysis

Procedia PDF Downloads 412
1052 Earth Observations and Hydrodynamic Modeling to Monitor and Simulate the Oil Pollution in the Gulf of Suez, Red Sea, Egypt

Authors: Islam Abou El-Magd, Elham Ali, Moahmed Zakzouk, Nesreen Khairy, Naglaa Zanaty

Abstract:

The marine environment and coastal zone are rich in natural resources that contribute to the local economy of Egypt. The Gulf of Suez and Red Sea area accommodates diverse human activities that contribute to the local economy, including oil exploration and production, touristic activities, export and import harbors, etc.; however, it is always under the threat of pollution due to human interaction and activities. This research aimed at integrating in-situ measurements and remotely sensed data with a hydrodynamic model to map and simulate oil pollution. High-resolution satellite sensors, including Sentinel 2 and Planet Labs imagery, were used to trace the oil pollution. The spectral band ratio of band 4 (infrared) over band 3 (red) underpinned the mapping of point source pollution from the oil industrial estates. This ratio is supported by the absorption windows detected in the hyperspectral profiles. An ASD in-situ hyperspectral device was used to measure the oil pollution in the marine environment experimentally. The experiment measured water behavior in three cases: a) clear water without oil, b) water covered with raw oil, and c) water a while after the raw oil was introduced. The spectral curves clearly identified absorption windows for oil pollution, particularly at 600-700 nm. The MIKE 21 model was applied to simulate the dispersion of the oil contamination and create scenarios for crisis management. The model requires precise preparation of bathymetry, tide, wave, and atmospheric data, which were obtained partly from online modeled data and partly from historical in-situ stations. The simulation made it possible to project the movement of the oil spill and could support a warning system for mitigation. Details of the research results will be described in the paper.
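The band-ratio step can be sketched as a per-pixel computation; the reflectance values and the threshold below are hypothetical illustrations, not calibrated Sentinel 2 data.

```python
import numpy as np

def band_ratio(nir, red, eps=1e-6):
    """Per-pixel ratio of band 4 (infrared) over band 3 (red).
    eps guards against division by zero in dark pixels."""
    return nir / (red + eps)

# Toy 2x2 reflectance patches (hypothetical values)
nir = np.array([[0.30, 0.05],
                [0.28, 0.04]])
red = np.array([[0.10, 0.10],
                [0.09, 0.08]])
ratio = band_ratio(nir, red)

# Oil absorbs in the windows noted in the abstract (600-700 nm), so an
# illustrative threshold flags low-ratio pixels as potential slicks.
oil_mask = ratio < 1.0
print(oil_mask.tolist())  # [[False, True], [False, True]]
```

In an operational workflow the same ratio would be computed over whole scenes and the threshold calibrated against the in-situ hyperspectral measurements.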

Keywords: oil pollution, remote sensing, modelling, Red Sea, Egypt

Procedia PDF Downloads 340
1051 The Prevalence and Profile of Extended Spectrum β-Lactamase (ESBL) Producing Enterobacteriaceae Species in the Intensive Care Unit (ICU) Setting of a Tertiary Care Hospital of North India

Authors: Harmeet Pal Singh Dhooria, Deepinder Chinna, UPS Sidhu, Alok Jain

Abstract:

Serious infections caused by gram-negative bacteria are a significant cause of mortality and morbidity in the hospital setting. In acute care facilities like intensive care units (ICUs), the intensity of antimicrobial use, together with a population highly susceptible to infection, creates an environment that facilitates both the emergence and transmission of Extended Spectrum β-lactamase (ESBL) producing Enterobacteriaceae species. The study was conducted in the Medical Intensive Care Unit (MICU) and the Pulmonary Critical Care Unit (PCCU) of the Department of Medicine, Dayanand Medical College and Hospital, Ludhiana, Punjab, India. Out of a total of 1108 samples of urine, blood, and respiratory tract secretions received for culture and sensitivity analysis from the Medical Intensive Care Unit and Pulmonary Critical Care Unit, a total of 170 isolates of Enterobacteriaceae species were obtained, which were then included in our study. Out of these 170 isolates, confirmed ESBL production was seen in 116 (68.24%) cases. E. coli was the most common species isolated (56.47%), followed by Klebsiella (32.94%), Enterobacter (5.88%), Citrobacter (3.53%), Enterobacter (0.59%) and Morganella (0.59%) among the total isolates. The rate of ESBL production was higher in Klebsiella (78.57%) than in E. coli (60.42%). ESBL producers were found to be significantly more common in patients with a prior history of hospitalization, antibiotic use, and prolonged ICU stay. A significantly increased prevalence of ESBL-related infections was also observed in patients with a history of catheterization or central line insertion, but not in patients with a history of intubation. Patients who had an underlying malignancy had a significantly higher prevalence of ESBL-related infections as compared to other co-morbid illnesses. A slightly significant difference in the rate of mortality/LAMA was observed in the ESBL-producer versus the non-ESBL-producer group.
The rate of mortality/LAMA was significantly higher in ESBL-related UTI, but not in ESBL-related respiratory tract and bloodstream infections. ESBL-producing isolates had significantly higher rates of resistance to Cefepime and Piperacillin/Tazobactam, and to non-β-lactam antibiotics like Amikacin and Ciprofloxacin. The level of resistance to Imipenem was lower as compared to other antibiotics. However, it was noted that ESBL-producing isolates had higher levels of resistance to Imipenem as compared to non-ESBL-producing isolates. Conclusion: The prevalence of ESBL-producing organisms was found to be very high (68.24%) among Enterobacteriaceae isolates in our ICU setting, as in other ICU care settings around the world.

Keywords: enterobacteriaceae, extended spectrum β-lactamase (ESBL), ICU, antibiotic resistance

Procedia PDF Downloads 273
1050 Search for APN Permutations in Rings ℤ_2×ℤ_2^k

Authors: Daniel Panario, Daniel Santana de Freitas, Brett Stevens

Abstract:

Almost Perfect Nonlinear (APN) permutations with optimal resistance against differential cryptanalysis can be found in several domains. The permutation used in the standard for symmetric cryptography (the AES), for example, is based on a special kind of inversion in GF(2^8). Although very close to APN (2-uniform), this permutation still contains one value 4 in its differential spectrum, which means that, rigorously, it must be classified as 4-uniform. This fact motivates the search for fully APN permutations in other domains of definition. The extremely high complexity associated with this kind of problem precludes an exhaustive search for an APN permutation with 256 elements from being performed without the support of a suitable mathematical structure. On the other hand, in principle, there is nothing to indicate which mathematically structured domains can effectively help the search, and it is necessary to test several domains. In this work, the search for APN permutations in rings ℤ_2×ℤ_2^k is investigated. After a full, exhaustive search with k=2 and k=3, all possible APN permutations in those rings were recorded, together with their differential profiles. Some very promising heuristics in these cases were collected so that, when used as a basis to prune backtracking for the same search in ℤ_2×ℤ_8 (search space of size 16! ≈ 2^44), just a few tenths of a second were enough to produce an APN permutation on a single CPU. Those heuristics were empirically extrapolated so that they could be applied to a backtracking search for APNs over ℤ_2×ℤ_16 (search space of size 32! ≈ 2^117). The best permutations found in this search were further refined through Simulated Annealing, with a definition of neighbors suitable to this domain. The best result produced with this scheme was a 3-uniform permutation over ℤ_2×ℤ_16 with only 24 values equal to 3 in the differential spectrum (all the other 968 values were less than or equal to 2, as should be the case for an APN permutation).
Although far from being fully APN, this result is technically better than a 4-uniform permutation and demanded only a few seconds on a single CPU. This is a strong indication that the use of mathematically structured domains, like the rings described in this work, together with heuristics based on smaller cases, can lead to dramatic cuts in the computational resources involved in the complexity of the search for APN permutations in extremely large domains.
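The differential-uniformity bookkeeping behind terms like "2-uniform" and "4-uniform" can be made concrete with a small sketch. The example below uses the classic XOR-based setting and checks the Gold permutation x³ over GF(2³), which is known to be APN; for the rings ℤ_2×ℤ_2^k studied in the abstract, the XOR in the difference computation would be replaced by the ring's own addition and subtraction.

```python
def gf8_mul(a, b):
    """Multiply in GF(2^3) modulo the primitive polynomial x^3 + x + 1."""
    r = 0
    for i in range(3):          # schoolbook carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce degree-4 and degree-3 terms
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def differential_uniformity(sbox):
    """Largest entry of the XOR difference distribution table; 2 means APN."""
    n = len(sbox)
    worst = 0
    for a in range(1, n):       # every nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

# x -> x^3 over GF(2^3): a permutation with optimal differential spectrum
sbox = [gf8_mul(x, gf8_mul(x, x)) for x in range(8)]
print(sorted(sbox) == list(range(8)))   # True: it is a permutation
print(differential_uniformity(sbox))    # 2, i.e. APN
```

The same table-driven check, with the group operation swapped in, is what a backtracking search over ℤ_2×ℤ_2^k would prune against.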

Keywords: APN permutations, heuristic searches, symmetric cryptography, S-box design

Procedia PDF Downloads 153
1049 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and the greater demands of the population regarding gas emissions and their effects on the environment, as well as on human health, make the automotive industry reinforce research focused on reducing levels of contamination. This reduction can be achieved through the implementation of improvements in internal combustion engines, in such a way that they promote the reduction of both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, which is a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of the swirl and tumble coefficients on the performance and on the air pollutant emissions of an engine. Initially, the discharge coefficient is calculated with the software Converge CFD 3D, given that it is an input parameter in GT-Power. Mesh sensitivity tests are made on the 3D geometry built for this purpose, using the mass flow rate through the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and then data such as the mass trapped in the cylinder, heat release rate, and accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influences can be observed when compared to the results obtained previously.

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 99
1048 Global Experiences in Dealing with Biological Epidemics with an Emphasis on COVID-19 Disease: Approaches and Strategies

Authors: Marziye Hadian, Alireza Jabbari

Abstract:

Background: The World Health Organization has identified COVID-19 as a public health emergency and is urging governments to stop the virus transmission by adopting appropriate policies. In this regard, authorities have taken different approaches to cut the transmission chain or control the spread of the disease. The questions we now face include: What are these approaches? What tools should be used to implement each preventive protocol? And what is the impact of each approach? Objective: The aim of this study was to determine the approaches to biological epidemics and related prevention tools, with an emphasis on the COVID-19 disease. Data sources: Databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, and ProQuest were employed for data extraction. Furthermore, authentic sources such as the WHO website, the published reports of relevant countries, as well as the Worldometer website were evaluated for grey literature. The time-frame of the study was from 1 December 2019 to 30 May 2020. Methods: The present study was a systematic study of publications related to the prevention strategies for the COVID-19 disease. The study was carried out based on the PRISMA guidelines, with CASP for articles and AACODS for grey literature. Results: The study findings showed that, in order to confront the COVID-19 epidemic, there are in general three approaches, "mitigation", "active control", and "suppression", and four strategies, "quarantine", "isolation", "social distance", and "lockdown", in both individual and social dimensions to deal with epidemics. The selection and implementation of each approach requires specific strategies and has different effects when it comes to controlling and inhibiting the disease. Key finding: One possible approach to control the disease is to change individual behavior and lifestyle.
In addition to the prevention strategies, the use of masks, personal hygiene principles such as regular hand washing and keeping contaminated hands away from the face, and public health principles such as sneezing and coughing etiquette and the safe disposal of personal protective equipment must be strictly observed. These measures have not been included in the category of prevention tools; however, they have a great impact on controlling the epidemic, especially the new coronavirus epidemic. Conclusion: Although the use of different approaches to control and inhibit biological epidemics depends on numerous variables, global experience suggests that some of these approaches are ineffective. The use of previous experiences in the world, along with the current experiences of countries, can be very helpful in choosing the appropriate approach for each country, in accordance with the characteristics of that country, and can lead to the reduction of possible costs at the national and international levels.
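The logic of the suppression-type approaches can be illustrated with a minimal transmission model (a basic SIR sketch, not part of the reviewed study; all rates are placeholder values): lowering the transmission rate, as a lockdown aims to do, lowers the epidemic peak.

```python
def sir_peak_infected(beta, gamma=0.1, days=300, dt=0.1, i0=1e-4):
    """Peak infected fraction from a forward-Euler SIR integration.
    beta: transmission rate; gamma: recovery rate (placeholder values)."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt      # S -> I flow this step
        s -= new_inf
        i += new_inf - gamma * i * dt    # I -> R flow this step
        peak = max(peak, i)
    return peak

unmitigated = sir_peak_infected(beta=0.4)   # R0 = beta/gamma = 4
suppressed = sir_peak_infected(beta=0.15)   # R0 = 1.5 after intervention
print(suppressed < unmitigated)  # True: lowering beta flattens the peak
```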

Keywords: novel corona virus, COVID-19, approaches, prevention tools, prevention strategies

Procedia PDF Downloads 124
1047 The Effect of Satisfaction with the Internet on Online Shopping Attitude With TAM Approach Controlled By Gender

Authors: Velly Anatasia

Abstract:

In the last few decades extensive research has been conducted into information technology (IT) adoption, testing a series of factors considered to be essential for improved diffusion. Some studies analyze IT characteristics such as usefulness, ease of use and/or security, others focus on the emotions and experiences of users and a third group attempts to determine the importance of socioeconomic user characteristics such as gender, educational level and income. The situation is similar regarding e-commerce, where the majority of studies have taken for granted the importance of including these variables when studying e-commerce adoption, as these were believed to explain or forecast who buys or who will buy on the internet. Nowadays, the internet has become a marketplace suitable for all ages and incomes and both genders and thus the prejudices linked to the advisability of selling certain products should be revised. The objective of this study is to test whether the socioeconomic characteristics of experienced e-shoppers such as gender rally moderate the effect of their perceptions of online shopping behavior. Current development of the online environment and the experience acquired by individuals from previous e-purchases can attenuate or even nullify the effect of these characteristics. The individuals analyzed are experienced e-shoppers i.e. individuals who often make purchases on the internet. The Technology Acceptance Model (TAM) was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behavior of e-shoppers are based on their own experiences. The information obtained will be tested using questionnaires which were distributed and self-administered to respondent accustomed using internet. The causal model is estimated using structural equation modeling techniques (SEM), followed by tests of the moderating effect of socioeconomic variables on perceptions and online shopping behavior. 
The expected findings of this study indicate that gender moderates neither the influence of previous use of the internet nor the perceptions of e-commerce. In short, gender does not condition the behavior of the experienced e-shopper.
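The moderation test described above can be illustrated outside a full SEM package by adding a gender × perception interaction term to an ordinary least-squares model: a near-zero interaction coefficient means the slope of the perception effect is the same for both genders, i.e. no moderation. The sketch below uses hypothetical variable names and toy data, not the study's questionnaire items:

```python
# Minimal moderation-test sketch (hypothetical data; the paper itself
# uses multi-group SEM). Gender is tested as a moderator by adding a
# gender x perceived-usefulness interaction term to an OLS regression.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy sample: columns are [intercept, perceived_usefulness, gender, interaction]
rows = [(3.0, 0), (4.0, 0), (5.0, 0), (3.0, 1), (4.0, 1), (5.0, 1)]
X = [[1.0, pu, g, pu * g] for pu, g in rows]
y = [3.1, 4.0, 5.2, 3.0, 4.1, 5.1]            # intention to shop online
beta = ols(X, y)
print("interaction coefficient:", round(beta[3], 3))
```

With this toy data, both gender groups share the same slope, so the interaction coefficient comes out at zero, mirroring the expected "no moderation" finding.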

Keywords: internet shopping, age groups, gender, income, electronic commerce

Procedia PDF Downloads 329
1046 Monitoring and Management of Aquatic Macroinvertebrates for Determining the Level of Water Pollution in the Catchment Basin of Debed River, Armenia

Authors: Inga Badasyan

Abstract:

Every year we monitor water pollution in the catchment basin of the Debed River; the Ministry of Nature Protection then runs a modeling programme, and finally we manage the impact of water pollution in the Debed River. Ecosystem technology efficiency was estimated based on physical, chemical, and macrobiological analyses of water performed on a regular basis between 2012 and 2015. Algae community composition was determined to assess the ecological status of the Debed River, while vegetation was determined to assess biodiversity. Recently, experts have been discussing global warming, which has a negative impact on surface water, freshwater, etc. As we know, global warming is caused by the current high levels of carbon dioxide. Geochemical modelling plays an increasingly important role in various areas of the hydro sciences and earth sciences. Geochemical modelling of highly concentrated aqueous solutions is an important topic in the study of many environments such as evaporation ponds, groundwater and soils in arid and semi-arid zones, coastal aquifers, etc. Sampling time is important for benthic macroinvertebrates; for that reason we sampled in spring (abundant river flow, the beginning of the vegetation season) and autumn (scarce river flow). Macroinvertebrates are good indicators of chronic pollution in aquatic ecosystems. Results of our earlier investigations in the Debed River reservoirs clearly show that the management of reservoir ecosystems is a topical problem. The research results can be applied to water-quality monitoring in rivers and allow us to rate changes and to predict possible future changes in the nature of the lake.

Keywords: ecohydrological monitoring, flood risk management, global warming, aquatic macroinvertebrates

Procedia PDF Downloads 281
1045 Factors That Influence Choice of Walking Mode in Work Trips: Case Study of Rasht, Iran

Authors: Nima Safaei, Arezoo Masoud, Babak Safaei

Abstract:

In recent years, there has been a growing emphasis on the role of urban planning in walkability and on the effects of individual and socioeconomic factors on the physical activity levels of city dwellers. Although a considerable number of studies have been conducted on walkability and on identifying the factors affecting walking mode choice in developed countries, to the best of our knowledge, the literature lacks studies of the factors affecting the choice of walking mode in developing countries. Due to the high importance of the health of human societies, and in order to create insights and incentives for reducing traffic during rush hours, many researchers and policy makers in the field of transportation planning have devoted much attention to walkability studies; they have tried to improve the factors influencing the choice of walking mode in city neighborhoods. In this study, walkability factors that have proven to have a significant impact on the choice of walking mode are studied simultaneously in work trips. The data for the study were collected from employees at their workplaces by well-instructed surveyors using questionnaires; the statistical population of the study consists of 117 employed people who commuted daily between work and home in the city of Rasht, Iran, at the beginning of spring 2015. Results of the study, obtained through linear regression modeling, show that people who do not have the freedom to choose their living location and need to be present at their workplaces at certain hours have lower levels of walking. Additionally, unlike some previous studies conducted in developed countries, the effects of Body Mass Index (BMI) and the income level of employees do not have a significant effect on walking levels in work trips.
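The non-significance finding for a single predictor such as BMI can be sketched as a simple linear regression with a t-test on the slope: if the t-statistic falls inside the critical range, the predictor has no significant effect. The data below are made up for illustration (the study's 117-respondent survey data are not reproduced here):

```python
# Hypothetical illustration of a slope significance check: simple
# linear regression of daily walking time on BMI, with the slope's
# t-statistic computed from the residual variance.
import math

def slope_t_test(x, y):
    """Return (slope, t-statistic) for y = a + b*x fit by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                              # OLS slope
    a = my - b * mx                            # OLS intercept
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(sse / (n - 2) / sxx)      # standard error of the slope
    return b, b / se_b

bmi     = [19, 22, 24, 27, 30, 33, 21, 26, 29, 35]
walking = [40, 55, 35, 50, 45, 38, 52, 41, 48, 44]  # minutes per day
b, t = slope_t_test(bmi, walking)
print(f"slope = {b:.3f}, t = {t:.2f}")
```

With n = 10 the two-sided 5% critical value is about 2.31 (t-distribution, 8 degrees of freedom); the toy data yield |t| well below that, so, as in the study's finding for BMI, the slope would not be judged significant.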

Keywords: BMI, linear regression, transportation, walking, work trips

Procedia PDF Downloads 191