Search results for: statistical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7470

930 Possibilities and Prospects for the Development of the Agricultural Insurance Market (The Example of Georgia)

Authors: Nino Damenia

Abstract:

The agricultural sector plays an important role in the development of Georgia's economy; it contributes to employment and food security. It faces various types of risks that may lead to heavy financial losses. Agricultural insurance is one of the means of combating agricultural risks. The paper discusses the agricultural insurance experience of countries (European countries and the USA) that have successfully implemented agricultural insurance programs. Analysis of international cases shows that a well-designed and implemented agro-insurance system can bring significant benefits to farmers, insurance companies and the economy as a whole. Against this background, the Government of Georgia recognized the importance of agro-insurance and took important steps for its development. In 2014, in cooperation with insurance companies, an agro-insurance program was introduced, the purpose of which is to increase the availability of insurance for farmers and stimulate the agro-insurance market. Despite such a step forward, challenges remain, such as farmers' awareness, insufficient infrastructure for data collection and risk assessment, the involvement of insurance companies and other important factors. With the support of the government and stakeholders, it is possible to overcome the existing challenges and establish a strong and effective agro-insurance system. Objectives. The purpose of the research is to analyze the development trends of the agricultural insurance market, to identify the main factors affecting its growth, and to develop recommendations on development prospects for Georgia. Methodology. The research uses mixed methods, which combine qualitative and quantitative research techniques. The qualitative part includes the study of the literature of Georgian and foreign economists, which allows us to become acquainted with the challenges, opportunities, and legislative and regulatory frameworks of agricultural insurance. The quantitative analysis involves collecting data from stakeholders and then analyzing it. The paper also uses the methods of synthesis, comparison and statistical analysis of the agricultural insurance market in Georgia, Europe and the USA. Conclusions. The main results of the research are the following: the insurance market has been analyzed and its main functions identified; the essence, features and functions of agricultural insurance are analyzed; the European and US agricultural insurance markets are researched; the stages of formation and development of the agricultural insurance market of Georgia are studied and its importance for the agricultural sector of Georgia is determined; and the role of the state in the development of agro-insurance is analyzed, with development prospects established based on the study of current trends in Georgia's agro-insurance market.

Keywords: agricultural insurance, agriculture, agricultural insurance program, risk

Procedia PDF Downloads 38
929 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip

Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar

Abstract:

Advances in lab-on-a-chip (LOC) devices have led to significant advances in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has been proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied for a wide range of biological and environmental applications. A popular form of DEP devices is the continuous manipulation of particles by using co-planar slanted electrodes, which utilizes a sheath flow to focus the particles into one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles is dependent on multiple parameters including the geometry of the electrodes, the width, length and height of the microchannel, the size of the particles and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies are used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled by COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles, and how the geometry of the electrodes (width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were found, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate, geometrical parameters of the microchannel such as length, width, and height as well as the electrodes’ angle on the displacement of 5 um, 10 um and 15 um polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that the displacement of the particles is more for longer and lower height channels, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three size separation of particles.
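For orientation, the time-averaged DEP force on a spherical particle in the point-dipole approximation is commonly written as F_DEP = 2πε_m r³ Re[K(ω)] ∇|E_rms|², where K(ω) is the Clausius-Mossotti factor; the cubic dependence on radius is consistent with the larger displacements reported for the bigger particles. The following sketch is not the authors' COMSOL model: it simply evaluates this textbook expression for polystyrene beads in an aqueous medium, with the material properties and the field-gradient magnitude assumed for illustration.

```python
import numpy as np

EPS0 = 8.854e-12                          # vacuum permittivity, F/m

def cm_factor(eps_p, sig_p, eps_m, sig_m, freq):
    """Real part of the Clausius-Mossotti factor at the given field frequency."""
    w = 2 * np.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w    # complex permittivity, particle
    em = eps_m * EPS0 - 1j * sig_m / w    # complex permittivity, medium
    return ((ep - em) / (ep + 2 * em)).real

def dep_force(radius, re_k, eps_m, grad_E2):
    """Time-averaged DEP force magnitude, N (point-dipole approximation)."""
    return 2 * np.pi * eps_m * EPS0 * radius**3 * re_k * grad_E2

# assumed values: polystyrene beads in a dilute aqueous buffer, 1 MHz field
re_k = cm_factor(eps_p=2.55, sig_p=1e-3, eps_m=78.0, sig_m=2e-3, freq=1e6)
grad_E2 = 1e13                            # hypothetical |grad(E_rms^2)|, V^2/m^3
for r in (2.5e-6, 5.0e-6, 7.5e-6):        # radii of 5, 10 and 15 um diameter beads
    F = dep_force(r, re_k, eps_m=78.0, grad_E2=grad_E2)
    print(f"d = {2 * r * 1e6:4.1f} um   Re[K] = {re_k:+.2f}   F_DEP = {F:.2e} N")
```

The negative Re[K] obtained for polystyrene in water reproduces the nDEP (repulsive) deflection described above, and the r³ scaling shows why the 15 µm particles travel farthest laterally.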

Keywords: COMSOL Multiphysics, Dielectrophoresis, Microfluidics, Particle separation

Procedia PDF Downloads 164
928 Processes and Application of Casting Simulation and Its Software’s

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own requirements and strengths; MAGMA, for example, is best suited to crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient process-design tool that is the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 464
927 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Building in Pristina, Kosovo

Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi

Abstract:

The territory of Kosovo lies within one of the most seismic-prone regions in Europe. Earthquakes are therefore not rare in Kosovo, and when they have occurred, the consequences have been rather destructive. The importance of assessing the seismic resistance of existing masonry structures has drawn strong and growing interest in recent years. Engineering assessments of vulnerability, building loss and risk are also of particular interest. This is due to the fact that this rapidly developing field is related to the great impact of earthquakes on socioeconomic life in seismic-prone areas, such as Kosovo and Pristina. Such a paper for the city of Pristina may serve as a real basis for possible interventions in historic buildings such as museums, mosques and old residential buildings, in order to adequately strengthen and/or repair them and reduce the seismic risk to within acceptable limits. The procedures for the vulnerability assessment of building structures concentrate on the structural system, capacity, layout shape and response parameters. These parameters indicate the expected performance of very important existing building structures in terms of vulnerability and overall behavior during earthquake excitations. The structural systems of existing historical buildings in Pristina, Kosovo, are dominantly unreinforced brick or stone masonry with very high risk potential from the expected earthquakes in the region. Therefore, statistical analysis based on the observed damage (deformations, cracks, deflections) and critical building elements would provide more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary evaluation methodology for assessing the seismic vulnerability of the respective structures. One of the main objectives is also to identify the buildings that are highly vulnerable to damage caused by inadequate seismic performance. Hence, the damage scores obtained from the derived vulnerability functions will be used to categorize the evaluated buildings as "stable", "intermediate", and "unstable". The vulnerability functions are generated based on the basic damage-inducing parameters, namely number of stories (S), lateral stiffness (LS), capacity curve of the total building structure (CCBS), interstory drift (IS) and overhang ratio (OR).
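The categorization step described above can be pictured as a weighted scoring rule over the damage-inducing parameters. The sketch below is purely hypothetical: the weights, normalizations and thresholds are invented for illustration and are not the vulnerability functions derived in the paper.

```python
# Illustrative only: hypothetical weights and thresholds, not the paper's derived functions.
def damage_score(stories, lateral_stiffness, capacity_ratio, drift, overhang_ratio):
    """Combine normalized damage-inducing parameters into a single score in [0, 1]."""
    return (0.20 * min(stories / 6.0, 1.0)          # more stories -> more vulnerable
            + 0.25 * (1.0 - lateral_stiffness)      # normalized 0..1, 1 = very stiff
            + 0.25 * (1.0 - capacity_ratio)         # capacity/demand ratio, capped at 1
            + 0.20 * min(drift / 0.01, 1.0)         # interstory drift ratio
            + 0.10 * min(overhang_ratio / 0.3, 1.0))

def category(score):
    return "stable" if score < 0.35 else "intermediate" if score < 0.65 else "unstable"

s = damage_score(stories=3, lateral_stiffness=0.4, capacity_ratio=0.6,
                 drift=0.006, overhang_ratio=0.15)
print(round(s, 2), category(s))
```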

Keywords: vulnerability, ductility, seismic microzone, energy efficiency

Procedia PDF Downloads 388
926 A Construction Management Tool: Determining a Project Schedule Typical Behaviors Using Cluster Analysis

Authors: Natalia Rudeli, Elisabeth Viles, Adrian Santilli

Abstract:

Delays in the construction industry are a global phenomenon. Many construction projects experience extensive delays exceeding the initially estimated completion time. The main purpose of this study is to identify construction projects typical behaviors in order to develop a prognosis and management tool. Being able to know a construction projects schedule tendency will enable evidence-based decision-making to allow resolutions to be made before delays occur. This study presents an innovative approach that uses Cluster Analysis Method to support predictions during Earned Value Analyses. A clustering analysis was used to predict future scheduling, Earned Value Management (EVM), and Earned Schedule (ES) principal Indexes behaviors in construction projects. The analysis was made using a database with 90 different construction projects. It was validated with additional data extracted from literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected and the EVM and ES principal indexes were calculated. A complete linkage classification method was used. In this way, the cluster analysis made considers that the distance (or similarity) between two clusters must be measured by its most disparate elements, i.e. that the distance is given by the maximum span among its components. Finally, through the use of EVM and ES Indexes and Tukey and Fisher Pairwise Comparisons, the statistical dissimilarity was verified and four clusters were obtained. It can be said that construction projects show an average delay of 35% of its planned completion time. Furthermore, four typical behaviors were found and for each of the obtained clusters, the interim milestones and the necessary rhythms of construction were identified. In general, detected typical behaviors are: (1) Projects that perform a 5% of work advance in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), being able to finish on the initially estimated time. (2) Projects that start with an adequate construction rate but suffer minor delays culminating with a total delay of almost 27% of the planned time. (3) Projects which start with a performance below the planned rate and end up with an average delay of 64%, and (4) projects that begin with a poor performance, suffer great delays and end up with an average delay of a 120% of the planned completion time. The obtained clusters compose a tool to identify the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.
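A minimal sketch of the clustering step is given below, assuming each project is represented by its cumulative progress sampled at the ten tenths of its planned duration (an assumed feature layout; the study's own variables are the EVM/ES indexes). SciPy's complete-linkage method implements exactly the "most disparate elements" distance described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# assumed representation: each project as cumulative %-complete at tenths of planned time
n_projects, n_tenths = 90, 10
base = np.linspace(0.1, 1.0, n_tenths)
curves = np.clip(base[None, :] ** rng.uniform(0.7, 2.5, size=(n_projects, 1))
                 + rng.normal(0, 0.03, size=(n_projects, n_tenths)), 0, None)

# complete linkage: inter-cluster distance = distance between the most disparate members
Z = linkage(curves, method="complete", metric="euclidean")
labels = fcluster(Z, t=4, criterion="maxclust")      # cut the dendrogram into four clusters

for k in range(1, 5):
    mean_final = curves[labels == k, -1].mean()
    print(f"cluster {k}: {np.sum(labels == k)} projects, "
          f"mean progress at planned finish = {mean_final:.2f}")
```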

Keywords: cluster analysis, construction management, earned value, schedule

Procedia PDF Downloads 246
925 New Recombinant Netrin-A Protein of Lucilia Sericata Larvae by Bac-to-Bac Expression Vector System in Sf9 Insect Cells

Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard

Abstract:

Background: Maggot debridement therapy is an appropriate, effective, and controlled method using sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is an enzyme of the laminin family that is secreted from the salivary glands of L. sericata and plays a central role in neural regeneration and angiogenesis. This study aimed at the production of a new recombinant Netrin-A protein of L. sericata larvae using the baculovirus expression vector system (BEVS) in Sf9 cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was then purified on Ni-NTA agarose. The protein was evaluated using SDS-PAGE and western blotting. Finally, its concentration was calculated with the Bradford assay method. Results: The bacmid construct carrying Netrin-A was successfully generated and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein was 52 kDa, with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the recombinant Netrin-A protein was calculated to be 48.8 μg/mL. It was confirmed that the gene characterized in our previous study encodes the L. sericata Netrin-A enzyme. Conclusions: The recombinant Netrin-A, a protein secreted in the salivary glands of L. sericata, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.

Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9

Procedia PDF Downloads 75
924 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At macro level, economic effects are usually studied based on one of the two mainstream approach, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approach in the literature. Furthermore, it is also not clear on which criteria the used approach was selected. Our aim is to bring the attention to the difference in the scope of effects that are encompassed by the two most frequent methodological approaches to valuation of economic effects of cultural heritage on macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage since there is a risk that the direct effects are overestimated and double counted, but not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary. Consequently, a direct comparison of the estimated impacts is not possible and should not be done due to the different scope. To illustrate the difference of the impact assessment of the cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
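As a small worked example of the multiplier arithmetic reported above (0.14 euros of indirect and 0.44 euros of induced effects per euro of direct output in the cultural heritage sector), the snippet below adds up direct, indirect and induced effects; the figures come from the abstract, while the function and variable names are ours.

```python
def total_impact(direct_output, indirect_mult=0.14, induced_mult=0.44):
    """Type II output multiplier logic: total = direct + indirect + induced effects."""
    indirect = direct_output * indirect_mult
    induced = direct_output * induced_mult
    return direct_output + indirect + induced

direct = 1_000_000  # EUR of direct output in the cultural heritage sector
print(f"total output effect: {total_impact(direct):,.0f} EUR "
      f"(multiplier = {total_impact(1.0):.2f})")
```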

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 61
923 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outlier and abnormality from normal observations. In the previous research, several attempts were made to extend the scope of application of the one-class classification techniques to statistical process control problems. For most previous approaches, such as support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in Phase I process. Because of the limitation of the one-class classification techniques based on convex optimization, we cannot make the proportion of abnormal observations exactly equal to expected Type I error rate: controlling Type I error rate requires to optimize constraints with integer decision variables, but convex optimization cannot satisfy the requirement. This limitation would be undesirable in theoretical and practical perspective to construct effective control charts. In this work, to address the limitation of previous approaches, we propose the one-class classification algorithm based on the mixed integer programming technique, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the number of normal data to be equal to a constant value specified by users. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using some kernel functions. New multivariate control chart applying the effectiveness of the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree of being an abnormal point as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display.
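The formulation described, minimizing the squared radius of a spherically shaped boundary while forcing a user-specified number of points inside via binary indicators and a big-M constant, can be written as a convex mixed-integer quadratically constrained program. The sketch below assumes access to a MIP solver through gurobipy and uses a linear kernel and synthetic data; it illustrates the constraint structure rather than reproducing the authors' kernelized implementation.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                   # stand-in for Phase I (in-control) data
n, d = X.shape
target_inside = int(np.ceil(0.95 * n))         # user-specified count of points inside the boundary
M = 4.0 * float(np.max(np.sum((X - X.mean(axis=0)) ** 2, axis=1)))  # big-M constant

m = gp.Model("mip_one_class")
a = m.addVars(d, lb=-GRB.INFINITY, name="center")
R2 = m.addVar(lb=0.0, name="radius_sq")
z = m.addVars(n, vtype=GRB.BINARY, name="outside")   # 1 if point i may lie outside

for i in range(n):
    dist_sq = gp.quicksum((float(X[i, j]) - a[j]) * (float(X[i, j]) - a[j]) for j in range(d))
    m.addQConstr(dist_sq <= R2 + M * z[i])           # a point can exceed R2 only if z[i] = 1
m.addConstr(gp.quicksum(z[i] for i in range(n)) <= n - target_inside)

m.setObjective(R2, GRB.MINIMIZE)                     # smallest sphere covering the required count
m.optimize()
print("center:", [round(a[j].X, 3) for j in range(d)], "radius:", round(R2.X ** 0.5, 3))
```

The chart's control limit would then be taken as the optimized radius, and the fraction of Phase I points allowed outside (here 5%) plays the role of the Type I error rate that the authors control exactly.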

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 161
922 Efficacy and Safety of Updated Target Therapies for Treatment of Platinum-Resistant Recurrent Ovarian Cancer

Authors: John Hang Leung, Shyh-Yau Wang, Hei-Tung Yip, Fion, Ho Tsung-chin, Agnes LF Chan

Abstract:

Objectives: Platinum-resistant ovarian cancer has a short overall survival of 9–12 months and limited treatment options. The combination of immunotherapy and targeted therapy appears to be a promising treatment option for patients with ovarian cancer, particularly for patients with platinum-resistant recurrent ovarian cancer (PRrOC). However, there are no direct head-to-head clinical trials comparing their efficacy and toxicity. We therefore used a network meta-analysis to directly and indirectly compare seven newer immunotherapies or targeted therapies combined with chemotherapy in platinum-resistant relapsed ovarian cancer, including antibody-drug conjugates, PD-1 (programmed death-1) and PD-L1 (programmed death-ligand 1) inhibitors, PARP (poly ADP-ribose polymerase) inhibitors, TKIs (tyrosine kinase inhibitors), and antiangiogenic agents. Methods: We searched PubMed (Public/Publisher MEDLINE), EMBASE (Excerpta Medica Database), and the Cochrane Library electronic databases for phase II and III trials involving PRrOC patients treated with immunotherapy or targeted therapy plus chemotherapy. The quality of included trials was assessed using the GRADE method. The primary outcome compared was progression-free survival; the secondary outcomes were overall survival and safety. Results: Seven randomized controlled trials involving a total of 2058 PRrOC patients were included in this analysis. Bevacizumab plus chemotherapy showed statistically significant differences in PFS (progression-free survival), but not OS (overall survival), compared with all targeted and immunotherapy regimens of interest; however, according to the heatmap analysis, bevacizumab plus chemotherapy carried a statistically significant risk of ≥ grade 3 SAEs (severe adverse effects), particularly hematological severe adverse events (neutropenia, anemia, leukopenia, and thrombocytopenia). Conclusions: Bevacizumab plus chemotherapy resulted in better PFS compared with all other regimens of interest for the treatment of PRrOC; however, it is associated with a greater risk of hematological SAEs.

Keywords: platinum-resistant recurrent ovarian cancer, network meta-analysis, immune checkpoint inhibitors, target therapy, antiangiogenic agents

Procedia PDF Downloads 61
921 Analysis of Urban Flooding in Wazirabad Catchment of Kabul City with Help of Geo-SWMM

Authors: Fazli Rahim Shinwari, Ulrich Dittmer

Abstract:

Like many megacities around the world, Kabul is facing severe problems due to the rising frequency of urban flooding. Since 2001, Kabul has been experiencing rapid population growth because of the repatriation of refugees and internal migration. Due to unplanned development, green areas inside the city and hilly areas within and around it have been converted into new housing towns, which has increased runoff. Trenches along the roadside comprise the unplanned drainage network of the city, which drains the combined sewer flow. In the rainy season, overflow occurs, and after the streets become dry, dust particles contaminate the air, which is a major cause of air pollution in Kabul city. In this study, a stormwater management model is introduced as a basis for a systematic approach to urban drainage planning in Kabul. For this purpose, Kabul city is delineated into 8 watersheds with the help of a one-meter-resolution LiDAR DEM. A stormwater management model was developed for the Wazirabad catchment by using available data and literature values. Due to a lack of long-term meteorological data, the model was run only with hourly rainfall data of a rain event that occurred in April 2016. The rain event from 1 to 3 April, with a maximum intensity of 3 mm/hr, caused severe flooding in the Wazirabad catchment of Kabul city. The model estimated flooding at some points of the catchment; as actual measurement of flooding was not possible, the results were compared with information obtained from local people, Kabul Municipality and the Capital Region Independent Development Authority. The model helped to identify areas where flooding occurred because of insufficient capacity of the drainage system and areas where the main reason for flooding is blockage of the drainage canals. The model was then used for further analysis to find a sustainable solution to the problem. The option of constructing new canals was analyzed, and two new canals were proposed that would reduce the flooding frequency in the Wazirabad catchment of Kabul city. By developing a methodology to build a stormwater management model from digital data and information, the study fulfilled its primary objective, and a similar methodology can be used for other catchments of Kabul city to prepare emergency and long-term plans for the city's drainage system.
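Once a SWMM input file for the catchment has been assembled, node flooding of the kind described can be tallied programmatically. The sketch below assumes the pyswmm wrapper and a hypothetical input file name ("wazirabad.inp"); it simply records the peak flooding rate seen at each junction over the simulated event.

```python
from pyswmm import Simulation, Nodes   # assumes pyswmm is installed

peak_flooding = {}
with Simulation("wazirabad.inp") as sim:         # hypothetical .inp built for the catchment
    nodes = Nodes(sim)
    for _ in sim:                                # step through the April 2016 storm event
        for node in nodes:
            rate = node.flooding                 # current overflow rate at this junction
            if rate > peak_flooding.get(node.nodeid, 0.0):
                peak_flooding[node.nodeid] = rate

worst = sorted(peak_flooding.items(), key=lambda kv: kv[1], reverse=True)[:10]
for node_id, rate in worst:                      # junctions most in need of added capacity
    print(f"{node_id}: peak flooding rate {rate:.2f}")
```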

Keywords: urban hydrology, storm water management, modeling, SWMM, GEO-SWMM, GIS, identification of flood vulnerable areas, urban flooding analysis, sustainable urban drainage

Procedia PDF Downloads 135
920 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System

Authors: Tomilola J. Ajayi

Abstract:

A category of nanovalves system containing the α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is theoretically and computationally modelled. This functions to control opening and blocking of the MSN pores for efficient targeted drug release system. Modeling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) in relation to pH variation. Conformational analysis was carried out prior to the formation of the inclusion complex, to find the global minimum of both neutral and protonated stalk. B3LYP/6-311G**(d, p) basis set was employed to attain all theoretically possible conformers of the stalk. Six conformers were taken into considerations, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at B3LYP/6-311G**(d, p) level of theory. The most stable conformer obtained from conformational analysis was used as the starting structure to create the inclusion complexes. 9 complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å stepwise while keeping the distance between dummy atom and OMe oxygen atom on the stalk restricted. The dummy atom and the carbon atoms on α-CD structure were equally restricted for orientation A (see Scheme 1). The generated structures at each step were optimized with B3LYP/6-311G**(d, p) methods to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to unsatisfactory host-guest interaction in the nanogate; hence there is dethreading. High required interaction energy and conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5-7 which agreed with reported experimental results. In this study, we applied the theoretical model for the prediction of the experimentally observed pH-responsive nanovalves which enables blocking, and opening of mesoporous silica nanoparticles pores for targeted drug release system. Our results show that two major factors are responsible for the cargo release at acidic pH. The higher interaction energy needed for the complex/nanovalve formation to exist after protonation as well as conformational change upon protonation are driving the release due to slight pH change from 5 to 7.

Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo

Procedia PDF Downloads 106
919 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer

Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu

Abstract:

The target of this paper is to investigate how saturated poroelastic soils subject to freezing temperatures behave and how different boundary conditions can intervene and affect the thermo-hydro-mechanical (THM) responses, based on a particular but classical configuration of a finite homogeneous soil layer studied by Terzaghi. The essential relations on the constitutive behavior of a freezing soil are firstly recalled: ice crystal - liquid water thermodynamic equilibrium, hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in general, non-linear case where material parameters are state-dependent. The system of equations is firstly linearized, assuming all material parameters to be constants, particularly the permeability of liquid water, which should depend on the ice content. Two analytical solutions solved by the classic Laplace transform are then developed, accounting for two different sets of boundary conditions. Afterward, the general non-linear equations with state-dependent parameters are solved using a commercial code COMSOL based on finite elements method to obtain numerical results. The validity of this numerical modeling is partially verified using the analytical solution in the limiting case of state-independent parameters. Comparison between the results given by the linearized analytical solutions and the non-linear numerical model reveals that the above-mentioned linear computation will always underestimate the liquid pore pressure and displacement, whatever the hydraulic boundary conditions are. In the nonlinear model, the faster growth of ice crystals, accompanying the subsequent reduction of permeability of freezing soil layer, makes a longer duration for the depressurization of water liquid and slower settlement in the case where the ground surface is swiftly covered by a thin layer of ice, as well as a bigger global liquid pressure and swelling in the case of the impermeable ground surface. Nonetheless, the analytical solutions based on linearized equations give a correct order-of-magnitude estimate, especially at moderate temperature variations, and remain a useful tool for preliminary design checks.
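Where closed-form inversion of the Laplace-domain solution becomes unwieldy, a numerical inversion routine can evaluate it in the time domain instead. The snippet below only demonstrates such a tool (mpmath's Stehfest-based invertlaplace) on a known transform pair as a sanity check; applying it to the layer solution would mean substituting the corresponding transformed pressure or displacement expression.

```python
import mpmath as mp

mp.mp.dps = 30   # working precision for the inversion

# Known pair used as a check: F(s) = 1/(s*(s+1))  <->  f(t) = 1 - exp(-t)
F = lambda s: 1 / (s * (s + 1))

for t in (0.1, 1.0, 5.0):
    numeric = mp.invertlaplace(F, t, method="stehfest")
    exact = 1 - mp.exp(-t)
    print(f"t = {t:>4}: inverted = {float(numeric):.6f}, exact = {float(exact):.6f}")
```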

Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium

Procedia PDF Downloads 65
918 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models are dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized method of moments (GMM) estimator which is an extension of the Arellano-Bond model where past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano–Bover/Blundell–Bond estimator augments Arellano–Bond by making an additional assumption that first differences of instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations—the original equation and the transformed one—and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano–Bover/Blundell–Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano–Bover/Blundell–Bond estimator is suitable for this analysis for a number of reasons: It is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data of 14 Finnish cities over 1988-2012 differences of short-run housing price dynamics estimates were considerable when different models and instrumenting were used. Especially, the use of different instrumental variables caused variation of model estimates together with their statistical significance. This was particularly clear when comparing estimates of OLS with different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with theory of housing price dynamics.
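The endogeneity problem that motivates these estimators is easy to reproduce in a few lines of simulation: with a lagged dependent variable and fixed effects, pooled OLS overstates the autoregressive coefficient and the within estimator understates it when the time dimension is modest. The sketch below uses assumed parameter values (14 panels, 25 periods, true coefficient 0.6) purely to illustrate that bias; it is not an implementation of the Arellano-Bover/Blundell-Bond estimator itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 14, 25, 0.6              # assumed: 14 cities, 25 annual observations, true AR(1) coefficient

mu = rng.normal(size=N)              # city fixed effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(size=N)

# pooled OLS of y_it on y_i,t-1: the lagged value is correlated with mu -> upward bias
x, yy = y[:, :-1].ravel(), y[:, 1:].ravel()
xc, yc = x - x.mean(), yy - yy.mean()
beta_ols = (xc @ yc) / (xc @ xc)

# within (fixed-effects) estimator: demeaning induces the Nickell bias -> downward for modest T
xd = y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)
yd = y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)
beta_fe = np.sum(xd * yd) / np.sum(xd * xd)

print(f"true rho = {rho}, pooled OLS = {beta_ols:.3f}, within = {beta_fe:.3f}")
```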

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1457
917 IFN-γ and IL-2 Assess the Therapeutic Response in Anti-Tuberculosis Patients at Jamot Hospital Yaounde, Cameroon

Authors: Alexandra Emmanuelle Membangbi, Jacky Njiki Bikoï, Esther Del-florence Moni Ndedi, Marie Joseph Nkodo Mindimi, Donatien Serge Mbaga, Elsa Nguiffo Makue, André Chris Mikangue Mbongue, Martha Mesembe, George Ikomey Mondinde, Eric Walter Perfura-yone, Sara Honorine Riwom Essama

Abstract:

Background: Tuberculosis (TB) is one of the most lethal infectious diseases worldwide. In recent years, interferon-γ (IFN-γ) release assays (IGRAs) have been established as routine tests for diagnosing TB infection. However, assessment of the IFN-γ produced fails to distinguish active TB (ATB) from latent TB infection (LTBI), especially in TB epidemic areas. In addition to IFN-γ, interleukin-2 (IL-2), another cytokine secreted by activated T cells, is also involved in the immune response against Mycobacterium tuberculosis. The aim of the study was to assess the capacity of IFN-γ and IL-2 to evaluate the therapeutic response of patients on anti-tuberculosis treatment. Material and Methods: We conducted a cross-sectional study in the Pneumology Departments of the Jamot Hospital in Yaoundé between May and August 2021. After informed consent was signed, sociodemographic data were collected and 5 mL of blood was drawn from the crook of the elbow of each participant. Sixty-one subjects were selected (n = 61) and divided into 4 groups as follows: group 1: resistant tuberculosis (n = 13), group 2: active tuberculosis (n = 19), group 3: cured tuberculosis (n = 16), and group 4: presumed healthy persons (n = 13). The cytokines of interest were determined using an indirect enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's recommendations. P-values < 0.05 were interpreted as statistically significant. All statistical calculations were performed using SPSS version 22.0. Results: The results showed that men were more frequently infected, 14/61 (31.8%), with a high presence in the active and resistant TB groups. The mean age was 41.3±13.1 years (95% CI = [38.2-44.7]); the age group with the highest infection rate was 31 to 40 years. The mean IL-2 and IFN-γ levels were, respectively, 327.6±160.6 pg/mL and 26.6±13.0 pg/mL in active tuberculosis patients, 251.1±30.9 pg/mL and 21.4±9.2 pg/mL in patients with resistant tuberculosis, 149.3±93.3 pg/mL and 17.9±9.4 pg/mL in cured patients, and 15.1±8.4 pg/mL and 5.3±2.6 pg/mL in participants presumed healthy (p < 0.0001). Significant differences in IFN-γ and IL-2 levels were observed between the different groups. Conclusion: Monitoring the serum levels of IFN-γ and IL-2 would be useful to evaluate the therapeutic response of anti-tuberculosis patients, particularly when the two cytokines are considered together, which could improve the accuracy of routine examinations.

Keywords: antibiotic therapy, interferon gamma, interleukin 2, tuberculosis

Procedia PDF Downloads 94
916 Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz

Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas

Abstract:

This paper aims to evaluate the main role of mangroves forests on the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigation of hydrodynamic conditions of KS is vital to predict and estimate sedimentation and erosion all over the protected areas north of Qeshm Island. KS (or Tang-e-Khuran) is located between Qeshm Island and the Iranian mother land and has a minimum width of approximately two kilometers. Hydrodynamics of the strait is dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated as 1) strong currents do exist in the area which lead to seemingly sand dune movements in the middle and southern parts of the strait, and 2) existence a vast area with mangrove coverage next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable for high accuracy estimations of current fields in the KS. A comprehensive set of measurements were carried out with several components, to investigate the hydrodynamics and morpho-dynamics of the study area, including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. Additionally, a set of periodic hydrographic surveys was included in the program. The numerical simulation was carried out by using Delft3D – Flow Module. Model calibration was done by comparing water levels and depth averaged velocity of currents against available observational data. The results clearly indicate that observed data and simulations only fit together if a realistic perspective of the mangrove area is well captured by the model bathymetry data. Generating unstructured grid by using RGFGRID and QUICKIN, the flow model was driven with water level time-series at open boundaries. Adopting the available field data, the key role of mangrove area on the hydrodynamics of the study area can be studied. The results show that including the accurate geometry of the mangrove area and consideration of its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.

Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D

Procedia PDF Downloads 180
915 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. Over a period of 2 months, all measurement data (> 200 measurements/ variant) was collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations for almost 100 characteristics were proven for an average of 125 characteristics for 4 different products. A further 10 features correlate via indirect relationships so that the number of features required for a prediction could be reduced to less than 20. A correlation factor >0.8 was assumed for all correlations.
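The reduction step can be sketched in a few lines: compute the absolute correlation matrix over the measured features and drop one member of every pair whose correlation exceeds the 0.8 threshold used in the study. Synthetic data stands in for the confidential CMM measurements, and the column names are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# stand-in for ~200 CMM runs x 130 geometric features of one rail variant
base = rng.normal(size=(200, 30))                     # 30 independent "drivers"
mix = rng.normal(size=(30, 130))
mix[rng.random(mix.shape) < 0.7] = 0                  # sparse mixing -> many correlated features
df = pd.DataFrame(base @ mix + 0.1 * rng.normal(size=(200, 130)),
                  columns=[f"feat_{i:03d}" for i in range(130)])

corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # look at each pair once
redundant = [c for c in upper.columns if (upper[c] > 0.8).any()]   # threshold from the study
reduced = df.drop(columns=redundant)
print(f"kept {reduced.shape[1]} of {df.shape[1]} features")
```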

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis

Procedia PDF Downloads 18
914 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) entails the toxic effects imparted by various chemicals on the brain during the early childhood period. As human brains are vulnerable during this period, various chemicals would have their maximum effects on brains during early childhood. Some toxicants have been confirmed to induce developmental toxic effects on CNS e.g. lead, however; most of the agents cannot be identified with certainty due the defective nature of predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the use of 3D neurospheres system. This in-vitro system can recapitulate most of the changes during the period of brain development making it an ideal model for predicting neurotoxic effects. In the present study, we verified the possible DNT of epoxomicin which is a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was, then, transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank’s balanced salt solution cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1µM and 10µM) were used in cultured neuropsheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium. After 0, 4, 5, 11, and 14 days, sphere size was determined by software analyses. The diameter of each neurosphere was measured and exported to excel file further to statistical analysis. For viability analysis, trypsin-EDTA solution were added to neurospheres for 3 min to dissociate them into single cells suspension, then viability evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect proliferation and viability of neuropsheres, these effects were positively correlated to doses and progress of time. This study confirms the DNT effects of epoxomicin on 3D neurospheres model. The effects on proliferation suggest possible gross morphologic changes while the decrease in viability propose possible focal lesion on exposure to epoxomicin during early childhood.

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 406
913 Obstacles and Ways-Forward to Upgrading Nigeria Basic Nursing Schools: A Survey of Perception of Teaching Hospitals’ Nurse Trainers and Stakeholders

Authors: Chijioke Oliver Nwodoh, Jonah Ikechukwu Eze, Loretta Chika Ukwuaba, Ifeoma Ndubuisi, Ada Carol Nwaneri, Ijeoma Lewechi Okoronkwo

Abstract:

Presence of nursing workforce with unequal qualification and status in Nigeria has undermined the growth of nursing profession in the country. Upgrading of the existing basic and post-basic nursing schools to degree-awarding institutions in Nigeria is a way-forward to solving this inequality problem and Nigeria teaching hospitals are in vantage position for this project due to the already existing supportive structure and manpower in those hospitals. What the nurse trainers and the stakeholders of the teaching hospitals may hold for or against the upgrading is a determining factor for the upgrading project, but that is not clear and has not been investigated in Nigeria. The study investigated the perception of nurse trainers and stakeholders of teaching hospitals in Enugu State of Nigeria on the obstacles and ways-forward to upgrading nursing schools to degree-awarding institutions in Nigeria. The study specifically elicited what the subjects may view as obstacles to upgrading basic and post-basic nursing schools to degree-awarding institutions in Nigeria and ascertained their suggestions on the possible ways of overcoming the obstacles. By utilizing cross-sectional descriptive design and a purposive sampling procedure, 78 accessible subjects out of a total population of 87 were used for the study. The generated data from the subjects were analyzed using frequencies, percentages and mean for the research questions and Pearson’s chi-square for the hypotheses, with the aid of Statistical Package for Social Sciences Version 20.0. The result showed that lack of extant policy, fund, and disunity among policy makers and stakeholders of nursing profession are the main obstacles to the upgrading. However, the respondents did not see items like: stakeholders and nurse trainers of basic and post-basic schools of nursing; fear of admitting and producing poor quality nurses; and so forth, as obstacles to the upgrading project. Institution of the upgrading policy by Nursing and Midwifery Council of Nigeria, funding, awareness creation for the upgrading and unison among policy makers and stakeholders of nursing profession are the major possible ways to overcome the obstacles. The difference in the subjects’ perceptions between the two hospitals was found to be statistically insignificant (p > 0.05). It is recommended that the policy makers and stakeholders of nursing in Nigeria should unite and liaise with Federal Ministries of Health and Education for modalities and actualization of upgrading nursing schools to degree-awarding institutions in Nigeria.

Keywords: nurse trainers, obstacles, perception, stakeholders, teaching hospital, upgrading basic nursing schools, ways-forward

Procedia PDF Downloads 126
912 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is widely used and an accepted measure of energy expenditure. Its principal determinant is body mass. However, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included into the scope of this study. Their age, height and weight values were recorded. Five groups were designed based on their BMI values. First group (n = 85) was composed of individuals with BMI values varying between 18.5 and 24.9 kg/m2. Those with BMI values varying from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, > 40.0 kg/m2 were included in Group 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations to be compared with the measured BMR values were selected. For this purpose, the values were calculated by the use of four equations to predict BMR values, by name, introduced by Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen and Mifflin. Descriptive statistics, ANOVA, post-Hoc Tukey and Pearson’s correlation tests were performed by a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Mean ± SD of groups 1, 2, 3, 4 and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon evaluation of the comparison of means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values were increasing from Group 2 to Group 5. However, differences between Group 2 and Group 3, Group 3 and Group 4, Group 4 and Group 5 were not statistically significant. These insignificances were lost in predictive equations proposed by Harris and Benedict, FAO/WHO/UNU and Owen. For Mifflin, the insignificance was limited only to Group 4 and Group 5. Upon evaluation of the correlations of measured BMR and the estimated values computed from prediction equations, the lowest correlations between measured BMR and estimated BMR values were observed among the individuals within normal BMI range. The highest correlations were detected in individuals with BMI values varying between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and BMR values calculated by FAO/WHO/UNU as well as Owen were the same and the highest. In all groups, the highest correlations were observed between BMR values calculated from Mifflin and Harris and Benedict equations using age as an additional parameter. In conclusion, the unique resemblance of the FAO/WHO/UNU and Owen equations were pointed out. However, mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggested that FAO/WHO/UNU was the most reliable equation, which may be used in conditions when the measured BMR values are not available.
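For reference, two of the compared prediction equations are easy to state in code. The coefficients below are the commonly cited forms of the Mifflin-St Jeor and original Harris-Benedict equations (weight in kg, height in cm, age in years) and should be checked against the primary sources before any use; the example subject is invented.

```python
def bmr_mifflin(weight_kg, height_cm, age_yr, sex):
    """Mifflin-St Jeor estimate of basal metabolic rate, kcal/day."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + (5.0 if sex == "male" else -161.0)

def bmr_harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Original Harris-Benedict estimate of basal metabolic rate, kcal/day."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# example adult with BMI around 33 kg/m2 (would fall in Group 3 of the study design)
w, h, a = 95.0, 170.0, 40.0
print(f"BMI = {w / (h / 100) ** 2:.1f}")
print(f"Mifflin-St Jeor: {bmr_mifflin(w, h, a, 'female'):.0f} kcal/day")
print(f"Harris-Benedict: {bmr_harris_benedict(w, h, a, 'female'):.0f} kcal/day")
```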

Keywords: adult, basal metabolic rate, FAO/WHO/UNU, obesity, prediction equations

Procedia PDF Downloads 112
911 Antibacterial Effect of Silver Diamine Fluoride Incorporated in Fissure Sealants

Authors: Nélio Veiga, Paula Ferreira, Tiago Correia, Maria J. Correia, Carlos Pereira, Odete Amaral, Ilídio J. Correia

Abstract:

Introduction: The application of fissure sealants is considered to be an important primary prevention method used in dental medicine. However, the formation of microleakage gaps between tooth enamel and the fissure sealant applied is one of the most common reasons of dental caries development in teeth with fissure sealants. The association between various dental biomaterials may limit the major disadvantages and limitations of biomaterials functioning in a complementary manner. The present study consists in the incorporation of a cariostatic agent – silver diamine fluoride (SDF) – in a resin-based fissure sealant followed by the study of release kinetics by spectrophotometry analysis of the association between both biomaterials and assessment of the inhibitory effect on the growth of the reference bacterial strain Streptococcus mutans (S. mutans) in an in vitro study. Materials and Methods: An experimental in vitro study was designed consisting in the entrapment of SDF (Cariestop® 12% and 30%) into a commercially available fissure sealant (Fissurit®), by photopolymerization and photocrosslinking. The same sealant, without SDF was used as a negative control. The effect of the sealants on the growth of S. mutans was determined by the presence of bacterial inhibitory halos in the cultures at the end of the incubation period. In order to confirm the absence of bacteria in the surface of the materials, Scanning Electron Microscopy (SEM) characterization was performed. Also, to analyze the release profile of SDF along time, spectrophotometry technique was applied. Results: The obtained results indicate that the association of SDF to a resin-based fissure sealant may be able to increase the inhibition of S. mutans growth. However, no SDF release was noticed during the in vitro release studies and no statistical significant difference was verified when comparing the inhibitory halo sizes obtained for test and control group.  Conclusions: In this study, the entrapment of SDF in the resin-based fissure sealant did not potentiate the antibacterial effect of the fissure sealant or avoid the immediate development of dental caries. The development of more laboratorial research and, afterwards, long-term clinical data are necessary in order to verify if this association between these biomaterials is effective and can be considered for being used in oral health management. Also, other methodologies for associating cariostatic agents and sealant should be addressed.

Keywords: biomaterial, fissure sealant, primary prevention, silver diamine fluoride

Procedia PDF Downloads 241
910 Chemical Composition of Volatiles Emitted from Ziziphus jujuba Miller Collected during Different Growth Stages

Authors: Rose Vanessa Bandeira Reidel, Bernardo Melai, Pier Luigi Cioni, Luisa Pistelli

Abstract:

Ziziphus jujuba Miller is a common species of the genus Ziziphus (Rhamnaceae family), native to the tropics and subtropics and known for its edible fruits, which are consumed fresh or used in health foods and as a flavoring and sweetener. Many phytochemicals and biological activities have been described for this species. In this work, the aroma profiles emitted in vivo by whole fresh organs (leaf, flower bud, flower, green and red fruits) were analyzed separately by means of solid-phase micro-extraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS). The volatiles emitted from the different plant parts were sampled using a Supelco SPME device coated with polydimethylsiloxane (PDMS, 100 µm). Fresh plant material was introduced separately into a glass conical flask and allowed to equilibrate for 20 min. After the equilibration time, the fibre was exposed to the headspace for 15 min at room temperature; the fibre was then re-inserted into the needle and transferred to the injector of the GC and GC-MS system, where it was desorbed. All the data were submitted to multivariate statistical analysis, which evidenced many differences among the selected plant parts and their developmental stages. A total of 144 compounds were identified, corresponding to 94.6-99.4% of the whole aroma profile of the jujube samples. Sesquiterpene hydrocarbons were the main chemical class of compounds in leaves and were also present in similar percentages in flowers and flower buds, with (E,E)-α-farnesene as the main constituent of all of these plant parts. This behavior may be due to a protection mechanism against pathogens and herbivores as well as resistance to abiotic factors. The aroma of green fruits was characterized by a high amount of perillene, while the red fruits released a volatile blend mainly constituted by different monoterpenes. The terpenoid emission of fleshy fruits has an important function in the interaction with animals, including the attraction of seed dispersers, and is related to good fruit quality. This study provides for the first time the chemical composition of the volatile emissions from different Ziziphus jujuba organs. The SPME analyses of the collected samples showed different patterns of emission and can contribute to understanding their ecological interactions and fruit production management.
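
A hedged sketch of one common multivariate treatment of such SPME-GC-MS data, principal component analysis of the percentage composition per organ; the abstract does not name the exact method used, and the compound table, values and organ labels below are invented for illustration.

```python
# Minimal sketch, assuming rows are organs and columns are relative percentages of volatiles.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

profiles = pd.DataFrame(
    {"(E,E)-alpha-farnesene": [45.0, 40.2, 38.5, 3.1, 1.2],
     "perillene":             [0.5, 0.8, 1.1, 52.0, 4.3],
     "limonene":              [2.1, 3.4, 2.8, 6.5, 30.7]},
    index=["leaf", "flower_bud", "flower", "green_fruit", "red_fruit"],
)

# standardize, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(profiles))
for organ, (pc1, pc2) in zip(profiles.index, scores):
    print(f"{organ:12s} PC1={pc1:6.2f}  PC2={pc2:6.2f}")
```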

Keywords: Rhamnaceae, aroma profile, jujube organs, HS-SPME, GC-MS

Procedia PDF Downloads 235
909 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the erosion factors were characterized, soil losses were quantified, the factors' impacts were measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and the distances to settlements, roads and rivers (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. Harmonized World Soil Data (HWSD), a Shuttle Radar Topographical Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within the Google Earth Engine, the road network and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at purposively selected sites. The factors of soil erosion showed low, moderate and high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C factor, which had a negative coefficient, all other factors were positive, with contributions in the order LS > C > R > P > DRv > K > DS > DRd. Gullies generally range from less than 100 m to about 3 km in length. The average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum depth downstream is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands was recommended.
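
For context, the sketch below shows the standard RUSLE overlay, A = R × K × LS × C × P, computed cell by cell on co-registered rasters. RUSLE_Sok's additional distance factors (DS, DRd, DRv) are not reproduced here, and the small invented grids stand in for the study's thematic layers.

```python
# Minimal sketch, assuming all factor rasters share the same grid and units.
import numpy as np

shape = (3, 3)
R  = np.full(shape, 520.0)                        # rainfall erosivity
K  = np.array([[0.20, 0.22, 0.25],
               [0.18, 0.20, 0.24],
               [0.15, 0.19, 0.23]])               # soil erodibility
LS = np.array([[0.5, 1.2, 2.4],
               [0.4, 1.0, 2.0],
               [0.3, 0.8, 1.6]])                  # slope length-steepness
C  = np.full(shape, 0.30)                         # cover management
P  = np.full(shape, 1.00)                         # support practice

A = R * K * LS * C * P                            # soil loss per cell (tons/ha/year)
print(np.round(A, 1))
```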

Keywords: RUSLE_Sok, Sokoto, Google Earth Engine, Sentinel-2, erosion

Procedia PDF Downloads 48
908 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance because these wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drivage processes are accompanied by emergency and failure effects. As corroborated by practice, the principal drilling drawback occurring in the drivage of long curvilinear bore-wells is the need to overcome essential force hindrances caused by the simultaneous action of gravity, contact and friction forces. Primarily, these forces depend on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and to its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are commonly used for their analysis. Considering that the cost of directed wells now increases essentially with the complication of their geometry and the enlargement of their lengths, the price of mistakes in simulating drill string behavior with simplified approaches can be very high, so the problem of elaborating correct software is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, a 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for the simulation of tripping-in, tripping-out and drilling operations. It is shown by computer calculations that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for predicting and excluding emergency situations at the design and realization stages of the drilling process.
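
As context for the comparison drawn above, the following sketch implements the simplified 'soft-string' drag recursion (a Johancsik-type formulation) that the paper contrasts with its stiff-string model; it is not the authors' model. Bending stiffness is neglected, the trajectory is reduced to survey stations, and the survey data, weight per metre and friction coefficient are invented for illustration.

```python
# Hedged sketch of a soft-string drag estimate; all numbers are illustrative assumptions.
import math

def soft_string_drag(survey, w, mu, pulling_up=True):
    """survey: list of (measured_depth_m, inclination_rad, azimuth_rad), ordered bottom-up.
    w: buoyed weight per metre (N/m); mu: wall friction coefficient."""
    tension = 0.0  # zero axial load assumed at the bottom end
    for (md1, inc1, az1), (md2, inc2, az2) in zip(survey, survey[1:]):
        dL = md1 - md2                      # segment length (depth decreases upwards)
        W = w * dL                          # buoyed segment weight
        inc_avg = 0.5 * (inc1 + inc2)
        # lateral (normal) contact force of the segment against the borehole wall
        normal = math.hypot(tension * (az1 - az2) * math.sin(inc_avg),
                            tension * (inc1 - inc2) + W * math.sin(inc_avg))
        friction = mu * normal
        tension += W * math.cos(inc_avg) + (friction if pulling_up else -friction)
    return tension  # axial load carried at surface (N)

# toy three-station trajectory from 2000 m (60 deg inclination) to surface
survey = [(2000.0, math.radians(60), 0.0),
          (1000.0, math.radians(30), 0.0),
          (0.0,    math.radians(0),  0.0)]
print(f"hook load ~ {soft_string_drag(survey, w=300.0, mu=0.3) / 1000:.1f} kN")
```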

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 127
907 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and by increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to robot autonomous recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on datasets, have not yet been explored in intelligent construction, and relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots has a serious issue with camera calibration, and its use in industrial production is subject to strict accuracy requirements; visual recognition systems therefore face precision challenges. These challenges directly impact the effectiveness and standard of industrial production, necessitating stronger research on positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, the operational processing system assigns them to the same coordinate system based on their locations and postures. Inclination detection from the RGB image and verification from the depth image are used to determine a component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed using the point cloud information.
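
A minimal sketch of one way the aspect-ratio step described above could be realized, assuming a PCA-aligned bounding box is an acceptable stand-in for the boundary and corner analysis. The synthetic point cloud, block dimensions and rotation are invented; the study's actual perception pipeline is not reproduced here.

```python
# Hedged sketch: estimate the aspect ratio of a scanned component from its point cloud.
import numpy as np

rng = np.random.default_rng(0)
# synthetic "component": points filling a 0.9 m x 0.3 m x 0.1 m block, rotated in the plane
points = rng.uniform([0, 0, 0], [0.9, 0.3, 0.1], size=(5000, 3))
theta = np.radians(25)
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
points = points @ rot.T

centered = points - points.mean(axis=0)
_, _, axes = np.linalg.svd(centered, full_matrices=False)  # principal axes of the cloud
extents = np.ptp(centered @ axes.T, axis=0)                # side lengths of the aligned box
extents = np.sort(extents)[::-1]
print("box extents (m):", np.round(extents, 2))
print("aspect ratio (L:W):", round(extents[0] / extents[1], 2))
```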

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 68
906 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that forms (the shape of the cut faces or the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine, so time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets from the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation; such simultaneous optimization of multiple parameters is scarcely possible by experimental means, so new manufacturing methods such as self-optimization can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
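
As an illustration of the metamodeling workflow sketched above (not the authors' software), the snippet below samples a parameter grid, evaluates a placeholder "reduced model", fits a Gaussian-process surrogate and reads an optimum off the resulting process map. The quality criterion, parameter names and ranges are invented.

```python
# Hedged sketch of surrogate-based process mapping; the quadratic stands in for the real model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def reduced_model(power_kw, speed_m_min):
    # placeholder quality criterion; the actual reduced ablation model would go here
    return -(power_kw - 4.0) ** 2 - (speed_m_min - 2.5) ** 2

grid = np.array([[p, v] for p in np.linspace(2, 6, 9) for v in np.linspace(1, 4, 7)])
quality = np.array([reduced_model(p, v) for p, v in grid])

surrogate = GaussianProcessRegressor().fit(grid, quality)

# dense process map from the surrogate; locate the global optimum
dense = np.array([[p, v] for p in np.linspace(2, 6, 81) for v in np.linspace(1, 4, 61)])
pred = surrogate.predict(dense)
best = dense[np.argmax(pred)]
print(f"predicted optimum near power={best[0]:.2f} kW, speed={best[1]:.2f} m/min")
```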

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 200
905 Insulin Receptor Substrate-1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) Gene Polymorphisms Associated with Type 2 Diabetes Mellitus in Eritreans

Authors: Mengistu G. Woldu, Hani Y. Zaki, Areeg Faggad, Badreldin E. Abdalla

Abstract:

Background: Type 2 diabetes mellitus (T2DM) is a complex, degenerative, multi-factorial disease responsible for substantial mortality and morbidity worldwide. Even though a relatively significant number of studies have been conducted on the genetics of this disease in the developed world, there is a huge information gap in the sub-Saharan Africa region in general and in Eritrea in particular. Objective: The principal aim of this study was to investigate the association of common variants of the Insulin Receptor Substrate 1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) genes with T2DM in the Eritrean population. Method: In this cross-sectional case-control study, 200 T2DM patients and 112 non-diabetic subjects participated, and genotyping of the IRS1 (rs13431179, rs16822615, rs16822644, rs1801123) and TCF7L2 (rs7092484) tag SNPs was carried out using the PCR-RFLP method of analysis. Haplotype analyses were carried out using Plink version 1.07 and Haploview 4.2 software. Linkage disequilibrium (LD) and Hardy-Weinberg equilibrium (HWE) analyses were performed using the Plink software, and all descriptive statistical analyses were carried out using SPSS (version 20) software. Throughout the analysis, a p-value ≤ 0.05 was considered statistically significant. Result: A significant association was found between the rs13431179 SNP of the IRS1 gene and T2DM under the recessive model of inheritance (OR=9.00, 95%CI=1.17-69.07, p=0.035), and a marginally significant association was found in the genotypic model (OR=7.50, 95%CI=0.94-60.06, p=0.058). The rs7092484 SNP of the TCF7L2 gene also showed a markedly significant association with T2DM in the recessive (OR=3.61, 95%CI=1.70-7.67, p=0.001) and allelic (OR=1.80, 95%CI=1.23-2.62, p=0.002) models. Moreover, eight haplotypes of the IRS1 gene were found to have a significant association with T2DM (p=0.013 to 0.049). Assessments of the interactions of the rs13431179 and rs7092484 genotypes with various parameters demonstrated that high-density lipoprotein (HDL), low-density lipoprotein (LDL), waist circumference (WC), and systolic blood pressure (SBP) provide the best models for predicting T2DM onset. Furthermore, genotypes of the rs7092484 SNP showed significant associations with various atherogenic indexes (atherogenic index of plasma, LDL/HDL, and CHOL/HDL), and Eritreans carrying the GG or GA genotypes were predicted to be more susceptible to the onset of cardiovascular diseases. Conclusions: The results of this study suggest that IRS1 (rs13431179) and TCF7L2 (rs7092484) gene polymorphisms are associated with an increased risk of T2DM in Eritreans.
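
A brief sketch of how an odds ratio and its 95% confidence interval are derived from a 2×2 table under a recessive model (risk genotype versus the pooled other genotypes). The counts below are invented and do not correspond to the study's data.

```python
# Minimal sketch of a Woolf-type odds ratio with a 95% confidence interval.
import math

# rows: cases / controls; columns: risk genotype / other genotypes (hypothetical counts)
a, b = 18, 182   # cases with and without the risk genotype
c, d = 2, 110    # controls with and without the risk genotype

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")
```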

Keywords: IRS1, SNP, TCF7L2, type 2 diabetes

Procedia PDF Downloads 210
904 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming and subjective and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
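
A minimal sketch of the Random Forest part of such a workflow, assuming activity-level features like scope changes and material delays. The synthetic dataset, feature names and hyperparameters are illustrative assumptions, not the study's data or configuration.

```python
# Hedged sketch: fit a Random Forest on synthetic activity-level data and inspect importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
X = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 5e5, n),
    "scope_change_pct": rng.uniform(0, 30, n),
    "material_delay_days": rng.integers(0, 60, n),
    "crew_size": rng.integers(2, 25, n),
})
# synthetic target: overrun driven mainly by scope changes and material delays
y = 0.9 * X.scope_change_pct + 0.5 * X.material_delay_days + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.3f}")   # feature importances highlight the main cost drivers
```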

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 20
903 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and from patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated response of the patient to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical and computational models to develop early detection methods, achieving higher accuracies while using the smallest number of variables. Among other techniques, there are studies using survival analysis, expert systems, machine learning and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point was calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis reached 83.24% when considering only the vital signs; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are being collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
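
A minimal sketch of the feature construction and LSTM classifier described above, assuming hourly vital-sign matrices per patient. Array shapes, hyperparameters and the toy labels are assumptions rather than the study's configuration.

```python
# Hedged sketch: position, velocity and acceleration of vital signs as LSTM input features.
import numpy as np
import tensorflow as tf

n_patients, n_hours, n_vitals = 64, 24, 6
rng = np.random.default_rng(0)
vitals = rng.normal(size=(n_patients, n_hours, n_vitals))   # position (hourly vital signs)
velocity = np.gradient(vitals, axis=1)                      # first derivative per hour
acceleration = np.gradient(velocity, axis=1)                # second derivative per hour
features = np.concatenate([vitals, velocity, acceleration], axis=-1)
labels = rng.integers(0, 2, size=n_patients)                # toy sepsis labels for illustration

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_hours, 3 * n_vitals)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=2, batch_size=16, verbose=0)
print(model.evaluate(features, labels, verbose=0))
```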

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 105
902 Combining Bio-Molecular and Isotopic Tools to Determine the Fate of Halogenated Compounds in Polluted Groundwater

Authors: N. Balaban, A. Buernstein, F. Gelman, Z. Ronen

Abstract:

Brominated flame retardants are widespread pollutants and are known to be toxic, carcinogenic, endocrine-disrupting and recalcitrant. The industrial complex of Neot Hovav, in the northern Negev, Israel, is situated above a fractured chalk aquitard that is polluted by a wide variety of halogenated organic compounds. Two of the abundant pollutants found at the site are dibromoneopentyl glycol (DBNPG) and tribromoneopentyl alcohol (TBNPA). Due to the elusive nature of the groundwater flow, it is difficult to relate the spatial changes in contaminant concentrations to degradation. In this study, we attempt to determine whether these compounds are biodegraded in the groundwater and to gain a better understanding of the bacterial community in the groundwater. This was achieved through the application of compound-specific isotope analysis (CSIA) of carbon (¹³C/¹²C) and bromine (⁸¹Br/⁷⁹Br) and next-generation MiSeq sequencing. The sampled boreholes were distributed among three main areas of the industrial complex: around the production plant of TBNPA and DBNPG; along the Hovav Wadi (a small ephemeral stream) that crosses and drains the industrial complex; and downstream of the industrial area. TBNPA and DBNPG are found in all three areas, with no clear connection to the proximity of the borehole to the production plant. Initial isotopic data for TBNPA from boreholes in the area surrounding the production plant reveal no changes in the carbon and bromine isotopic values. Regarding the microbial community of the groundwater, the dominant phylum is Proteobacteria; known anaerobic dehalogenating bacteria such as Dehalococcoides, from the phylum Chloroflexi, have also been detected. A statistical comparison of the groundwater microbial diversity using multivariate ordination by non-metric multidimensional scaling (NMDS) reveals three main clusters according to spatial location in the industrial complex: all the boreholes sampled adjacent to the production plant cluster together and separately from the Hovav Wadi borehole cluster and from the cluster of boreholes downstream of the industrial area. This work provides the basis for the development and application of an isotopic-fractionation-based tool for assessing the biodegradation of brominated organic compounds in contaminated environments, and it is a novel attempt to characterize the spatial microbial diversity at the contaminated site.
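
A hedged sketch of the NMDS ordination step: a Bray-Curtis dissimilarity matrix computed from taxon abundances per borehole is ordinated with non-metric MDS. The abundance table, borehole labels and the choice of Bray-Curtis distance are illustrative assumptions, not the study's data.

```python
# Minimal NMDS sketch on an invented abundance table (rows: boreholes, columns: taxa).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

boreholes = ["plant_1", "plant_2", "wadi_1", "wadi_2", "downstream_1"]
abundances = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.38, 0.32, 0.18, 0.12],
    [0.10, 0.15, 0.50, 0.25],
    [0.12, 0.13, 0.48, 0.27],
    [0.05, 0.10, 0.20, 0.65],
])

dist = squareform(pdist(abundances, metric="braycurtis"))      # Bray-Curtis dissimilarities
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dist)                              # 2D ordination coordinates
for name, (x, y) in zip(boreholes, coords):
    print(f"{name:13s} {x:7.3f} {y:7.3f}")
```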

Keywords: biodegradation, brominated flame retardants, groundwater, isotopic fractionation, microbial diversity

Procedia PDF Downloads 223
901 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrust and fold belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalaya and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop the models: the Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant controlling factors derived from fieldwork and literature reviews. Field data collection involved a spring inventory, soil analysis, lithology assessment, and a hydro-geomorphological study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, 114 springs were divided in a 70/30 ratio, together with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area Under the Curve (AUC). The success-rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction-rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater predictive capability than the SE model in this case study.
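
A minimal sketch of the Frequency Ratio calculation for a single thematic layer, where FR is the share of spring occurrences in a class divided by the share of area in that class. The slope classes and counts below are invented, and the summation of class FR values across layers into the potential index is only indicated in a comment.

```python
# Hedged sketch of the Frequency Ratio for one thematic layer (hypothetical slope classes).
import pandas as pd

slope_layer = pd.DataFrame({
    "class":        ["0-5 deg", "5-15 deg", "15-30 deg", ">30 deg"],
    "pixel_count":  [120_000, 260_000, 310_000, 110_000],
    "spring_count": [30, 45, 25, 5],
})

slope_layer["pct_area"] = slope_layer.pixel_count / slope_layer.pixel_count.sum()
slope_layer["pct_springs"] = slope_layer.spring_count / slope_layer.spring_count.sum()
slope_layer["FR"] = slope_layer.pct_springs / slope_layer.pct_area
print(slope_layer[["class", "FR"]].round(2))
# FR > 1 marks classes where springs occur more often than expected by area alone;
# class FR values from all layers are summed per pixel to build the groundwater potential index.
```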

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 58