Search results for: small signal modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9670


1750 In Search of Commonalities in the Determinants of Child Sex Ratios in India and the People's Republic of China

Authors: Suddhasil Siddhanta, Debasish Nandy

Abstract:

The child sex ratio pattern in Asian populations is highly masculine, mainly due to birth masculinity and gender bias in child mortality. The vast and growing literature on the female deficit in the world population points out the diffusion of this child sex ratio pattern in many Asian as well as neighboring European countries. However, little attention has been given to understanding the common factors across different demographies that explain the child sex ratio pattern. Such scholarship is extremely important, as the level of gender inequity differs across country settings. Our paper tries to explain the major structural commonalities in the child masculinity pattern in two demographic billionaires - India and China. The analysis reveals that, apart from the geographical diffusion of sex selection technology, patrilocal social structure - as proxied by households with more than one generation in China and the proportion of population aged 65 years and above in India - can explain a significant share of the variation in missing girl children in these two countries. Even after controlling for individual capacity-building factors like educational attainment or workforce participation, the measure of social stratification emerges as the major determinant of child sex ratio variation. Other socio-economic factors that perform well are the agency-building factors of females, like the changing pattern of marriage customs - proxied by the divorce and remarriage ratio for China and the percentage of females marrying at or after the age of 20 years in India - and female workforce participation. The proportion of minorities in the socio-religious composition of the population and gender bias in scholastic attainment in both these countries are also found to be significant in modeling child sex ratio variations. All these significant common factors associated with the child sex ratio point toward the single most important factor: the historical evolution of patriarchy and its contemporary perpetuation in both countries. It seems that prohibition of sex selection might not be sufficient to combat the peculiar skewness of excessive maleness in the child population in both these countries. Demand-side policies are therefore of utmost importance to root out the gender bias in child sex ratios.

Keywords: child sex ratios, gender bias, structural factors, prosperity, patrilocality

Procedia PDF Downloads 134
1749 Effect of Good Agriculture Management Practices and Constraints on Grape Farming: A Case Study in Mirbachakot, Kalakan and Shakardara Districts, Kabul, Afghanistan

Authors: Mohammad Mirwais Yusufi

Abstract:

Skillful management is one of the most important success factors for today's farms. When a farm is well managed, it can generate funds for its sustainability. Grape is one of the most widely grown fruits in the world and one of the most important cash crops with high production potential in Afghanistan as well. While several organizations are intervening to improve this cash crop, the quality and quantity are still not satisfactory for producers and external markets, and the situation has not changed over the years. Therefore, a survey was conducted in 2017 with 60 grape growers, supported by questionnaires, in the Mirbachakot, Kalakan and Shakardara districts of Kabul province. The purpose was to get an understanding of the current socio-demographic characteristics of farmers, management methods, constraints, farm size, yield and contribution of grape farming to household income. Findings indicate that grape farming was predominantly male (83.3% male, 16.6% female) and that small-scale farmers were the main grape producers, with 60% having less than 1 ha of land under grape production. Likewise, 50% had more than 10 years' and 33.3% between 1 and 5 years' experience in grape farming. The high level of illiteracy and disease had a significant effect on the growth, yield and quality of grapes. The results showed that vineyard management operations to protect grapes from mechanical damage are very poor or completely absent. In developed countries, table grape is one of the fruits with the highest input of technology, while in developing countries the cost of labor is low but the purchase of equipment is very difficult due to the financial situation. Hence, the low quality and quantity of grapes are influenced by poor management methods, such as the non-availability of experts and the lack of technical guidance at the study site. The study therefore suggests that improved agricultural extension services and managerial skills could contribute to addressing these problems.

Keywords: constraints, effect, management, Kabul

Procedia PDF Downloads 91
1748 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and as it is a step function, there can be different false positive rates for a given true positive rate value and vice versa. Besides, since the true ROC curve is assumed to be smooth, the jagged form of the empirical estimate underestimates the true ROC curve. Several smoothing methods have therefore been explored to smooth a ROC curve. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimation based on a boundary-corrected kernel function and to compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions in different sample sizes. We performed a simulation study to compare the performances of the different methods for different scenarios with 1000 repetitions. It is seen that the performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse compared to the binormal model when in fact the underlying samples were generated from the normal distribution.
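
To make the contrast concrete, here is a minimal Python sketch of an empirical versus kernel-smoothed ROC estimate on simulated scores; it uses a plain Gaussian kernel with a rule-of-thumb bandwidth and does not implement the boundary-corrected kernel proposed in the paper.

```python
# Minimal sketch: empirical vs. kernel-smoothed ROC estimation. A plain
# Gaussian kernel is used for illustration; it does NOT include the
# boundary correction proposed in the paper. Sample data are simulated.
import numpy as np
from scipy.stats import norm

def smoothed_survival(scores, grid, h):
    # Kernel-smoothed survival function: S(t) = mean_i [1 - Phi((t - x_i) / h)].
    return np.array([norm.sf((t - scores) / h).mean() for t in grid])

rng = np.random.default_rng(0)
non_diseased = rng.normal(0.0, 1.0, 50)   # small-sample test scores
diseased = rng.normal(1.0, 1.0, 50)

grid = np.linspace(-4.0, 5.0, 400)              # candidate decision thresholds
h = 1.06 * non_diseased.std() * 50 ** (-1 / 5)  # rule-of-thumb bandwidth

fpr_emp = [(non_diseased >= t).mean() for t in grid]  # empirical (step) ROC
tpr_emp = [(diseased >= t).mean() for t in grid]
fpr_smooth = smoothed_survival(non_diseased, grid, h) # smooth ROC coordinates
tpr_smooth = smoothed_survival(diseased, grid, h)
```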

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 129
1747 De Novo Design of Functional Metalloproteins for Biocatalytic Reactions

Authors: Ketaki D. Belsare, Nicholas F. Polizzi, Lior Shtayer, William F. DeGrado

Abstract:

Nature utilizes metalloproteins to perform chemical transformations with activities and selectivities that have long been the inspiration for design principles in synthetic and biological systems. The chemical reactivities of metalloproteins are directly linked to local environment effects produced by the protein matrix around the metal cofactor. A complete understanding of how the protein matrix provides these interactions would allow for the design of functional metalloproteins. The de novo computational design of proteins has been successfully used to design active sites that bind metal cofactors containing di-iron, zinc, or copper; however, precisely designing active sites that can bind small molecule ligands (e.g., substrates) along with metal cofactors is still a challenge in the field. The de novo computational design of a functional metalloprotein that contains a purposefully designed substrate binding site would allow for precise control of chemical function and reactivity. Our research strategy seeks to elucidate the design features necessary to bind the cofactor protoporphyrin IX (hemin) in close proximity to a substrate binding pocket in a four-helix bundle. First- and second-shell interactions are computationally designed to control the orientation, electronic structure, and reaction pathway of the cofactor and substrate. The design began with a parameterized helical backbone that positioned a single histidine residue (as an axial ligand) to receive a second-shell H-bond from a threonine on the neighboring helix. The metallo-cofactor, hemin, was then manually placed in the binding site. A structural feature, a pi-bulge, was introduced to give the substrate access to the protoporphyrin IX. These de novo metalloproteins are currently being tested for their activity towards hydroxylation and epoxidation. The de novo designed protein shows hydroxylation of aniline to 4-aminophenol. This study will help provide structural information of utmost importance in understanding the de novo computational design variables impacting the functional activities of a protein.

Keywords: metalloproteins, protein design, de novo protein, biocatalysis

Procedia PDF Downloads 135
1746 Learning at Workplace: Competences and Contexts in Sensory Evaluation

Authors: Ulriikka Savela-Huovinen, Hanni Muukkonen, Auli Toom

Abstract:

The development of the workplace as a learning environment has been emphasized in the research field of workplace learning. The prior literature on sensory performance emphasized the individual's competences as an assessor, while competences in collaborative interactional and knowledge creation practices as workplace learning methods are not often mentioned. The present study aims to find out what kinds of competences and contexts are central when an assessor conducts food sensory evaluation in an authentic professional context. The aim was to answer the following questions: first, what kinds of competences does sensory evaluation require according to assessors? And second, what kinds of contexts for sensory evaluation do assessors report? Altogether, thirteen assessors from three Finnish food companies were interviewed using semi-structured thematic interviews to map practices and development intentions as well as to explicate already established practices. The qualitative data were analyzed following the principles of abductive and inductive content analysis. The analysis phases were combined and their results considered together as a cross-analysis. When assessors evaluated independently, the required competences were perception, knowledge of specific domains and methods, and cognitive skills, e.g., memory. Altogether, 42% of analysis units described individual evaluation contexts, 53% described collaborative interactional contexts, and 5% described collaborative knowledge creation contexts. Related to collaboration, the analysis revealed learning, sharing and reviewing both external and in-house consumer feedback, developing methods to moderate small-panel evaluation, and developing product vocabulary collectively among the assessors. Knowledge creation contexts emerged from daily practices, especially in cases where product defects were sought and discussed. The study findings contribute to the explanation that sensory assessors learn extensively from one another in collaborative interactional and knowledge creation contexts. Assessors' learning and abilities to work collaboratively in interactional and knowledge creation contexts need to be ensured in the development of their expertise.

Keywords: assessor, collaboration, competences, contexts, learning and practices, sensory evaluation

Procedia PDF Downloads 217
1745 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves

Authors: Ramzi Suleiman

Abstract:

Recently, the author proposed an observationally based relativity theory termed information relativity theory (IRT). The theory is simple and is based only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory's transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. It was shown that the theory's transformations were successful in predicting several important phenomena in particle physics, quantum physics, and cosmology. This paper extends the theory's transformations to the cases of rotating disks and spheres. The resulting transformations for a rotating disk are utilized to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter and of the dark matter fraction can be obtained from one measurable scale radius. Testing the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions. The rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result raises an intriguing physical explanation of gravity in galaxies, according to which gravity is the proximal drag of the stars and gas in the galaxy by its rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes. The study also hints at the potential of the uncovered matter-dark matter duality to fix the standard model of elementary particles in a natural manner, without the need to hypothesize supersymmetric particles.
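
As a reading aid, the duality stated in the abstract can be written compactly as follows (the notation is ours, not necessarily the author's):

```latex
\rho_{M}(v) + \rho_{DM}(v) = \rho_{0}
```

where ρ_M(v) and ρ_DM(v) are the baryonic and dark matter densities measured by the observer for a body moving at velocity v, and ρ_0 is the body's matter density at rest.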

Keywords: dark matter, galaxy rotation curves, SPARC, rotating disk

Procedia PDF Downloads 50
1744 Impacts on Marine Ecosystems Using a Multilayer Network Approach

Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade

Abstract:

Bays, estuaries and coastal ecosystems are among the most heavily used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil, and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, State, Impact, Response), considering the concept of Ecosystem-Based Management to support decision-making under the National/State/Municipal Coastal Management policy. This approach accounts for several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both samples. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes among themselves. The edge set includes the familiar intralayer edges and interlayer ones, which connect state nodes between layers. The methodology, applied in an existing case, uses flow cytometry and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows considering the influence of different factors and helps assess their contributions to the decision-making process.
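
To fix ideas, the following minimal Python sketch encodes the (i)-(iv) structure above with networkx; the entity names, layers and edge weights are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the multilayer structure described above, encoded with
# networkx: state nodes are (physical node, layer) pairs. All entity names
# and weights below are illustrative, not taken from the study.
import networkx as nx

G = nx.Graph()
for state in [("plankton", "ecological"), ("anchovy", "ecological"),
              ("fishers", "social"), ("fish_market", "economic")]:
    G.add_node(state)

# Intralayer edge: a trophic interaction inside the ecological layer.
G.add_edge(("plankton", "ecological"), ("anchovy", "ecological"), weight=0.8)
# Interlayer edges: couplings between manifestations across layers.
G.add_edge(("anchovy", "ecological"), ("fishers", "social"), weight=0.5)
G.add_edge(("fishers", "social"), ("fish_market", "economic"), weight=0.3)

# Separate the two edge families by comparing the layer component of each node.
intralayer = [(u, v) for u, v in G.edges if u[1] == v[1]]
interlayer = [(u, v) for u, v in G.edges if u[1] != v[1]]
```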

Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management

Procedia PDF Downloads 77
1743 Evaluation of Coagulation Efficiency of Protein Extracts from Lupinus Albus L., Moringa Stenopetala Cufod., Trigonella Foenum-Graecum L. and Vicia Faba L. for Water Purification

Authors: Neway Adele, Adey Feleke

Abstract:

Access to clean drinking water is a basic human right. However, an estimated 1.2 billion people across the world consume unclean water daily. Interest has been growing in natural coagulants as the health and environmental concerns of conventional chemical coagulants rise. Natural coagulants have the potential to serve as alternative water treatment agents. In this study, Lupinus albus, Moringa stenopetala, Trigonella foenum-graecum and Vicia faba protein extracts were evaluated as natural coagulants for water treatment. The protein extracts were purified from crude extracts using a protein purifier, and protein concentrations were determined by the spectrophotometric method. Small-volume coagulation efficiency tests were conducted on raw water taken from the Legedadi water treatment plant. These were done using a completely randomized design (CRD) experiment with settling times of 0 min (initial time), 90 min, 180 min and 270 min and protein extract doses of 5 mg/L, 10 mg/L, 15 mg/L and 20 mg/L. Raw water as a negative control and polyelectrolyte as a positive control were also included. The optical density (OD) values were measured for all the samples. At 270 min and 20 mg/L, the coagulation efficiency percentages for the Lupinus albus, Moringa stenopetala, Trigonella foenum-graecum and Vicia faba protein extracts were 71%, 89%, 12% and 67%, respectively, in the water sample collected in April 2019. Similarly, Lupinus albus, Moringa stenopetala and Vicia faba achieved 17%, 92% and 12% at a 270 min settling time and concentrations of 5 mg/L, 20 mg/L and 10 mg/L, respectively, in the water sample collected in August 2019. The negative control (raw water) and the polyelectrolyte (positive control) achieved 6-10% and 89-94% at a 270 min settling time in April and August 2019, respectively. Among the four protein extracts, Moringa stenopetala showed the highest coagulation efficiency, similar to the polyelectrolyte. This study concluded that Moringa stenopetala protein extract could be used as a natural coagulant for water purification at both sampling times.
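
For context, coagulation efficiency percentages of this kind are conventionally derived from OD readings as below; the abstract does not state the exact formula, so this standard form is an assumption:

```latex
\text{Coagulation efficiency (\%)} = \frac{OD_{0} - OD_{t}}{OD_{0}} \times 100
```

where OD_0 is the optical density of the untreated sample and OD_t the optical density after settling time t.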

Keywords: coagulation efficiency, extraction, natural coagulant, protein extract

Procedia PDF Downloads 40
1742 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication of practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Non-secure and secure matrix multiplication are studied. We study the setup in which the identity of the matrix of interest should be kept private from the workers, and obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
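
As a toy illustration of the coding idea described above (not the PSGPD/SGPD construction itself), the following Python sketch splits X into two blocks, encodes them with a (3,2) MDS code, and recovers W = XY from any two of three workers when one straggles:

```python
# Toy sketch of straggler-tolerant coded matrix multiplication: X is split
# row-wise into 2 blocks and encoded into 3 coded blocks with an MDS (3,2)
# code (generator rows [1,0], [0,1], [1,1]), so W = XY is recoverable from
# any 2 of the 3 workers. This is an illustration, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 5))

X1, X2 = np.vsplit(X, 2)
coded = [X1, X2, X1 + X2]                 # one coded block per worker

# Each worker multiplies its coded block by Y; suppose worker 0 straggles.
results = {i: blk @ Y for i, blk in enumerate(coded) if i != 0}

# Decode from the two survivors: X1 Y = (X1 + X2) Y - X2 Y.
W_top = results[2] - results[1]
W = np.vstack([W_top, results[1]])
assert np.allclose(W, X @ Y)              # full product recovered
```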

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 94
1741 Study of the Design and Simulation Work for an Artificial Heart

Authors: Mohammed Eltayeb Salih Elamin

Abstract:

This study discusses the concept of the artificial heart using engineering principles of fluid mechanics and the characteristics of non-Newtonian fluids. The purpose is to serve heart patients and improve aspects of their lives, since statistics from the World Health Organization (WHO) indicate that diseases of the heart and blood vessels are the leading cause of death in the world. Statistics show that about 30% of deaths worldwide are caused by heart disease, so heart failure can simply be considered the number one cause of death in the entire world. Since heart transplantation is very difficult and not always available, the idea of the artificial heart becomes essential. It is therefore important to contribute to developing this idea by finding the weak points in earlier designs and improving on them for the benefit of humanity. In this study, a pump was designed to pump blood through the human body, taking into account all the factors that would allow it to replace the human heart and work with the same characteristics and efficiency as the human heart. The pump was designed on the idea of the diaphragm pump. Three blood models were obtained from real blood characteristics, and all of these models were simulated in order to study the effect of the pumping work on the fluid. After that, we studied the properties of this pump by using Ansys 15 software to simulate blood flow inside the pump and the amount of stress it will undergo. The 3D geometry modeling was done using SOLIDWORKS, and the geometries were then imported into Ansys Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is Ansys FLUENT. This is a tool used to analyze fluid flow problems; the well-known term for this branch of science is Computational Fluid Dynamics (CFD). Basically, Design Modeler is used during the pre-processing procedure, which is a crucial step before solving the fluid flow problem. Some of the key operations are geometry creation, which specifies the domain of the fluid flow problem; mesh generation, which means discretization of the domain so that the governing equations can be solved at each cell; and specification of the boundary zones where boundary conditions are applied. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation of the work.

Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump

Procedia PDF Downloads 438
1740 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods of mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements that are taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on the use of time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to be overheated, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to the pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals that use look-up tables with tabulated data from the National Institute of Standards and Technology.
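
For orientation, the sketch below shows the basic real-gas PVT relation m = pVM/(ZRT) that standard retrievals build on; it is not the authors' Improved PVT algorithm, and the compressibility value is an assumed placeholder rather than an interpolation from NIST tables.

```python
# Hedged sketch of a real-gas PVT mass estimate from tank pressure and
# temperature telemetry: m = p V M / (Z R T). The compressibility factor Z
# below is an assumed placeholder; a real retrieval would interpolate Z(p, T)
# from NIST tabulated data, as the standard method cited above does.
R = 8.314462618      # J/(mol K), universal gas constant
M_XE = 0.131293      # kg/mol, molar mass of xenon

def pvt_mass(p_pa, t_k, volume_m3, z):
    # Ideal-gas law corrected by the compressibility factor z.
    return p_pa * volume_m3 * M_XE / (z * R * t_k)

# Example: a hypothetical 100-litre tank at 70 bar and 40 degC, assumed Z ~ 0.5.
print(pvt_mass(70e5, 313.15, 0.1, 0.5))   # estimated propellant mass in kg
```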

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 323
1739 Using ICESat-2 Dynamic Ocean Topography to Estimate Western Arctic Freshwater Content

Authors: Joshua Adan Valdez, Shawn Gallaher

Abstract:

Global climate change has raised atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has contributed to increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year across the Beaufort Gyre. The majority of the freshwater volume resides in the Beaufort Gyre surface lens driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff, and is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen in the lower latitudes. In this study, we investigate the relationship between freshwater content and dynamic ocean topography (DOT). In situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time-consuming. Utilizing the DOT remote sensing capabilities of NASA's ICESat-2 and Airborne Expendable CTD (AXCTD) data from the Seasonal Ice Zone Reconnaissance Surveys (SIZRS), a linear regression model between DOT and freshwater content is determined along the 150° west meridian. Freshwater content is calculated by integrating the volume of freshwater between the surface and the depth at which salinity reaches the reference value of ~34.8. Using this model, we compare interannual variability in freshwater content within the gyre, which could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea using non-in-situ methods. Successful employment of ICESat-2's DOT approximation of freshwater content could demonstrate the value of remote sensing tools in reducing reliance on field deployment platforms to characterize physical ocean properties.
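
As an illustration of the freshwater-content integral described above, here is a minimal Python sketch using the standard definition relative to the 34.8 reference salinity; the profile values are invented, not SIZRS data.

```python
# Minimal sketch of the standard freshwater-content integral,
# FWC = integral of (S_ref - S(z)) / S_ref dz over the layer fresher than
# the reference salinity. The depth/salinity profile below is made up.
import numpy as np

S_REF = 34.8

def freshwater_content(depth_m, salinity):
    # Freshwater fraction at each level; water saltier than S_REF contributes 0.
    frac = np.clip((S_REF - salinity) / S_REF, 0.0, None)
    # Trapezoidal integration over depth, in metres of equivalent freshwater.
    return np.sum(0.5 * (frac[1:] + frac[:-1]) * np.diff(depth_m))

depth = np.array([0, 10, 20, 50, 100, 200, 400], dtype=float)      # metres
sal = np.array([27.0, 28.5, 30.0, 32.0, 33.5, 34.6, 34.8])         # AXCTD-like
print(freshwater_content(depth, sal))
```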

Keywords: cryosphere, remote sensing, Arctic oceanography, climate modeling, Ekman transport

Procedia PDF Downloads 52
1738 Integrated Livestock and Cropping System and Sustainable Rural Development in India: A Case Study

Authors: Nizamuddin Khan

Abstract:

The integrated livestock and cropping system is a very old agricultural practice, dating to antiquity. It is an eco-friendly and sustainable farming system in which both resources are optimally and rationally utilized through the recycling and re-utilization of their by-products. Indian farmers follow an in-farm integrated farming system, unlike in developed countries where both farm and off-farm systems prevail. Data on the different components of the integrated farming system are very limited and not widely available in published form. The primary source is the only option for understanding the mechanism, process, evaluation and performance of the integrated livestock-cropping system. The researcher generated data through a field survey of sampled respondents from sampled villages in the Bulandshahr district of Uttar Pradesh. The present paper aims to understand the component groups of the system, the degree and level of integration, the level of employment and income generation, improvement in farm ecology, the economic viability of farmers and the check on rural-urban migration. The study revealed that the area witnessed intra-farm integration, in which both livestock husbandry and crop cultivation take place on the same farm. Buffalo, goat, and poultry are common components of integration; wheat, paddy, sugarcane and horticulture are among the crops. Farmers obtain 25% more benefit than those who do not follow the integrated system. Livestock husbandry provides employment and income throughout the year, especially during the agricultural offseason. 80% of farmers stated that approximately 35% of their total expenditure is met from the livestock sector. Landless, marginal and small farmers benefit greatly from agricultural integration. About 70% of farmers acknowledged that using animal and crop wastes significantly maintains soil ecology. Furthermore, the integrated farming system is helpful in reducing rural-to-urban migration. Incentives with credit facilities, assured marketing, technological aid and government support are urgently needed for the sustainable development of agriculture and farmers.

Keywords: integrated, recycle, employment, soil ecology, sustainability

Procedia PDF Downloads 145
1737 Dielectric Study of Ethanol Water Mixtures at Different Concentration Using Hollow Channel Cantilever Platform

Authors: Maryam S. Ghoraishi, John E. Hawk, Thomas Thundat

Abstract:

Understanding liquid properties at small scales has become important in recent decades, as emerging microelectromechanical systems (MEMS) devices have been widely used for micropumps, drug delivery, and many other lab-on-a-chip analyses. Often in microfluidic devices, fluids are transported electrokinetically. Therefore, extensive knowledge of fluid flow, heat transport, electrokinetics and electrochemistry is key to successful lab-on-a-chip design. Among different microfluidic devices, the recently developed hollow channel cantilever offers an ideal platform to study different fluid properties simultaneously, without the drastic decrease in quality factor that normally occurs when traditional cantilevers operate in the liquid phase. Using the hollow channel cantilever, we monitor changes in the density and viscosity of the liquid while simultaneously investigating the dielectric properties of alcohol-water binary mixtures. Considerable research has been conducted on alcohol-water mixtures, since such a mixture is a typical prototype for biomolecules, micelle formation, and the structural stability of proteins, to name a few. Here we show that the hollow channel cantilever can be employed to investigate the dielectric properties of ethanol/water mixtures at different concentrations. We study dynamic amplitude shifts of hollow channel cantilever oscillation at different concentrations of ethanol/water for different voltages. Our results show how interactions between solute and solvent, and possibly cluster formation, could change the dielectric properties and dipole reorientation of the mixture, as well as the resulting force on the hollow cantilever. For comparison, we also examine higher-conductivity ionic mixtures of sodium sulfate solution under the same conditions as the low-conductivity ethanol/water mixtures. We will show the results from a systematic investigation of solvent effects on the dielectric properties of the binary mixture. We will also address the question of resolution limits in the dielectric study of analyte molecules imposed by solvent concentrations.

Keywords: dielectric constant, cantilever sensors, ethanol water mixtures, low frequency

Procedia PDF Downloads 177
1736 Study and Simulation of a Severe Dust Storm over the West and South West of Iran

Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi

Abstract:

In recent decades, the frequency of dust events has increased significantly in the west and south west of Iran. First, dust events during the period 1990-2013 are surveyed using historical dust data collected at 6 weather stations scattered over the west and south-west of Iran. After statistical analysis of the observational data, one of the most severe dust storm events, which occurred in the region from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the amount of PM10 and how it is transported to these areas. The initial and lateral boundary conditions for the model were obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300 and 303) were evaluated. Statistical evaluation was made on the basis of the Student test. The results of the statistical analysis of the historical data showed that the south west of Iran has a high frequency of dust events, with Bushehr station having the highest frequency among the stations and Urmia station the lowest. Also, in the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events, and according to the monthly variation, June and July had the highest frequency of dust events and December the lowest. Besides, the model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed the best performance in comparison with the observations. Finally, the PM10 distribution and surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and south west of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem to simulate the spatial distribution of dust.

Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem

Procedia PDF Downloads 246
1735 Socio-Economic Impact of COVID-19 in Ethiopia

Authors: Kebron Abich Asnake

Abstract:

The outbreak of COVID-19 has had far-reaching socio-economic consequences globally, and Ethiopia is no exception. This abstract provides a summary of a research study on the socio-economic impact of COVID-19 in Ethiopia. The study analyzes the health impact, economic repercussions, social consequences, government response measures, and opportunities for post-crisis recovery. In terms of health impact, the research explores the spread and transmission of the virus, the capacity and response of the healthcare system, and the mortality rate, with a focus on vulnerable populations. The economic impact analysis entails investigating the contraction of the GDP, employment and income loss, disruption in key sectors such as agriculture, tourism, and manufacturing, and the specific implications for small and medium-sized enterprises (SMEs), foreign direct investment, and remittances. The social impact section looks at the disruptions in education and the digital divide, food security and nutrition challenges, increased poverty and inequality, gender-based violence, and mental health issues. The research also examines the measures taken by the Ethiopian government, including health and safety regulations, economic stimulus packages, social protection programs, and support for vulnerable populations. Furthermore, the study outlines long-term recovery prospects and challenges to social cohesion and community resilience. It highlights the need to strengthen the healthcare system and to find a balance between health and economic priorities. The research concludes by presenting recommendations for policy-makers and stakeholders, emphasizing opportunities for post-crisis recovery such as diversification of the economy, enhanced healthcare infrastructure, investment in digital infrastructure and technology, and support for domestic tourism and local industries. This research provides valuable insights into the socio-economic impact of COVID-19 in Ethiopia, offering a comprehensive analysis of the challenges faced and potential pathways towards recovery.

Keywords: impact, COVID-19, Ethiopia, health

Procedia PDF Downloads 48
1734 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium to longer term liabilities. The relative strengths and weaknesses among various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
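
To illustrate the parametric idea, the sketch below fits a sigmoidal development curve for a single cohort and reads off the implied ultimate loss and reserve; the development data are invented, and the paper's full method additionally uses logistic regression across cohorts and exogenous covariates.

```python
# Hedged sketch: fitting a sigmoidal loss-development curve for one cohort,
# in the spirit of the parametric approach described above. The development
# lags and ever-to-date (ETD) losses below are made-up illustration data.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, ult, k, t0):
    # Cumulative development toward an ultimate loss level `ult`.
    return ult / (1.0 + np.exp(-k * (t - t0)))

lags = np.arange(1, 13)                                   # development periods
etd = np.array([1, 3, 8, 19, 38, 61, 80, 91, 96, 98, 99, 99.5])  # ETD losses

params, _ = curve_fit(sigmoid, lags, etd, p0=[100.0, 1.0, 6.0])
ultimate = params[0]             # implied ultimate loss for the cohort
reserve = ultimate - etd[-1]     # indicated reserve = ultimate minus ETD
print(ultimate, reserve)
```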

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 102
1733 Inhalable Lipid-Coated-Chitosan Nano-Embedded Microdroplets of an Antifungal Drug for Deep Lung Delivery

Authors: Ranjot Kaur, Om P. Katare, Anupama Sharma, Sarah R. Dennison, Kamalinder K. Singh, Bhupinder Singh

Abstract:

Respiratory microbial infections, being among the leading causes of death worldwide, are difficult to treat, as the microbes reside deep inside the airways, where only a small fraction of drug can gain access after traditional oral or parenteral administration. As a result, high doses of drugs are required to maintain drug levels above minimum inhibitory concentrations (MIC) at the infection site, unfortunately leading to severe systemic side effects. Therefore, delivering antimicrobials directly to the respiratory tract provides an attractive way out in such situations. In this context, the current study embarks on the systematic development of lung lipid-modified chitosan nanoparticles for inhalation of voriconazole. Following the principles of quality by design, the chitosan nanoparticles were prepared by the ionic gelation method and further coated with the major lung lipid by the precipitation method. The factor screening studies were performed by fractional factorial design, followed by optimization of the nanoparticles by Box-Behnken design. The optimized formulation has a particle size range of 170-180 nm, PDI 0.3-0.4, zeta potential 14-17, entrapment efficiency 45-50% and drug loading of 3-5%. The presence of a lipid coating was confirmed by FESEM, FTIR, and XRD. Furthermore, the nanoparticles were found to be safe up to 40 µg/ml on A549 and Calu-3 cell lines. The quantitative and qualitative uptake studies also revealed the uptake of nanoparticles into lung epithelial cells. Moreover, the data from Spraytec and next-generation impactor studies confirmed the deposition of nanoparticles in the lower airways. Also, the interaction of nanoparticles with DPPC monolayers signifies their biocompatibility with the lungs. Overall, the study describes the methodology and potential of lipid-coated chitosan nanoparticles as a futuristic inhalation nanomedicine for the management of pulmonary aspergillosis.

Keywords: dipalmitoylphosphatidylcholine, nebulization, DPPC monolayers, quality-by-design

Procedia PDF Downloads 116
1732 Treatment of Non-Small Cell Lung Cancer (NSCLC) With Activating Mutations Considering ctDNA Fluctuations

Authors: Moiseenko F. V., Volkov N. M., Zhabina A. S., Stepanova E. O., Kirillov A. V., Myslik A. V., Artemieva E. V., Agranov I. R., Oganesyan A. P., Egorenkov V. V., Abduloeva N. H., Aleksakhina S. Yu., Ivantsov A. O., Kuligina E. S., Imyanitov E. N., Moiseyenko V. M.

Abstract:

Analysis of ctDNA in patients with NSCLC is an emerging biomarker approach. Multiple research efforts involving quantitative or at least qualitative analysis before and during the first period of treatment with TKIs have shown the prognostic value of ctDNA clearance. Still, these important results are not incorporated into clinical standards. We evaluated the role of ctDNA in EGFR-mutated NSCLC receiving a first-line TKI. Firstly, we analyzed sequential plasma samples from 30 patients that were collected before intake of the first tablet (at baseline) and at 6, 12, 24, 36, and 48 hours after the "starting point." The EGFR-M+ allele was measured by ddPCR. Afterward, we included sequential qualitative analysis of ctDNA with the cobas® EGFR Mutation Test v2 in 99 NSCLC patients before the first dose, after 2 and 4 months of treatment, and on progression. Early response analysis showed a decline in the EGFR-M+ level in plasma within the first 48 hours of treatment in 11 subjects. All these patients showed an objective tumor response. 10 patients showed either elevation of the EGFR-M+ plasma concentration (n = 5) or stable content of circulating EGFR-M+ after the start of therapy (n = 5); only 3 of these patients achieved an objective response (p = 0.026 when compared to the former group). The rapid decline of plasma EGFR-M+ DNA concentration also predicted longer PFS (13.7 vs. 11.4 months, p = 0.030). Long-term ctDNA monitoring showed clinically significant heterogeneity of EGFR-mutated NSCLC treated with first-line TKIs in terms of progression-free and overall survival. Patients without detectable ctDNA at baseline (N = 32) have the best prognosis in terms of treatment duration (PFS: 24.07 [16.8-31.3] and OS: 56.2 [21.8-90.7] months). Those who achieve clearance after two months of TKI therapy (N = 42) have indistinguishably good PFS (19.0 [13.7-24.2]). Individuals who retain ctDNA after 2 months (N = 25) have the worst prognosis (PFS: 10.3 [7.0-13.5], p = 0.000). 9/25 patients did not develop ctDNA clearance at 4 months, with no statistical difference in PFS from those without clearance at 2 months. The prognostic heterogeneity of EGFR-mutated NSCLC should be taken into consideration in planning further clinical trials and optimizing patient outcomes.

Keywords: NSCLC, EGFR, targeted therapy, ctDNA, prognosis

Procedia PDF Downloads 31
1731 The Onset of Ironing during Casing Expansion

Authors: W. Assaad, D. Wilmink, H. R. Pasaribu, H. J. M. Geijselaers

Abstract:

Shell has developed a mono-diameter well concept for oil and gas wells, as opposed to the traditional telescopic well design. A mono-diameter well design allows a well to have a single inner diameter from the surface all the way down to the reservoir, increasing production capacity and reducing material cost and environmental footprint. This is achieved by expanding the liners (casing strings) concerned using an expansion tool (e.g., a cone). Since the well is drilled in stages and liners are inserted to support the borehole, overlap sections between consecutive liners exist, which must also be expanded. At an overlap, the previously inserted casing, which can be expanded or unexpanded, is called the host casing, and the newly inserted casing is called the expandable casing. When the cone enters the overlap section, the expandable casing is expanded against a host casing, a cured cement layer and the formation. In overlap expansion, ironing, or lengthening, may appear instead of shortening in the expandable casing when the pressure exerted by the host casing, cured cement layer and formation exceeds a certain limit. This pressure is related to cement strength, thickness of the cement layer, the mechanical properties of the host casing material, host casing thickness, formation type and formation strength. Ironing can cause complications that hinder the deployment of the technology; therefore, understanding ironing is essential. A physical model was built in-house to calculate expansion forces, stresses, strains and post-expansion casing dimensions under different conditions. In this study, only free casing expansion and overlap expansion of two casings are addressed, while the cement and formation will be incorporated in a future study. Since the axial strain can be predicted by the physical model, the onset of ironing can be confirmed. In addition, this model helps in understanding ironing and the parameters influencing it. Finally, the physical model is validated with Finite Element (FE) simulations and small-scale experiments. The results of the study confirm that high pressure leads to ironing when the casing is expanded in tension mode.

Keywords: casing expansion, cement, formation, metal forming, plasticity, well design

Procedia PDF Downloads 159
1730 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that a number of impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact locations of the structure, having a shape from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
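
The following Python sketch illustrates discretized deconvolution with Tikhonov regularization in the spirit of the approach described above; the transfer function and signals are synthetic stand-ins, and no windowing, L-curve, or GCV parameter selection is included.

```python
# Hedged sketch of impact-force reconstruction by discretized convolution plus
# Tikhonov regularization. The impulse response h is a made-up stand-in for
# the panel's transfer function; lam is fixed rather than L-curve/GCV selected.
import numpy as np

def tikhonov_deconvolve(y, h, lam):
    # y = H f with H[i, j] = h[i - j]; solve min ||H f - y||^2 + lam ||f||^2,
    # whose closed form is f = (H^T H + lam I)^{-1} H^T y.
    n = len(y)
    H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

t = np.linspace(0.0, 1.0, 200)
f_true = np.where(t < 0.1, np.sin(np.pi * t / 0.1), 0.0)   # half-sine impact
h = np.exp(-5.0 * t) * np.sin(40.0 * t)                    # assumed response
y = np.convolve(f_true, h)[:200]
y += 0.01 * np.random.default_rng(0).standard_normal(200)  # measurement noise
f_rec = tikhonov_deconvolve(y, h, lam=1e-2)                # reconstructed force
```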

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 511
1729 The Influence of Contextual Factors on Long-Term Contraceptive Use in East Java

Authors: Ni'mal Baroya, Andrei Ramani, Irma Prasetyowati

Abstract:

Access to reproductive health services, including safe and effective contraception, is a human right regardless of social stratum and residence. In addition to individual factors, family and contextual factors are also believed to drive the use of contraceptive methods. This study aimed to assess the determinants of long-term contraceptive methods (LTCM) by considering the factors at both the individual level and the contextual level, thereby providing basic information for programs to increase the prevalence of LTCM (known in Indonesia as MKJP) in East Java. The research, which used a cross-sectional design, utilized Riskesdas 2013 data, particularly for East Java Province, for further multilevel modeling of LTCM use. The sample of this study consisted of 20,601 married women who were not pregnant, drawn by probability sampling following the sampling technique of Riskesdas 2013. Independent variables at the individual level were education, age, occupation, access to family planning (KB) services, economic status and residence. Independent variables at the district level were the Human Development Index (HDI, henceforth IPM) of each district of East Java Province, the ratio of field officers, the ratio of midwives, the ratio of community health centers and the ratio of doctors. The dependent variable was the use of a long-term contraceptive method (LTCM or MKJP). The data were analyzed using the chi-square test and Pearson product-moment correlation. The multivariable analysis used multilevel logistic regression with 95% Confidence Intervals (CI), a significance level of p < 0.05 and 80% test power. The results showed that a low contraceptive prevalence rate (CPR) for LTCM was concentrated in districts on Madura Island and the north coast. Women who were 25 to 35 or more than 35 years old, had at least high school education, were working, and had middle-class social status were more likely to use LTCM (MKJP). Low IPM and a low PLKB (field officer) ratio had implications for a poor CPR of LTCM/MKJP.
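
To illustrate the kind of two-level model described above, here is a minimal Python sketch of a multilevel (random-intercept) logistic regression with statsmodels; the data are synthetic and the variable names illustrative, and the original analysis was not necessarily performed with this tool.

```python
# Hedged sketch of a two-level logistic regression of LTCM use: women at
# level 1 nested in districts at level 2, with a district random intercept.
# All data below are synthetic; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "district": rng.integers(0, 20, n),      # level-2 identifier
    "educ_years": rng.integers(0, 16, n),    # individual-level covariates
    "age": rng.integers(15, 50, n),
    "urban": rng.integers(0, 2, n),
})
dist_effect = rng.normal(0, 0.5, 20)         # true district intercepts
logit = -2 + 0.1 * df.educ_years + 0.02 * df.age + dist_effect[df.district]
df["ltcm"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = BinomialBayesMixedGLM.from_formula(
    "ltcm ~ educ_years + age + urban",
    {"district": "0 + C(district)"},         # random intercept per district
    df,
)
print(model.fit_vb().summary())              # variational Bayes fit
```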

Keywords: multilevel, long-term contraceptive methods, East Java, contextual factor

Procedia PDF Downloads 221
1728 Research Related to the Academic Learning Stress, Reflected into PubMed Website Publications

Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau, Dong Hun Kwak, Nicolae-Alexandru Colceriu

Abstract:

Background: The academic environment has, over time, given rise to research subjects that have resulted in many publications. One of these subjects relates to learning stress. Thus far, the PubMed website displays an impressive number of papers related to academic stress. Aims: Through this study, we aimed to evaluate the research concerning academic learning stress (ALS) by a retrospective analysis of PubMed publications. Methods: We evaluated ALS considering: a) different keywords - ‘academic stress’ (AS), ‘academic stressors’ (ASs), ‘academic learning stress’ (ALS), ‘academic student stress’ (ASS), ‘academic stress college’ (ASC), ‘medical academic stress’ (MAS), ‘non-medical academic stress’ (NMAS), ‘student stress’ (SS), ‘nursing student stress’ (NS), ‘college student stress’ (CSS), ‘university student stress’ (USS), ‘medical student stress’ (MSS), ‘dental student stress’ (DSS), ‘non-medical student stress’ (NMSS), ‘learning students stress’ (LSS), ‘medical learning student stress’ (MLSS), ‘non-medical learning student stress’ (NMLSS); b) the yearly average per decade; c) some selection filters provided by the PubMed website: Article types - Journal Article (JA), Clinical Trial (CT), Review (R); Species - Humans (H); Sex - Male (M) and Female (F); Ages - 13-18, 19-24, 19-44. Statistical evaluation was made on the basis of the Student test. Results: There were differences between keywords across all filters. Nevertheless, for all keywords the following was noted: the majority of studies indicated that subjects were humans; there were no important differences between the numbers of M and F subjects; the age of participants was mentioned only in some studies, predominantly those with teenagers and subjects between 19-24 years. Conclusions: 1) PubMed publications document that concern for the research field of academic stress spans 56 years and has materialized in more than 5,010 papers. 2) The number of publications in the field of academic stress varies depending on the selected keywords: those with a general framing (AS, ASs, ALS, ASS, SS, USS, LSS) are more numerous than those with a specific framing (ASC, MAS, NMAS, NS, CSS, MSS, DSS, NMSS, MLSS, NMLSS); those concerning the academic medical environment (MAS, NS, MSS, DSS, MLSS) prevailed compared to the non-medical environment (NMAS, NMSS, NMLSS). 3) Most of the publications are classified as JA, of which a small percentage are CT and R. 4) Most of the academic stress studies were conducted with subjects both M and F, mostly aged under 19 years and between 19-24 years.
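
For readers who want to reproduce such keyword counts programmatically, the sketch below queries PubMed through NCBI E-utilities via Biopython; the study itself used the PubMed website, and the e-mail address is a placeholder.

```python
# Hedged sketch: retrieving PubMed hit counts for keyword sets like those
# listed above, via NCBI E-utilities (Biopython). Illustration only; the
# study's counts came from the PubMed website interface.
from Bio import Entrez

Entrez.email = "you@example.org"   # required by NCBI; placeholder address

for term in ["academic stress", "student stress", "medical student stress"]:
    handle = Entrez.esearch(db="pubmed", term=f'"{term}"[All Fields]')
    count = int(Entrez.read(handle)["Count"])   # total matching records
    print(term, count)
```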

Keywords: academic stress, student stress, academic learning stress, medical student stress

Procedia PDF Downloads 533
1727 Predicting Child Attachment Style Based on Positive and Safe Parenting Components and Mediating Maternal Attachment Style in Children With ADHD

Authors: Alireza Monzavi Chaleshtari, Maryam Aliakbari

Abstract:

Objective: The aim of this study was to predict child attachment style based on a positive and safe combined parenting method, mediated by maternal attachment styles, in children with attention deficit hyperactivity disorder (ADHD). Method: The present study used a descriptive, correlational design with structural equations and was applied in terms of purpose. The population of this study comprises all children with ADHD living in Chaharmahal and Bakhtiari province and their mothers. The sample comprised 165 children with ADHD in Chaharmahal and Bakhtiari province together with their mothers, selected by purposive sampling based on the inclusion criteria. The obtained data were analyzed in two sections, descriptive and inferential statistics. In the descriptive section, the statistical indices of mean, standard deviation, and frequency distribution tables and graphs were used. In the inferential section, according to the nature of the hypotheses and objectives of the research, the data were analyzed using Pearson correlation coefficients, the bootstrap test, and a structural equation model. Findings: The results of structural equation modeling showed that the research model fits and that a positive and safe combined parenting style, mediated by the maternal attachment style, has an indirect effect on the child's attachment style. A positive and safe combined parenting style also has a direct relationship with the child's attachment style and with the maternal attachment style. Conclusion: The results and findings of the present study show that there is a significant relationship between positive and safe combined parenting methods and the attachment styles of children with ADHD, with maternal attachment style as mediator. Therefore, it can be expected that parents using a positive and safe combined parenting method can effectively foster secure attachment in children with ADHD.
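A minimal sketch of the bootstrap test of an indirect (mediated) effect of the kind reported here, assuming a DataFrame with hypothetical columns `parenting` (predictor), `mother_attach` (mediator), and `child_attach` (outcome); this is an illustration of the technique, not the authors' exact model.

```python
# Bootstrap the indirect effect a*b: parenting -> mother_attach -> child_attach.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adhd_attachment.csv")  # hypothetical file
rng = np.random.default_rng(42)

n_boot = 5000
indirect = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(df), len(df))   # resample rows with replacement
    boot = df.iloc[idx]
    a = smf.ols("mother_attach ~ parenting", boot).fit().params["parenting"]
    b = smf.ols("child_attach ~ mother_attach + parenting",
                boot).fit().params["mother_attach"]
    indirect[i] = a * b

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation
```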

Keywords: child attachment style, positive and safe parenting, maternal attachment style, ADHD

Procedia PDF Downloads 38
1726 Association of miR-196a Expression in Esophageal Tissue with Barrett's Esophagus and Esophageal Adenocarcinoma

Authors: Petra Borilova Linhartova, Michaela Ruckova, Sabina Sevcikova, Natalie Mlcuchova, Jan Bohm, Katerina Zukalova, Monika Vlachova, Jiri Dolina, Lumir Kunovsky, Radek Kroupa, Zdenek Pavlovsky, Zdenek Danek, Tereza Deissova, Lydie Izakovicova Holla, Ondrej Slaby, Zdenek Kala

Abstract:

Esophageal adenocarcinoma (EAC) is a highly aggressive malignancy that frequently develops from Barrett's esophagus (BE), a premalignant pathologic change occurring at the lower end of the esophagus. Specific microRNAs (miRNAs), small non-coding RNAs that function as posttranscriptional regulators of gene expression, have repeatedly been shown to play key roles in the pathogenesis of these diseases. This pilot study aimed to analyze four selected miRNAs in esophageal tissue from healthy controls (HC) and patients with reflux esophagitis (RE)/BE/EAC, as well as to compare expression at the site of Barrett's mucosa/adenocarcinoma and in healthy esophageal tissue outside the area of the main pathology in patients with BE/EAC. In this pilot study, 22 individuals (3 HC, 8 RE, 5 BE, 6 EAC) were included and endoscopically examined. RNA was isolated from fresh-frozen esophageal tissue (stored in RNAlater™ Stabilization Solution at −70°C) using the AllPrep DNA/RNA/miRNA Universal Kit. Subsequent RT-qPCR analysis was performed using selected TaqMan MicroRNA Assays for miR-21, miR-34a, miR-196a, and miR-196b, with RNU44 as endogenous control. While the expression of miR-21 in the esophageal tissue with the main pathology was decreased in BE and EAC patients in comparison to the group of HC and RE patients (p=0.01), the expression of miR-196a was increased in the BE and EAC patients (p<0.01). Correlations between the tissue expression of these miRNAs and the severity of diagnosis were observed (p<0.05). In addition, miR-196a was significantly more expressed at the site of the main pathology than in paired adjacent esophageal tissue in BE and EAC patients (p<0.01). In conclusion, our pilot results showed that miR-196a, which regulates proliferation, invasion, and migration (and was previously associated with esophageal squamous cell carcinoma and marked as a potential therapeutic target), could also serve as a diagnostic tissue biomarker for BE and EAC.
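The abstract reports RT-qPCR with RNU44 as endogenous control; one common way to analyze such data is the 2^-ΔΔCt (Livak) method, sketched below with illustrative Ct values that are not the study's data.

```python
# Relative expression by 2^-ΔΔCt: target normalized to a reference gene,
# then to a control (healthy) tissue.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_sample = ct_target - ct_ref              # ΔCt in the pathology sample
    delta_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt in healthy tissue
    return 2.0 ** -(delta_sample - delta_control)  # fold change vs control

# e.g., miR-196a (target) against RNU44 (endogenous control); values invented:
fold = relative_expression(ct_target=24.1, ct_ref=21.0,
                           ct_target_ctrl=27.3, ct_ref_ctrl=21.2)
print(f"fold change vs healthy tissue: {fold:.2f}")  # >1 means upregulated
```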

Keywords: microRNA, Barrett's esophagus, esophageal adenocarcinoma, biomarker

Procedia PDF Downloads 82
1725 The Effect of Foot Progression Angle on Human Lower Extremity

Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae

Abstract:

The growing number of obese patients in aging societies has led to an increase in the number of patients with medial knee osteoarthritis (OA). Artificial joint insertion is the most common treatment for medial knee OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and risky, and it is not a suitable way to prevent the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is a non-surgical intervention that restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM), facilitating lateral distribution of the load on the medial knee cartilage. Numerous studies have measured KAM at various foot progression angles (FPA), and KAM data can be obtained by motion analysis. However, variations in stress within the knee cartilage cannot be directly observed or evaluated by such KAM-measurement experiments. Therefore, this study applied motion analysis at the major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA and, to evaluate the effects of FPA on the human lower extremity, employed the finite element (FE) method. Three types of gait analysis (toe-in, toe-out, and baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained from force plates. The forces in the major muscles were computed using the GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower-extremity FE model. FE analyses of the three gait simulations were performed based on the calculated muscle forces and GRF. Comparing the FE results at the 1st peak across gait types, we observed that the maximum stress during toe-in gait was lower than in the other types. This matches the trend exhibited by KAM measured through motion analysis in other studies, indicating that the progression of medial knee OA could be suppressed by adopting toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even when posture changes; therefore, other gait simulations or various lower-extremity motions can be easily analyzed using the same approach.
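A minimal sketch of the external KAM computation from force-plate data that such motion-analysis pipelines typically perform; the simplified frontal-plane convention, sign convention, and variable names are illustrative assumptions, not the authors' FE workflow.

```python
# External knee moment as r x F, with r the lever arm from the knee joint
# center to the center of pressure; one component is the adduction moment.
import numpy as np

def knee_adduction_moment(grf, cop, knee_center):
    r = cop - knee_center       # lever arm (m), one row per frame, shape (n, 3)
    moment = np.cross(r, grf)   # moment about the knee center (N*m)
    return moment[:, 1]         # anterior-posterior (y) axis = frontal plane

# One stance phase, 101 frames (illustrative arrays):
grf = np.zeros((101, 3)); grf[:, 2] = 700.0    # vertical GRF ~ body weight (N)
cop = np.tile([0.05, 0.10, 0.0], (101, 1))     # center of pressure (m)
knee = np.tile([0.00, 0.10, 0.45], (101, 1))   # knee joint center (m)

kam = knee_adduction_moment(grf, cop, knee)
print("peak |KAM| (N*m):", np.abs(kam).max())
```

The 1st peak, mid-stance, and 2nd peak values of this curve are the gait points at which the abstract compares FE stresses across FPA conditions.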

Keywords: finite element analysis, gait analysis, human model, motion capture

Procedia PDF Downloads 314
1724 Analysis of the Impact of Suez Canal on the Robustness of Global Shipping Networks

Authors: Zimu Li, Zheng Wan

Abstract:

The Suez Canal plays an important role in global shipping networks and is one of the most frequently used waterways in the world. The canal obstruction by the ship Ever Given in March 2021, however, completely blocked the Suez Canal for a week and caused significant disruption to world trade. It is therefore very important to quantitatively analyze the impact of the accident on the robustness of the global shipping network. Current research on maritime transportation networks, however, is usually limited to local or small-scale networks in a particular region. Based on complex network theory, this study establishes a global shipping complex network covering 2,713 nodes and 137,830 edges using real 2018 trajectory data from the global ship automatic identification system (AIS). Two attack modes, deliberate (Suez Canal blocking) and random, are defined to calculate the changes in node degree, eccentricity, clustering coefficient, network density, number of isolated nodes, betweenness centrality, and closeness centrality under each mode, and to quantitatively analyze the actual impact of the Suez Canal blocking on the robustness of the global shipping network. The robustness analysis shows that the Suez Canal blocking was more destructive to the shipping network than random attacks of the same scale. Network connectivity and accessibility decreased significantly, and the decline diminished with the distance between a port and the canal, exhibiting distance attenuation. This study further analyzes the impact of the blocking of the Suez Canal on Chinese ports and finds that it significantly interferes with China's shipping network and seriously affects China's normal trade activities. Finally, the impact on the global supply chain is analyzed, and it is found that blocking the canal seriously damages its normal operation.
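A minimal sketch of the deliberate-vs-random attack comparison on a shipping graph with networkx; the edge-list file and the node set standing in for canal transit are illustrative assumptions, not the study's AIS pipeline.

```python
# Compare network metrics under a targeted (canal) and a random node removal.
import random
import networkx as nx

G = nx.read_edgelist("shipping_edges_2018.txt")  # hypothetical AIS-derived edges

def robustness_metrics(graph):
    return {
        "density": nx.density(graph),
        "isolates": nx.number_of_isolates(graph),
        "clustering": nx.average_clustering(graph),
    }

canal_nodes = ["Suez", "Port Said"]  # illustrative canal-transit ports
deliberate = G.copy(); deliberate.remove_nodes_from(canal_nodes)
randomized = G.copy()
randomized.remove_nodes_from(random.sample(list(G.nodes), len(canal_nodes)))

for name, g in [("baseline", G), ("canal blocked", deliberate),
                ("random attack", randomized)]:
    print(name, robustness_metrics(g))
```

Betweenness and closeness centrality, as in the abstract, would be computed the same way per scenario (nx.betweenness_centrality, nx.closeness_centrality) and compared node by node.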

Keywords: global shipping networks, ship AIS trajectory data, main channel, complex network, eigenvalue change

Procedia PDF Downloads 144
1723 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model

Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers

Abstract:

Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. Injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolutions of porosity, permeability, and dissolution patterns. To model the changes in porosity and permeability, the Kozeny-Carman relation K ∝ φ^n is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relation used between conductivity and pore-volume change. In fact, at the single-pore scale, this relationship is the result of the pore-shape development due to dissolution. We have prepared a new reactive transport model for a single pore, which simulates the complex chemical reactions of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at the single-pore scale. We use the COMSOL Multiphysics package 5.3 for the simulation; COMSOL applies the arbitrary Lagrangian-Eulerian (ALE) method to the free-moving domain boundary. We examined the effect of flow rate on the evolution of single-pore shape profiles due to calcite dissolution, using three flow rates to cover diffusion-dominated and advection-dominated transport regimes. In diffusion-dominated flow (Pe = 0.037 and 0.37), the fluid becomes less reactive along the pore length, producing non-uniform pore shapes. For advection-dominated flow (Pe = 3.75), however, the high fluid velocity keeps the fluid relatively more reactive towards the end of the pore, yielding a uniform pore shape. Different pore shapes, in terms of inlet opening versus overall pore opening, affect the relation between changing volume and conductivity. We have therefore related the pore shape to the Pe number, which controls the transport regime, and for every Pe number we have derived the relation between conductivity and porosity. These relations will be used in pore network models to obtain the porosity and permeability variation.
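A minimal sketch of how an exponent n in K ∝ φ^n can be recovered from paired porosity/permeability values by least squares on log-transformed data; the arrays below are synthetic, not the study's simulation output.

```python
# Fit the Kozeny-Carman exponent n from porosity/permeability pairs:
# log K = n * log(phi) + const, so n is the slope in log-log space.
import numpy as np

phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # porosity (synthetic)
K = 1e-15 * (phi / 0.10) ** 4.2                  # permeability in m^2 (synthetic)

n, log_c = np.polyfit(np.log(phi), np.log(K), 1) # slope = exponent n
print(f"fitted exponent n = {n:.2f}")            # recovers ~4.2 here
```

In the framework the abstract describes, a separate fit of this kind would be made for each Pe regime, since the dissolution-driven pore shape (and hence the conductivity-porosity relation) differs between diffusion- and advection-dominated flow.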

Keywords: single pore, reactive transport, calcite system, moving boundary

Procedia PDF Downloads 347
1722 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are often time-to-event, so-called survival data. The importance of robust models in this context is to compare the effects of randomized experimental groups in a way that carries a causal interpretation. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received considerable attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide application and popularity, maximum likelihood estimation is quite complex and burdensome for the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. To ease this complexity, we propose modified estimating equations. After intuitive estimation procedures, the consistency and asymptotic properties of the estimators were derived, and the finite-sample performance of the proposed model was illustrated via simulation studies and the Stanford heart transplant data. In summary, covariate bias was adjusted by estimating the density function of the truncation variable, which was also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm was described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect was derived as the ratio of the cumulative hazard functions of the active and passive arms, after adjusting for the bias introduced into the model by the truncation variable.
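A minimal sketch of the cumulative-hazard-ratio idea on left-truncated, right-censored data, using lifelines' Nelson-Aalen estimator as a simple stand-in for the paper's semiparametric transformation estimator; the CSV file and column names are illustrative assumptions.

```python
# Per-arm Nelson-Aalen cumulative hazards with delayed entry (left truncation),
# then their ratio at a chosen time as a crude arm-contrast.
import pandas as pd
from lifelines import NelsonAalenFitter

df = pd.read_csv("stanford_heart.csv")  # hypothetical: time, event, entry, treated

fitters = {}
for arm, sub in df.groupby("treated"):
    naf = NelsonAalenFitter()
    # `entry` encodes left truncation: subjects enter the risk set late
    naf.fit(sub["time"], event_observed=sub["event"], entry=sub["entry"])
    fitters[arm] = naf

t = 365.0  # evaluate at one year (illustrative)
ratio = (fitters[1].cumulative_hazard_at_times(t).iloc[0]
         / fitters[0].cumulative_hazard_at_times(t).iloc[0])
print(f"cumulative hazard ratio at t={t}: {ratio:.2f}")
```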

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 102
1721 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the technique most commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed: network topologies such as small-world, distance-based, and scale-free networks may be utilized to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized; these include genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties (the stylized facts) of stock returns used for the calibration and validation of ASMs are also discussed. We further review the major related previous studies and categorize the approaches they utilize. Finally, research directions and potential research questions are discussed: ASM research may focus on the macro level by analyzing market dynamics, or on the micro level by investigating the wealth distribution of the agents.
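A minimal sketch of the agent-based market mechanism such surveys describe: heterogeneous fundamentalist and chartist agents submit demands with bounded-rational noise, and a market maker adjusts the price on excess demand. All parameters and the agent mix are illustrative assumptions, not a model from the surveyed literature.

```python
# Tiny fundamentalist/chartist ASM; checks one stylized fact (fat tails).
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 100, 1000
fundamental = 100.0                    # fixed fundamental value
price = np.empty(n_steps); price[0] = 100.0
is_fund = rng.random(n_agents) < 0.5   # fundamentalist (True) or chartist (False)
noise = 0.5                            # bounded-rationality noise scale

for t in range(1, n_steps):
    trend = price[t-1] - price[t-2] if t >= 2 else 0.0
    # fundamentalists buy under-priced stock; chartists chase the trend
    demand = np.where(is_fund, fundamental - price[t-1], trend)
    demand = demand + rng.normal(0, noise, n_agents)
    price[t] = price[t-1] + 0.01 * demand.mean()   # market-maker price impact

r = np.diff(np.log(price))
kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
print(f"excess kurtosis of returns (fat tails if > 0): {kurt:.2f}")
```

Replacing the fixed agent mix with learning (e.g., agents switching strategy by past profit) is the usual next step toward the adaptive-agent designs the survey catalogues.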

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 328