Search results for: equivalent circuit models
4910 Determination of the Axial-Vector from an Extended Linear Sigma Model
Authors: Tarek Sayed Taha Ali
Abstract:
The dependence of the axial-vector coupling constant gA on the quark masses has been investigated in the framework of the extended linear sigma model. The field equations have been solved in the mean-field approximation. Our study shows a better fit to the experimental data compared with the existing models.
Keywords: extended linear sigma model, nucleon properties, axial coupling constant, physics
Procedia PDF Downloads 446
4909 Open Reading Frame Marker-Based Capacitive DNA Sensor for Ultrasensitive Detection of Escherichia coli O157:H7 in Potable Water
Authors: Rehan Deshmukh, Sunil Bhand, Utpal Roy
Abstract:
We report the label-free electrochemical detection of Escherichia coli O157:H7 (ATCC 43895) in potable water using a DNA probe as a sensing molecule targeting the open reading frame marker. The indium tin oxide (ITO) surface was modified with organosilane, and glutaraldehyde was applied as a linker to fabricate the DNA sensor chip. Non-Faradaic electrochemical impedance spectroscopy (EIS) behavior was investigated at each step of sensor fabrication using cyclic voltammetry, impedance, phase, relative permittivity, capacitance, and admittance. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed significant changes in surface topography during DNA sensor chip fabrication. The decrease in the percentage of pinholes from 2.05 (bare ITO) to 1.46 (after DNA hybridization) suggested the capacitive behavior of the DNA sensor chip. The results of the non-Faradaic EIS studies of the DNA sensor chip showed a systematic declining trend of the capacitance as well as the relative permittivity upon DNA hybridization. The DNA sensor chip exhibited linearity in the range of 0.5 to 25 pg/10 mL for E. coli O157:H7 (ATCC 43895). The limit of detection (LOD) at 95% confidence, estimated by logistic regression, was 0.1 pg DNA/10 mL of E. coli O157:H7 (equivalent to 13.67 CFU/10 mL) with a p-value of 0.0237. Moreover, the fabricated DNA sensor chip used for detection of E. coli O157:H7 showed no significant cross-reactivity with closely and distantly related bacteria such as Escherichia coli MTCC 3221, Escherichia coli O78:H11 MTCC 723 and Bacillus subtilis MTCC 736. Consequently, the results obtained in our study demonstrate the possible application of the developed DNA sensor chips for detecting E. coli O157:H7 ATCC 43895 in real water samples as well.
Keywords: capacitance, DNA sensor, Escherichia coli O157:H7, open reading frame marker
Procedia PDF Downloads 144
4908 Experimental Study of Damage in a Composite Structure by Vibration Analysis: Glass/Polyester
Authors: R. Abdeldjebar, B. Labbaci, L. Missoum, B. Moudden, M. Djermane
Abstract:
The basic components of a composite material make it very sensitive to damage, which calls for reliable and efficient damage detection techniques. This work focuses on the detection of damage by vibration analysis, whose main objective is to exploit the dynamic response of a structure to detect and understand the damage. The experimental results are compared with those predicted by numerical models to confirm the effectiveness of the approach.
Keywords: experimental, composite, vibration analysis, damage
Procedia PDF Downloads 674
4907 Pomegranate Attenuated Levodopa-Induced Dyskinesia and Dopaminergic Degeneration in MPTP Mice Models of Parkinson’s Disease
Authors: Mahsa Hadipour Jahromy, Sara Rezaii
Abstract:
Parkinson’s disease (PD) results primarily from the death of dopaminergic neurons in the substantia nigra. Soon after the discovery of levodopa and its beneficial effects in chronic administration, debilitating involuntary movements were observed, termed levodopa-induced dyskinesia (LID), with poorly understood pathogenesis. Polyphenol-rich compounds, like pomegranate, have provided neuroprotection in several animal models of brain diseases. In the present work, we investigated whether pomegranate has preventive effects on 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced dopaminergic degeneration and the potential to diminish LID in mice. The mouse model of PD was induced by MPTP (30 mg/kg daily for five consecutive days). To induce a mouse model of LID, validated PD mice were treated with levodopa (50 mg/kg, i.p.) for 15 days. Then the effects of chronic co-administration of pomegranate juice (20 ml/kg) with levodopa, continuing for 10 days, were evaluated. Behavioural tests were performed in all groups every other day, including abnormal involuntary movements (AIMs), forelimb adjusting steps, cylinder, and catatonia tests. Finally, brain tissue sections were prepared to study substantia nigra changes and dopamine neuron density after treatments. With this MPTP regimen, significant movement disorders were revealed in the AIMs tests, and there was a reduction in dopamine striatal density. Levodopa attenuated their loss caused by MPTP; however, with chronic administration, dyskinesia was observed in the forelimb adjusting step and cylinder tests. Besides, catatonia was observed in some cases. Chronic pomegranate co-administration significantly improved LID in both tests and reduced dopaminergic loss in the substantia nigra. These data indicate that pomegranate might be a good adjunct for preserving dopaminergic neurons in the substantia nigra and reducing LID in mice.
Keywords: levodopa-induced dyskinesia, MPTP, Parkinson’s disease, pomegranate
Procedia PDF Downloads 493
4906 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa
Abstract:
This study evaluates the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS 430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6 and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied on a solid water-equivalent phantom along with a 2D array dose detection system. The doses calculated using the 3D treatment planning system PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g. wedges, blocks, MLC-shaped fields), and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits, except for the large elongated wedged field, where the central-axis and off-axis errors are 0.2% and 0.5%, respectively, and for off-planned and off-axis elongated fields, where the errors in the region outside the central axis of the beam are 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the accepted tolerance level as recommended. Differences between dose values predicted by the TPS and measured values at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam
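The tolerance comparison described above reduces to a simple percent-difference check between TPS-calculated and measured doses. A minimal sketch follows; the normalization choice and tolerance value are illustrative assumptions, not taken from the study:

```python
def percent_dose_error(calculated, measured, reference):
    # percent difference between TPS-calculated and measured dose,
    # normalized to a reference dose (e.g. the measured central-axis dose)
    return 100.0 * (calculated - measured) / reference

def within_tolerance(calculated, measured, reference, tolerance_pct):
    # True if the point passes the QA tolerance criterion
    return abs(percent_dose_error(calculated, measured, reference)) <= tolerance_pct
```

For example, a calculated dose of 100.2 cGy against a measured 100.0 cGy passes a 2% tolerance, while 103.0 cGy does not.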
Procedia PDF Downloads 446
4905 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves
Authors: Dmytro Zubov, Francesco Volponi
Abstract:
In this paper, the D-Wave quantum computing Ising model is used for the forecasting of positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predict heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (ratio of successful predictions to the total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant as compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). Other models with 2, 4, and 5 qubits have 20%, 3.8%, and 3.8% accuracy, respectively. The presented three-qubit forecast model is applied for the prediction of heat waves at five other locations: Aurel Vlaicu, Romania (accuracy 28.6%); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); Akhisar, Turkey (21.4%). These predictions are not ideal, but not zero. They can be used independently or together with predictions generated by other methods. The loss of human life, as well as the environmental, economic, and material damage from extreme air temperatures, could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
Keywords: heat wave, D-wave, forecast, Ising model, quantum computing
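The abstract does not give the model's equations; as a hedged illustration of how an Ising-style forecast over a few days of history can work, the sketch below encodes each past day as a spin and chooses the forecast spin that yields the lower Ising energy. All weights, couplings, and thresholds here are hypothetical, not the paper's fitted parameters:

```python
import numpy as np

def ising_energy(s, h, J):
    # E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j  (J symmetric, zero diagonal)
    return -h @ s - 0.5 * s @ J @ s

def predict_heat_wave(history, threshold, h, J):
    # encode each past day as a spin: +1 if the day exceeded the heat-wave threshold
    s = np.where(np.asarray(history) > threshold, 1, -1)
    # compare the energy of appending a "heat wave" spin vs a "no event" spin
    e_hot = ising_energy(np.append(s, 1), h, J)
    e_cold = ising_energy(np.append(s, -1), h, J)
    return bool(e_hot < e_cold)  # the lower-energy configuration wins
```

With a ferromagnetic coupling between the forecast spin and the three history spins, three hot days predict another hot day, matching the intuition behind the three-qubit model.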
Procedia PDF Downloads 500
4904 Buy-and-Hold versus Alternative Strategies: A Comparison of Market-Timing Techniques
Authors: Jonathan J. Burson
Abstract:
With the rise of virtually costless, mobile-based trading platforms, stock market trading activity has increased significantly over the past decade, particularly for the millennial generation. This increased stock market attention, combined with the recent market turmoil due to the economic upset caused by COVID-19, makes the topics of market timing and forecasting particularly relevant. While the overall stock market saw an unprecedented, historically long bull market from March 2009 to February 2020, the end of that bull market reignited a search by investors for a way to reduce risk and increase return. Similar searches for outperformance occurred in the early and late 2000s as the Dotcom bubble burst and the Great Recession led to years of negative returns for mean-variance index investors. Extensive research has been conducted on fundamental analysis, technical analysis, macroeconomic indicators, microeconomic indicators, and other techniques, all using different methodologies and investment periods, in pursuit of higher returns with lower risk. The enormous variety of timeframes, data, and methodologies used by the diverse forecasting methods makes it difficult to compare the outcome of each method directly to the others. This paper establishes a process to evaluate the market-timing methods in an apples-to-apples manner based on simplicity, performance, and feasibility. Preliminary findings show that certain technical analysis models provide a higher return with lower risk when compared to the buy-and-hold method and to other market-timing strategies. Furthermore, technical analysis models tend to be easier for individual investors, both in terms of acquiring the data and analyzing it, making technical analysis-based market-timing methods the preferred choice for retail investors.
Keywords: buy-and-hold, forecast, market-timing, probit, technical analysis
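As a hedged illustration of the kind of comparison the paper describes, the sketch below contrasts buy-and-hold with one simple moving-average timing rule; the rule and window size are illustrative assumptions, not the paper's evaluated models:

```python
import numpy as np

def buy_and_hold_return(prices):
    # total return from holding continuously over the whole period
    return prices[-1] / prices[0] - 1.0

def sma_timing_return(prices, window):
    # hold the asset on day t only when the previous close is above
    # the simple moving average of the preceding `window` closes
    prices = np.asarray(prices, dtype=float)
    ret = 1.0
    for t in range(window, len(prices)):
        sma = prices[t - window:t].mean()
        if prices[t - 1] > sma:           # invested during day t
            ret *= prices[t] / prices[t - 1]
    return ret - 1.0
```

In a steadily rising market, buy-and-hold captures the full gain while the timing rule captures only the days it is invested; the reverse can hold during drawdowns, which is the trade-off the paper's apples-to-apples comparison formalizes.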
Procedia PDF Downloads 97
4903 A Study on the Construction Process and Sustainable Renewal Development of High-Rise Residential Areas in Chongqing (1978-2023)
Authors: Xiaoting Jing, Ling Huang
Abstract:
After the reform and opening up, Chongqing has formed far more high-rise residential areas than other cities in its more than 40 years of urban construction. High-rise residential areas have become one of the main modern living models in Chongqing and an important carrier reflecting the city's high quality of life. Reviewing the construction process and renewal work helps understand the characteristics of high-rise residential areas in Chongqing at different stages, clarify current development demands, and look forward to the focus of future renewal work. Based on socio-economic development and policy background, the article sorts the construction process of high-rise residential areas in Chongqing into four stages: the early experimental construction period of high-rise residential areas (1978-1996), the rapid start-up period of high-rise commodity housing construction (1997-2006), the large-scale construction period of high-rise commodity housing and public rental housing (2007-2014), and the period of renewal and renovation of high-rise residential areas and step-by-step construction of quality commodity housing (2015-present). Based on the construction demands and main construction types of each stage, the article summarizes that the construction of high-rise residential areas in Chongqing features large scale, high speed, and high density. It points out that a large number of high-rise residential areas built after 2000 will become important objects of renewal and renovation in the future. Based on existing renewal work experience, it is urgent to explore a path for sustainable renewal and development in terms of policy mechanisms, digital supervision, and renewal and renovation models, leading the high-rise living in Chongqing toward high-quality development.
Keywords: high-rise residential areas, construction process, renewal and renovation, Chongqing
Procedia PDF Downloads 68
4902 GIS-Based Spatial Modeling for Selecting New Hospital Sites Using AHP, Entropy-MAUT and CRITIC-MAUT: A Study in Rural West Bengal, India
Authors: Alokananda Ghosh, Shraban Sarkar
Abstract:
The study aims to identify suitable sites for new hospitals with critical obstetric care facilities in Birbhum, one of the vulnerable and underserved districts of Eastern India, considering six main criteria and 14 sub-criteria, using a GIS-based Analytic Hierarchy Process (AHP) and Multi-Attribute Utility Theory (MAUT) approach. The criteria were identified through field surveys and previous literature. After collecting expert decisions, a pairwise comparison matrix was prepared using the Saaty scale to calculate the weights through AHP. In contrast, objective weighting methods, i.e., Entropy and Criteria Importance Through Intercriteria Correlation (CRITIC), were used to perform the MAUT. Finally, suitability maps were prepared by weighted sum analysis. Sensitivity analyses of the AHP were performed to explore the effect of dominant criteria. Results from the AHP reveal that 'maternal death in transit', followed by 'accessibility and connectivity' and 'maternal health care service (MHCS) coverage gap', were the three important criteria with comparatively higher weighted values, whereas 'accessibility and connectivity' and 'maternal death in transit' carried more weight in the Entropy and CRITIC methods, respectively. When comparing the predicted suitability classes of these three models with the layer of existing hospitals, all except Entropy-MAUT point towards the left-over underserved areas of the existing facilities. Only 43%-67% of existing hospitals were in the moderate to lower suitability classes. Therefore, the results of the predictive models might bring valuable input to future planning.
Keywords: hospital site suitability, analytic hierarchy process, multi-attribute utility theory, entropy, criteria importance through intercriteria correlation, multi-criteria decision analysis
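The AHP weighting step described above (Saaty pairwise comparison matrix to criterion weights) can be sketched as follows. The principal-eigenvector method and the consistency-ratio check are the standard AHP formulation; the study's actual matrices and random-index values are not given here, so the example numbers are hypothetical:

```python
import numpy as np

def ahp_weights(pairwise):
    # criterion weights = normalized principal eigenvector of the Saaty matrix
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def consistency_ratio(pairwise, random_index):
    # CR = CI / RI with CI = (lambda_max - n) / (n - 1); CR < 0.1 is
    # conventionally taken as an acceptable level of judgment consistency
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = np.max(np.real(np.linalg.eigvals(A)))
    return ((lam - n) / (n - 1)) / random_index
```

A perfectly consistent matrix (entries w_i/w_j) returns the underlying weights exactly and a consistency ratio of zero.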
Procedia PDF Downloads 68
4901 Assessment of Some Biological Activities of Methanolic Crude Extract from Polygonum maritimum L.
Authors: Imad Abdelhamid El-Haci, Wissame Mazari, Fayçal Hassani, Fawzia Atik Bekkara
Abstract:
Much attention has been paid to antioxidants, which are expected to protect food and living systems from peroxidative damage. The incorporation of synthetic antioxidants in food products is under strict regulation due to the potential health hazards caused by such compounds. The use of plants as traditional health remedies is very popular and important for 80% of the world's population in African, Asian, Latin American and Middle Eastern countries, and their use is reported to have minimal side effects. In recent years, pharmaceutical companies have spent considerable time and money in developing therapeutics based upon natural products extracted from plants. On the other hand, due to the continuous emergence of antibiotic-resistant strains, there is a continual demand for new antibiotics, and chemical compounds from medicinal plants in particular are targeted by many researchers. In this light, the genus Polygonum, of the family Polygonaceae (comprising about 45 genera and 300 species), is distributed worldwide, mostly in north temperate regions. Its species have reported uses in traditional medicine, such as anti-inflammation, promoting blood circulation, treating dysentery and haemorrhage, as a diuretic, and many other uses. In our study, Polygonum maritimum (from the Algerian coast) was extracted with 80% methanol to obtain a crude extract. The P. maritimum extract (PME) had a very high total phenol content of 352.49 ± 18.03 mg/g dry weight, expressed as gallic acid equivalent. PME exhibited excellent antioxidant activity, as measured using DPPH and H2O2 scavenging assays. It also showed high antibacterial activity against the Gram-positive bacterial strains Bacillus cereus, Bacillus subtilis and Staphylococcus aureus, with an MIC of 0.12 mg/mL.
Keywords: Polygonum maritimum, crude extract, antioxidant activity, antibacterial activity
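The reported measurements follow standard assay formulas; the sketch below shows the DPPH scavenging percentage and a gallic-acid-equivalent (GAE) calculation. The calibration slope, intercept, dilution, and sample mass values are hypothetical, not the study's data:

```python
def dpph_scavenging(abs_control, abs_sample):
    # % radical scavenging = (A_control - A_sample) / A_control * 100
    return 100.0 * (abs_control - abs_sample) / abs_control

def total_phenolics_gae(absorbance, slope, intercept, dilution_ml, sample_g):
    # total phenol content (mg gallic acid equivalent per g extract) from a
    # gallic-acid calibration line: A = slope * c + intercept, with c in mg/mL
    conc_mg_per_ml = (absorbance - intercept) / slope
    return conc_mg_per_ml * dilution_ml / sample_g
```

Both are linear transformations of raw absorbance readings, which is why results are conventionally reported "expressed as gallic acid equivalent" against a calibration curve.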
Procedia PDF Downloads 311
4900 A New Model for Production Forecasting in ERP
Authors: S. F. Wong, W. I. Ho, B. Lin, Q. Huang
Abstract:
ERP has been used in many enterprises for management. The accuracy of the production forecasting module is vital to the decision making of the enterprise, and profit is affected directly. Therefore, enhancing the accuracy of the production forecasting module can also increase efficiency and profitability. To deal with large amounts of data, a suitable, reliable and accurate statistical model is necessary. LSSVM and the Grey System are the two main models studied in this paper, and a case study is used to demonstrate how the combination model improves the forecasting result.
Keywords: ERP, grey system, LSSVM, production forecasting
Procedia PDF Downloads 463
4899 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors
Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff
Abstract:
Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurate estimation of eating behavior risk factors in the primary prevention of obesity has been established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and Artificial Neural Networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid model (SEM-Artificial Neural Networks, SEM-ANN) were applied to evaluate the factors affecting eating behavior patterns among university students. 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). Then, the hybrid model was built using a multilayer perceptron (MLP) with a feedforward network topology. Levenberg-Marquardt, which is a supervised learning algorithm, was applied as the training method for the MLP. The tangent/sigmoid function was used for the hidden layer, while the linear function was applied for the output layer. The coefficient of determination (R²) and mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method. Using the hybrid model, the optimal network was found at MLP 3-17-8; the R² of the model increased by 27%, while the MSE decreased by 9.6%. Moreover, it was found which of these factors significantly affected healthy and unhealthy eating behavior patterns. The p-value was reported to be less than 0.01 for most of the paths. Conclusion/Importance: Thus, a hybrid approach could be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software able to predict models with the highest accuracy.
Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns
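The evaluation metrics named in the abstract (R² and MSE) and the described tanh-hidden/linear-output feedforward topology can be sketched as below; the weights shown in the usage note are zero placeholders, not the trained SEM-ANN model:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # feedforward MLP: tanh hidden layer, linear output layer
    hidden = np.tanh(x @ W1 + b1)
    return hidden @ W2 + b2

def r2_and_mse(y_true, y_pred):
    # coefficient of determination and mean square error
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, mse
```

A 3-17-8 network corresponds to `W1` of shape (3, 17) and `W2` of shape (17, 8), i.e., three predictors, seventeen hidden units, and eight eating-behavior outputs.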
Procedia PDF Downloads 156
4898 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. Therefore, it is necessary to have an automatic analysis tool taking into account the word sense and not only the morpho-syntactic category. Moreover, they are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. 
As an extension, we focus on language modeling using Recurrent Neural Networks (RNN); given that morphological analysis coverage has been very low for dialectal Arabic, it is significantly important to investigate deeply how the dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can be leveraged to improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
Procedia PDF Downloads 43
4897 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems in the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can also be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also employed in the pHMM Riboswitch Scanner web application, independently from Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction, using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving window approach is heavily geared towards secondary structure consideration, with sequence treated as a constraint.
However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, so scanning is feasible only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding considering RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
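The moving-window strategy described above can be sketched as a simple sequence scanner that yields fixed-size windows for a downstream folding-based scorer; the window and step sizes are illustrative assumptions, not the study's settings:

```python
def scan_windows(sequence, window, step):
    # yield (position, subsequence) pairs; each window would then be scored
    # by a folding-based method (e.g. energy minimization or inverse folding)
    for i in range(0, len(sequence) - window + 1, step):
        yield i, sequence[i:i + window]
```

The cost argument in the text follows directly: the number of windows grows linearly with sequence length, and each window triggers an expensive folding prediction, which is why genome-wide scans become prohibitive.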
Procedia PDF Downloads 234
4896 Market Chain Analysis of Onion: The Case of Northern Ethiopia
Authors: Belayneh Yohannes
Abstract:
In Ethiopia, onion production is increasing from time to time, mainly due to its high profitability per unit area. Onion makes a significant contribution to generating cash income for farmers in the Raya Azebo district. Therefore, enhancing onion producers' access to the market and improving market linkage is an essential issue. Hence, this study aimed to analyze the structure-conduct-performance of the onion market and to identify factors affecting the market supply of onion producers. Data were collected from both primary and secondary sources. Primary data were collected from 150 farm households and 20 traders. Four onion marketing channels were identified in the study area. The highest total gross margin is 27.6 in channel IV. The highest gross marketing margin of producers in the onion market is 88% in channel II. The result of the market concentration analysis indicated that the onion market is characterized by a strong oligopolistic structure, with a buyers' concentration ratio of 88.7 in Maichew town and 82.7 in Mekelle town. Lack of capital, licensing problems, and seasonal supply were identified as the major entry barriers to onion marketing. Market conduct shows that the price of onion is set by traders, while producers are price takers. Multiple linear regression results indicated that family size in adult equivalents, irrigated land size, access to information, frequency of extension contact, and ownership of transport significantly determined the quantity of onion supplied to the market. It is recommended to strengthen and diversify extension services in information, marketing, post-harvest handling, irrigation application, and water harvesting technology.
Keywords: oligopoly, onion, market chain, multiple linear regression
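The buyers' concentration ratio and the producers' gross marketing margin used in the structure-conduct-performance analysis are simple ratios; a minimal sketch follows, with hypothetical example shares rather than the study's survey data:

```python
def concentration_ratio(market_shares, k=4):
    # CR_k: combined market share (in percent) of the k largest buyers;
    # a high CR_4 is commonly read as an oligopolistic market structure
    return sum(sorted(market_shares, reverse=True)[:k])

def producer_gross_margin(producer_price, consumer_price):
    # producer's share of the final consumer price, in percent
    return 100.0 * producer_price / consumer_price
```

For instance, if the four largest buyers hold 40%, 30%, 10%, and 8.7% of purchases, CR_4 is 88.7, matching the order of magnitude reported for Maichew town.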
Procedia PDF Downloads 147
4895 Changing Subjective Well-Being and Social Trust in China: 2010-2020
Authors: Mengdie Ruan
Abstract:
The authors investigate how subjective well-being (SWB) and social trust changed in China over the period 2010-2020 by relying on data from six rounds of the China Family Panel Studies (CFPS), then re-examine Easterlin's hypothesis for China, with a stronger focus on the role of social trust, and estimate income-compensating differentials for social trust. They find that the evolution of well-being is not sensitive to the measure of well-being one uses. Specifically, self-reported life satisfaction scores and hedonic happiness scores experienced a significant increase across all income groups from 2010 to 2020. Social trust seems to have increased in China for all socioeconomic classes in recent years based on the CFPS, and male, urban-resident individuals with higher income have higher social trust at a given point in time and over time. However, when we use an alternative measure of social trust, out-group trust, which is a more valid measure of generalized trust and represents "most people", social trust in China literally declines, and the level is extremely low. In addition, this paper also suggests that in the typical query on social trust, the term "most people" mostly denotes in-groups in China, which contrasts sharply with most Western countries, where it predominantly connotes out-groups. Individual fixed-effects analysis of well-being that controls for time-invariant variables reveals that social trust and relative social status are important correlates of life satisfaction and happiness, whereas absolute income plays a limited role in boosting an individual's well-being. The income-equivalent value of social capital is approximately a tripling of income. It has been found that women, urban and coastal residents, people with higher income, young people, and those with higher education care more about social trust in China, irrespective of the measure of SWB. Policy aiming at preserving and enhancing SWB should focus on social capital besides economic growth.
Keywords: subjective well-being, life satisfaction, happiness, social trust, China
Procedia PDF Downloads 77
4894 Smart Cities, Morphology of the Uncertain: A Study on Development Processes Applied by Amazonian Cities in Ecuador
Authors: Leonardo Coloma
Abstract:
The world changes constantly, every second its properties vary due either natural factors or human intervention. As the most intelligent creatures on the planet, human beings have transformed the environment and paradoxically –have allowed ‘mother nature’ to lose species, accelerate the processes of climate change, the deterioration of the ozone layer, among others. The rapid population growth, the procurement, administration and distribution of resources, waste management, and technological advances are some of the factors that boost urban sprawl whose gray stain extends over the territory, facing challenges such as pollution, overpopulation and scarcity of resources. In Ecuador, these problems are added to the social, cultural, economic and political anomalies that have historically affected it. This fact can represent a greater delay when trying to solve global problems, without having paid attention to local inconveniences –smaller ones, but ones that could be the key to project smart solutions on bigger ones. This research aims to highlight the main characteristics of the development models adopted by two Amazonian cities, and analyze the impact of such urban growth on society; to finally define the parameters that would allow the development of an intelligent city in Ecuador, prepared for the challenges of the XXI Century. Contrasts in the climate, temperature, and landscape of Ecuadorian cities are fused with the cultural diversity of its people, generating a multiplicity of nuances of an indecipherable wealth. However, we strive to apply development models that do not recognize that wealth, not understanding them and ignoring that their proposals will vary according to where they are applied. Urban plans seem to take a bit of each of the new theories and proposals of development, which, in the encounter with the informal growth of cities, with those excluded and ‘isolated’ societies, generate absurd morphologies - where the uncertain becomes tangible. 
The desire to project smart cities is ever growing, but it is important to consider that this concept is not only about the use of information and communication technologies. Its success is achieved when advances in science and technology allow the establishment of a better relationship between people and their context (natural and built). As a research methodology, urban analysis through mappings, diagrams and geographical studies, as well as the identification of sensorial elements when living the city, will make evident the shortcomings of the urban models adopted by certain populations of the Ecuadorian Amazon. Following the vision of previous investigations carried out since 2014 as part of 'Centro de Acciones Urbanas,' the results of this study will encourage the dialogue between the city (as a physical fact) and those who 'make the city' (people as its main actors). This research will allow the development of workshops and meetings with different professionals, organizations and individuals in general.
Keywords: Latin American cities, smart cities, urban development, urban morphology, urban sprawl
Procedia PDF Downloads 157
4893 Partitioning of Non-Metallic Nutrients in Lactating Crossbred Cattle Fed Buffers
Authors: Awadhesh Kishore
Abstract:
The goal of the study was to determine how different non-metallic nutrients are partitioned from feed in various physiological contexts and how buffer addition in ruminant nutrition affects these processes. Six lactating crossbred dairy cows (374±14 kg LW) were selected and divided into three groups on the basis of their phenotypic and productive features. Two treatments, T1 and T2, were randomly assigned to one animal from each group. Animals under T1 and T2 were moved to T2 and T1, respectively, after 30 days. T2 was the only group to receive buffers containing magnesium oxide and sodium bicarbonate at 0.02 and 0.01% of LW (equivalent to 75.3±4.0 and 37.7±2.0 g/d, respectively). T1 was used as the control. Wheat straw and berseem were part of the base diet, whereas wheat grain and mustard cake were part of the concentrate mixture. Following a 21-day feeding period, metabolic and milk production trials were carried out for seven consecutive days. Urine volume was determined from its calorific value using the Kearl equation. Chemical analyses were performed to determine the levels of nitrogen, carbohydrates, calories, and phosphorus in samples of feed, waste, buffer, mineral mixture, water, feces, urine, and milk. The data were analyzed statistically. Notable results included decreased partitioning of nitrogen and carbohydrates from feed to feces, increased partitioning of calories to milk and body storage, and increased partitioning of carbohydrates to body storage. Phosphorus balance was significantly better in T2. The application of buffers in ruminant diets was found to increase the output of calories in milk, as well as the calories and carbohydrates stored in the body, while decreasing the amount of nitrogen in feces. As a result, introducing buffers in the feed of crossbred dairy cattle may be advised.
Keywords: cattle, magnesium oxide, non-metallic nutrients, partitioning, sodium bicarbonate
Procedia PDF Downloads 58
4892 Geometric Design to Improve the Temperature
Authors: H. Ghodbane, A. A. Taleb, O. Kraa
Abstract:
This paper presents the geometric design of an induction heating system. The objective of this design is to improve the temperature distribution in the load. The study of such a device requires the use of physical, mathematical, and numerical models; this modeling is the basis of the understanding, design, and optimization of these systems. The optimization technique is to find values of variables that maximize or minimize the objective function.
Keywords: optimization, modeling, geometric design system, temperature increase
Procedia PDF Downloads 530
4891 Seasonal Assessment of Snow Cover Dynamics Based on Aerospace Multispectral Data on Livingston Island, South Shetland Islands in Antarctica and on Svalbard in Arctic
Authors: Temenuzhka Spasova, Nadya Yanakieva
Abstract:
Snow modulates the hydrological cycle, influences the functioning of ecosystems, and is a significant resource for many populations whose water is harvested from cold regions. Snow observations are important for validating climate models. The accumulation and rapid melt of snow are two of the most dynamic seasonal environmental changes on the Earth's surface. The relevance of this research is related to modern tendencies in the application of remote sensing to problems of different nature in the ecological monitoring of the environment. The subject of the study is the seasonal dynamics of snow cover on Livingston Island, South Shetland Islands in Antarctica and on Svalbard in the Arctic. The objects were analyzed and mapped using European Space Agency (ESA) data acquired by the Sentinel-1 SAR (Synthetic Aperture Radar) and Sentinel-2 MSI sensors, together with GIS. Results have been obtained for changes in snow coverage during the summer-winter transition and its dynamics in the two hemispheres. The data used are of high temporal and spatial resolution, which is an advantage when observing snow cover. The MSI images have different spatial resolutions at the Earth's surface. The changes in the environmental objects are shown with the SAR images and different processing approaches. The results clearly show that snow and snow melting are best registered using SAR data via HH (horizontal) polarization. The reliance on aerospace data and technology enables us to obtain different digital models, structuring and analyzing results while excluding the subjective factor. Because of the large extent of terrestrial snow coverage and the difficulties in obtaining ground measurements over cold regions, remote sensing and GIS represent an important tool for studying snow areas and properties from regional to global scales.
Keywords: climate changes, GIS, remote sensing, SAR images, snow coverage
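The abstract does not name the index used to map snow from the Sentinel-2 MSI data; a common choice for optical snow mapping (an assumption here, not a statement about this particular study) is the Normalized Difference Snow Index (NDSI), computed from the green and shortwave-infrared bands. A minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index:
    NDSI = (green - SWIR) / (green + SWIR).
    For Sentinel-2 MSI, band 3 (green, 10 m) and band 11 (SWIR, 20 m,
    resampled) are typically used; NDSI > 0.4 is a common snow threshold.
    """
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir)

# Illustrative pixels: snow is bright in green and dark in SWIR,
# while bare rock and vegetation are not.
scene_green = np.array([0.80, 0.15, 0.60])
scene_swir = np.array([0.10, 0.12, 0.45])
snow_mask = ndsi(scene_green, scene_swir) > 0.4
```

Seasonal change maps then follow by differencing such masks between acquisition dates.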
Procedia PDF Downloads 219
4890 Centrifuge Modelling Approach on Seismic Loading Analysis of Clay: A Geotechnical Study
Authors: Anthony Quansah, Tresor Ntaryamira, Shula Mushota
Abstract:
Models for geotechnical centrifuge testing are usually made from re-formed soil, allowing comparisons with naturally occurring soil deposits. However, there is a fundamental omission in this process, because natural soil is deposited in layers, creating a unique structure. The nonlinear dynamics of clay deposits are an essential part of how ground motions change under strong seismic loading, particularly when the diverse amplification behaviour of acceleration and displacement is considered. The paper presents a review of centrifuge shaking table tests and numerical simulations investigating offshore clay deposits subjected to seismic loading. These observations are accurately reproduced in DEEPSOIL with appropriate soil models and parameters drawn from notable centrifuge modelling studies. Accurate 1-D site response analyses are then performed in both the time and frequency domains. The results reveal that when deep soft clay is subjected to large earthquakes, significant acceleration attenuation may occur near the top of the deposit due to soil nonlinearity and even local shear failure; nonetheless, large amplification of displacement at low frequencies is expected regardless of the intensity of the base motions, which suggests that for displacement-sensitive offshore foundations and structures, such amplified low-frequency displacement response will play an essential part in seismic design. This research presents the centrifuge as a tool for creating the layered samples important for modelling true soil behaviour (such as permeability), which is not identical in all directions. Currently, there are limited methods for creating layered soil samples.
Keywords: seismic analysis, layered modeling, terotechnology, finite element modeling
Procedia PDF Downloads 156
4889 Static Test Pad for Solid Rocket Motors
Authors: Svanik Garg
Abstract:
Static test pads (STPs) are stationary mechanisms that hold a solid rocket motor and measure the different parameters of its operation, including thrust and temperature, to better calibrate it for launch. This paper outlines a specific STP designed to test high-powered rocket motors with a thrust upwards of 4000 N and limited to 6500 N. The design is a portable mechanism, with cost an integral part of the design process, to make it accessible to small-scale rocket developers with limited resources. Using curved surfaces and an ergonomic design, the STP has a carefully engineered case with a focus on stability and axial calibration of thrust. This paper describes the design, operation and working of the STP and its wide-scale uses, given the growing market of aviation enthusiasts. Simulations on the CAD model in Fusion 360 provided promising results, with a safety factor of 2 established and stresses kept within limits along with the load coefficient. A PCB was also designed as part of the test pad design process to help obtain results, with visual output and various virtual terminals to collect data on different parameters. The circuitry was simulated using Proteus, and a special virtual interface with auditory commands was also created for accessibility and wide-scale implementation. Along with this description of the design, the paper also emphasizes the design principle behind the STP, including its vertical orientation to maximize thrust accuracy and a stable base to prevent micromovements. Given the rise of students and professionals alike building high-powered rockets, the STP described in this paper is an appropriate option, with limited cost, portability, accuracy, and versatility. There are two types of STPs, vertical and horizontal; the one discussed in this paper is vertical, to utilize the axial component of thrust.
Keywords: static test pad, rocket motor, thrust, load, circuit, avionics, drag
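The abstract does not detail how the PCB converts sensor readings to thrust; as an illustration of one step in such a data path, the sketch below converts a raw load-cell ADC reading to newtons. The bridge sensitivity, gain, and ADC parameters are hypothetical placeholders (typical of an HX711-style 24-bit front end), not values from the paper.

```python
def adc_to_thrust_newtons(adc_counts, adc_bits=24, v_ref=5.0,
                          gain=128.0, sensitivity_v_per_v=0.002,
                          rated_load_n=6500.0):
    """Convert a raw load-cell ADC reading to thrust in newtons.

    Assumes a bridge-type load cell read through a high-gain signed ADC.
    All constants here are hypothetical placeholders for illustration.
    """
    full_scale_counts = 2 ** (adc_bits - 1)       # signed full scale
    v_in = adc_counts / full_scale_counts * v_ref / gain   # bridge voltage
    v_full = sensitivity_v_per_v * v_ref          # bridge output at rated load
    return v_in / v_full * rated_load_n

# Counts corresponding to roughly the 6500 N rated load under these constants:
thrust = adc_to_thrust_newtons(2_147_484)
```

Logging such converted values against time yields the thrust curve the STP is built to measure.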
Procedia PDF Downloads 382
4888 Innovations for Freight Transport Systems
Authors: M. Lu
Abstract:
The paper presents part of the results of the EU-funded project SoCool@EU (Sustainable Organisation between Clusters Of Optimized Logistics @ Europe), DG-RTD (Research and Innovation), Regions of Knowledge Programme (FP7-REGIONS-2011-1). It provides an in-depth review of emerging technologies for further improving urban mobility and freight transport systems, such as (information and physical) infrastructure, ICT-based Intelligent Transport Systems (ITS), vehicles, advanced logistics, and services. Furthermore, the paper provides an analysis of the barriers and reviews business models for the market uptake of innovations. From a perspective of science and technology, the challenges of urbanization could be mainly handled through adequate (human-oriented) solutions for urban planning, sustainable energy, the water system, building design and construction, the urban transport system (both physical and information aspects), and advanced logistics and services. Implementation of solutions for these domains should follow a highly integrated and balanced approach; a silo approach should be avoided. To develop a sustainable urban transport system (for people and goods), including inter-hubs and intra-hubs, a holistic view is needed. To achieve a sustainable transport system for people and goods (in terms of cost-effectiveness, efficiency, environment-friendliness and fulfillment of the mobility, transport and logistics needs of the society), a proper network and information infrastructure, advanced transport systems and operations, as well as ad hoc and seamless services are required. In addition, a road map for an enhanced urban transport system until 2050 will be presented.
This road map aims to address the challenges of urban transport and to provide best practices in inter-city and intra-city environments from various perspectives, including policy, traveler behaviour, economy, liability, business models, and technology.
Keywords: synchromodality, multimodal transport, logistics, Intelligent Transport Systems (ITS)
Procedia PDF Downloads 316
4887 Inverse Matrix in the Theory of Dynamical Systems
Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova
Abstract:
In dynamical system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper combines the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in MATLAB.
Keywords: dynamic system, transfer matrix, inverse matrix, modeling
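The paper's computation is done in MATLAB; as a minimal NumPy sketch of the same idea, the transfer matrix of a state-space model (A, B, C, D) is evaluated at a frequency s via the inverse of (sI - A). The example system values are illustrative, not taken from the paper.

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a (possibly complex)
    frequency s. The core step is the matrix inverse of (sI - A),
    mirroring the paper's use of the inverse matrix."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

# Illustrative second-order single-input single-output system:
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# G(s) = 1 / (s^2 + 3s + 2) for this system, so G(1) = 1/6.
G1 = transfer_matrix(A, B, C, D, 1.0)
```

For symbolic transfer matrices (entries as rational functions of s), a computer algebra system would replace the numerical inverse.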
Procedia PDF Downloads 516
4886 Scenario Analysis to Assess the Competitiveness of Hydrogen in Securing the Italian Energy System
Authors: Gianvito Colucci, Valeria Di Cosmo, Matteo Nicoli, Orsola Maria Robasto, Laura Savoldi
Abstract:
The hydrogen value chain deployment is likely to be boosted in the near term by the energy security measures planned by European countries to face the recent energy crisis. In this context, some countries are recognized to have a crucial role in the geopolitics of hydrogen as importers, consumers and exporters. According to the European Hydrogen Backbone Initiative, Italy would be part of one of the 5 corridors that will shape the European hydrogen market. However, the set targets are very ambitious and require large investments to rapidly develop effective hydrogen policies: in this regard, scenario analysis is becoming increasingly important to support energy planning, and energy system optimization models appear to be suitable tools to quantitatively carry out that kind of analysis. The work aims to assess the competitiveness of hydrogen in contributing to Italian energy security in the coming years, under different price and import conditions, using the energy system model TEMOA-Italy. A wide spectrum of hydrogen technologies is included in the analysis, covering the production, storage, delivery, and end-use stages. National production from fossil fuels with and without CCS, as well as electrolysis and import of low-carbon hydrogen from North Africa, are the supply solutions that would compete with others, such as the natural gas, biomethane and electricity value chains, to satisfy sectoral energy needs (transport, industry, buildings, agriculture). Scenario analysis is then used to study the competition under different price and import conditions.
The use of TEMOA-Italy allows the work to capture the interaction between the economy and technological detail, which is much needed in the assessment of energy policies, while the transparency of the analysis and of the results is ensured by the full accessibility of the TEMOA open-source modeling framework.
Keywords: energy security, energy system optimization models, hydrogen, natural gas, open-source modeling, scenario analysis, TEMOA
Procedia PDF Downloads 116
4885 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires and even explosions when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action prior to a potential loss of containment. The value of a detection system increases when applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
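The pipeline of "simulate, then train a supervised model" described above can be sketched in miniature. Here synthetic Gaussian clusters stand in for CFD-generated pressure-drop and flow-deviation features (the feature choice and values are assumptions for illustration, not the paper's model), and a plain NumPy logistic regression stands in for the supervised learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for simulation output: each row is (pressure drop, flow
# deviation) between adjacent PVT stations; label 1 means a simulated leak.
n = 400
normal = rng.normal([0.0, 0.0], 0.3, size=(n, 2))
leak = rng.normal([1.5, 1.2], 0.4, size=(n, 2))
X = np.vstack([normal, leak])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained by plain gradient descent.
Xb = np.hstack([X, np.ones((2 * n, 1))])   # add bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted leak probability
    w -= 0.1 * Xb.T @ (p - y) / len(y)     # gradient step on log-loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

In the real setting, the classifier would be trained on transient simulation runs and evaluated against held-out leak scenarios before field deployment.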
Procedia PDF Downloads 191
4884 Enhancement of Aircraft Longitudinal Stability Using Tubercles
Authors: Muhammad Umer, Aishwariya Giri, Umaiyma Rakha
Abstract:
Mimicking the humpback whale flipper, the application of tubercle technology is seen to be particularly advantageous at high angles of attack. This advantage is of paramount importance for structures producing lift at high angles of attack, which makes the technology ideal for horizontal stabilizers. Selecting the horizontal stabilizer as the subject of study, to identify and exploit the advantage highlighted by researchers on airfoils, this project aims at establishing a foundation for the application of the bio-mimicked technology on an existing aircraft. Using a baseline model and two tubercle-integrated configurations, the project targets the twin aims of highlighting the possibility and merits over the base model and of choosing the configuration that provides the best characteristics at high angles of attack. To facilitate this study, the required models are generated using SolidWorks, followed by trials in a virtual aerodynamic environment using Fluent in ANSYS to resolve the project objectives. Following a structured plan, the aim is to initially identify the advantages mathematically, then select the optimal configuration and simulate the end configuration at angles mimicking the actual operational envelope for the structure. Upon simulating the baseline configuration at various angles of attack, the stall angle was determined to be 22 degrees. Thus, the tubercle configurations will be simulated and compared at four different angles of attack: 0, 10, 20, and 24 degrees. Further, after providing the optimum configuration of horizontal stabilizers, this study aims at integration with the aircraft structure so that the results better imply the end deliverables of real-life application. This draws the project scope closer to longitudinal static stability considerations and improvements in manoeuvrability characteristics.
The objective of the study is to achieve a complete overview ready for real-life application, with marked benefits obtainable from bio-morphing of the tubercle technology.
Keywords: flow simulation, horizontal stabilizer, stability enhancement, tubercle
Procedia PDF Downloads 320
4883 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil
Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes
Abstract:
Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analysis and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and couples the hydrological model, which is based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the south-east of Sao Paulo State, in the Mantiqueira Mountains, and has a historic landslide register. During the fieldwork, soil samples were collected and the VES method applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the declivity of the slopes. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculates the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, proving its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides in urban and natural slopes. The geotechnical and geophysical methods are shown to be very useful for identifying the soil properties and provide the geological characteristics of the area. Therefore, the combined geotechnical and geophysical methods for soil characterization and the modeling of landslide-susceptible areas with TRIGRS are useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model and weather forecasts to prevent disasters on urban slopes.
Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey
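The stability side of TRIGRS is an infinite-slope Factor of Safety that falls as rainfall raises the pressure head at depth. A minimal sketch of that formula, with illustrative soil values rather than the Campos do Jordao calibration:

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, depth, gamma_s, psi,
                     gamma_w=9.81):
    """Infinite-slope Factor of Safety of the TRIGRS form:

        FS = tan(phi)/tan(delta)
             + [c - psi * gamma_w * tan(phi)] / (gamma_s * Z * sin(delta) * cos(delta))

    c         effective cohesion (kPa)
    phi_deg   internal friction angle (degrees)
    slope_deg slope angle delta (degrees)
    depth     failure depth Z (m)
    gamma_s   soil unit weight (kN/m^3)
    psi       pressure head at depth Z (m); rises during rainfall
    """
    phi = math.radians(phi_deg)
    d = math.radians(slope_deg)
    return (math.tan(phi) / math.tan(d)
            + (c - psi * gamma_w * math.tan(phi))
            / (gamma_s * depth * math.sin(d) * math.cos(d)))

# Same slope before and after a rainfall-driven rise in pressure head:
fs_dry = factor_of_safety(c=5.0, phi_deg=30, slope_deg=35, depth=2.0,
                          gamma_s=18.0, psi=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=30, slope_deg=35, depth=2.0,
                          gamma_s=18.0, psi=1.0)
```

The drop of FS below 1 as psi increases is exactly the behaviour the study observed after each heavy rainfall event.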
Procedia PDF Downloads 173
4882 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent
Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar
Abstract:
Cr (VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel production and chromium-based catalysis, are the major sources of Cr (VI) contamination in the aquatic environment. Cr (VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr (VI) contamination in drinking water causes various hazardous health effects, such as cancer, skin and stomach irritation or ulceration, dermatitis, damage to the liver, kidney and circulation, and nerve tissue damage. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr (VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol-functionalized graphene oxide (GO/PVA) was prepared. The obtained GO/PVA was characterized through FTIR, XRD, SEM, and Raman spectroscopy. The prepared GO/PVA nanosorbent was utilized for the removal of Cr (VI) in batch-mode experiments. The process variables, such as contact time, initial Cr (VI) concentration, pH, and temperature, were optimized. A maximum of 99.8% removal of Cr (VI) was achieved at an initial Cr (VI) concentration of 60 mg/L, pH 2 and temperature 35 °C, and equilibrium was achieved within 50 min. The two widely used isotherm models, Langmuir and Freundlich, were analyzed using the linear correlation coefficient (R²), and it was found that the Langmuir model gives the best fit, with a high value of R² for the data of the present adsorption system, indicating monolayer adsorption of Cr (VI) on the GO/PVA. Kinetic studies were also conducted using pseudo-first-order and pseudo-second-order models, and it was observed that the chemisorptive pseudo-second-order model better described the kinetics of the current adsorption system, with a high value of the correlation coefficient.
Thermodynamic studies were also conducted, and the results showed that the adsorption was spontaneous and endothermic in nature.
Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics
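The Langmuir fit mentioned above is typically done on the linearized form Ce/qe = Ce/q_max + 1/(K_L q_max), so a straight-line fit of Ce/qe against Ce recovers both parameters. A minimal sketch on synthetic equilibrium data (the isotherm parameters here are illustrative, not the values measured in the study):

```python
import numpy as np

# Synthetic equilibrium data generated from a known Langmuir isotherm:
q_max, K_L = 50.0, 0.25                       # mg/g, L/mg (illustrative)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium conc., mg/L
qe = q_max * K_L * Ce / (1.0 + K_L * Ce)      # adsorbed amount, mg/g

# Linearized Langmuir: Ce/qe = (1/q_max) * Ce + 1/(K_L * q_max).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope                       # monolayer capacity
K_L_fit = slope / intercept                   # Langmuir constant
```

With real batch data, the R² of this line against the corresponding Freundlich plot (log qe vs log Ce) is what discriminates between the two models.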
Procedia PDF Downloads 389
4881 The Structural Behavior of Fiber Reinforced Lightweight Concrete Beams: An Analytical Approach
Authors: Jubee Varghese, Pouria Hafiz
Abstract:
The increased use of lightweight concrete in the construction industry is mainly due to the reduction in the weight of structural elements, which in turn reduces the cost of production, transportation, and the overall project cost. However, the structural application of lightweight concrete is limited due to its reduced density. Hence, further investigations are in progress to study the effect of fiber inclusion in improving the mechanical properties of lightweight concrete. Incorporating structural steel fibers generally enhances the performance of concrete and increases its durability by minimizing its potential for cracking and providing a crack-arresting mechanism. In this research, Geometrically and Materially Non-linear Analysis (GMNA) was conducted for finite element modelling using the software ABAQUS to investigate the structural behavior of lightweight concrete with and without the addition of steel fibers and shear reinforcement. Twenty-one finite element models of beams were created to study the effect of steel fibers based on three main parameters: fiber volume fraction (Vf = 0, 0.5 and 0.75%), shear span to depth ratio (a/d of 2, 3 and 4) and ratio of area of shear stirrups to spacing (As/s of 0.7, 1 and 1.6). The models were validated against the experiment conducted by H.K. Kang et al. in 2011. It was seen that fiber-reinforced lightweight concrete can replace fiber-reinforced normal-weight concrete in structural elements. The effect of an increase in steel fiber volume fraction is more dominant for beams with a higher shear span to depth ratio than for lower ratios.
The effect of stirrups in the presence of fibers was negligible; however, they provided extra confinement to the cracks by reducing crack propagation, and extra shear resistance compared to beams with no stirrups.
Keywords: ABAQUS, beams, fiber-reinforced concrete, finite element, light weight, shear span-depth ratio, steel fibers, steel-fiber volume fraction
Procedia PDF Downloads 107