Search results for: equation modeling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19342

13162 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures

Authors: Filippo Ranalli, Forest Flager, Martin Fischer

Abstract:

This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture for the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To assess cost accurately, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied to each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.

Keywords: cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures

Procedia PDF Downloads 326
13161 Azadirachta indica Leaves Extract Assisted Green Synthesis of Ag-TiO₂ for Degradation of Dyes in Aqueous Medium

Authors: Muhammad Saeed, Sheeba Khalid

Abstract:

Aqueous pollution due to the textile industry is an important issue. Photocatalysis using metal oxides as catalysts is one of the methods used for eradication of dyes from textile industrial effluents. In this study, the synthesis, characterization, and evaluation of the photocatalytic activity of Ag-TiO₂ are reported. TiO₂ catalysts with 2, 4, 6 and 8% loadings of Ag were prepared by green methods using Azadirachta indica leaf extract as the reducing agent and titanium dioxide and silver nitrate as precursor materials. The 4% Ag-TiO₂ exhibited the best catalytic activity for degradation of dyes. The prepared catalyst was characterized by advanced techniques. Catalytic degradation of methylene blue and rhodamine B was carried out in a Pyrex glass batch reactor. Deposition of Ag greatly enhanced the catalytic efficiency of TiO₂ towards degradation of dyes. Irradiation excites electrons from the valence band of the catalyst to the conduction band, yielding electron-hole pairs. The photoexcited electrons and positive holes undergo secondary reactions and produce OH radicals, and these active radicals take part in the degradation of the dyes. More than 90% of the dyes were degraded in 120 minutes. No loss of catalytic efficiency of the prepared Ag-TiO₂ was found after recycling it twice. Photocatalytic degradation of methylene blue and rhodamine B followed the Eley-Rideal mechanism, in which the dye reacts in the fluid phase with adsorbed oxygen. The activation energies for photodegradation of methylene blue and rhodamine B were found to be 27 kJ/mol and 20 kJ/mol, respectively.
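The two activation energies quoted above are the kind of quantity obtained from an Arrhenius fit of rate constants measured at several temperatures. A minimal sketch of that fit; the temperatures and rate constants below are synthetic, not values from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_K, rate_constants):
    """Arrhenius fit: ln k = ln A - Ea/(R*T), so the least-squares slope
    of ln k against 1/T equals -Ea/R. Returns Ea in kJ/mol."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1000.0

# Synthetic rate constants generated with Ea = 27 kJ/mol (illustrative only):
temps = [298.0, 308.0, 318.0]
ks = [math.exp(-27000.0 / (R * T)) for T in temps]
print(round(activation_energy(temps, ks), 1))  # prints 27.0
```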

Keywords: TiO₂, Ag-TiO₂, methylene blue, rhodamine B, photodegradation

Procedia PDF Downloads 147
13160 Methodological Proposal, Archival Thesaurus in Colombian Sign Language

Authors: Pedro A. Medina-Rios, Marly Yolie Quintana-Daza

Abstract:

Having the opportunity to communicate in social, academic and work contexts is very relevant for any individual, and more so for a deaf person when oral language is not their natural language and written language is their second language. Currently, in Colombia, to the best of our knowledge there is no specialized dictionary for archiving in sign language. Archival work is one of the areas in which the deaf community has a greater chance of employment. Adding new signs to dictionaries for deaf people extends the possibility that they have the appropriate signs to communicate and improves their performance. The aim of this work was to illustrate the importance of designing pedagogical and technological strategies of knowledge management, for the academic inclusion of deaf people, through proposals for a lexicon in Colombian Sign Language (LSC) in the area of archival science. As a method, an analytical study was used to identify relevant words in the technical area of archival science and their counterparts in LSC; 30 deaf people, apprentices and students of the Servicio Nacional de Aprendizaje (SENA) in Documentary or Archival Management programs, were evaluated through direct interviews in LSC. For the analysis, tools were used to evaluate correlation patterns together with visual-gestural and corpus linguistic methods; in addition, linear regression methods were used. Among the results, significant associations were found among the variables socioeconomic stratum, academic level and labor location, along with the need to generate new signs on the subject of archiving to improve communication between the deaf person, the hearing person and the sign language interpreter. It is concluded that the generation of new signs to enrich the LSC dictionary in archival subjects is necessary to improve the labor inclusion of deaf people in Colombia.

Keywords: archival, inclusion, deaf, thesaurus

Procedia PDF Downloads 261
13159 Sudden Death and Chronic Disseminated Intravascular Coagulation (DIC): Two Case Reports

Authors: Saker Lilia, Youcef Mellouki, Lakhdar Sellami, Yacine Zerairia, Abdelhaid Zetili, Fatma Guahria, Fateh Kaious, Nesrine Belkhodja, Abdelhamid Mira

Abstract:

Background: Sudden death is regarded as a suspicious demise necessitating autopsy, as stipulated by legal authorities. Chronic disseminated intravascular coagulation (DIC) is an acquired clinical and biological syndrome with a severe and often fatal prognosis, stemming from systemic, uncontrolled, diffuse activation of coagulation. Irrespective of its origin, DIC is associated with a diverse spectrum of manifestations, ranging from minor biological coagulation alterations to profoundly severe conditions in which hemorrhagic complications may predominate. Simultaneously, microthrombi contribute to the development of multi-organ failure. Objective: This study seeks to evaluate the role of autopsy in determining the causes of death. Materials and Methods: We present two instances of sudden death involving females who underwent autopsy at the Forensic Medicine Department of the University Hospital of Annaba, Algeria. These autopsies were performed at the request of the prosecutor, aiming to determine the causes of death and illuminate the exact circumstances surrounding them. Methods utilized: analysis of the initial information report; findings from postmortem examinations; histological assessments and toxicological analyses. Results: The presence of DIC was noted, affecting nearly all vessels, with distinct etiologies. Conclusion: For the establishment of a meaningful diagnosis: • thorough understanding of the subject matter is imperative; • precise alignment with medicolegal data is essential.

Keywords: chronic disseminated intravascular coagulation, sudden death, autopsy, causes of death

Procedia PDF Downloads 65
13158 Magnetohemodynamic of Blood Flow Having Impact of Radiative Flux Due to Infrared Magnetic Hyperthermia: Spectral Relaxation Approach

Authors: Ebenezer O. Ige, Funmilayo H. Oyelami, Joshua Olutayo-Irheren, Joseph T. Okunlola

Abstract:

Hyperthermia therapy is an adjuvant procedure during which perfused body tissues are subjected to an elevated range of temperatures in a bid to achieve improved drug potency and efficacy in cancer treatment. While one class of hyperthermia techniques relies on the thermal radiation derived from a single-source electro-radiation measure, there are deliberations on conjugating dual radiation field sources in an attempt to improve the delivery of the therapy procedure. This paper numerically explores the thermal effectiveness of combined infrared hyperthermia with nanoparticle recirculation in the vicinity of an imposed magnetic field on the subcutaneous strata of a model lesion as an ablation scheme. An elaborate spectral relaxation method (SRM) was formulated to handle the coupled momentum and thermal equilibrium equations in the blood-perfused domain of a spongy fibrous tissue. Thermal diffusion regimes in the presence of an externally imposed magnetic field were described by leveraging the well-known Rosseland diffusion approximation to delineate the impact of radiative flux within the computational domain. The contribution of tissue sponginess was examined using the mechanics of pore-scale porosity over a selection of clinically informed scenarios. Our observations showed that, for a substantial depth of spongy lesion, the magnetic field architecture constitutes the controlling regime of hemodynamics at the blood-tissue interface while facilitating thermal transport across the depth of the model lesion. This parameter-indicator could be utilized to control the dispensing of hyperthermia treatment in intravenously perfused tissue.
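For reference, the Rosseland diffusion approximation invoked above models the radiative flux as (standard boundary-layer form from the radiative heat transfer literature, not quoted from this paper):

```latex
q_r = -\frac{4\sigma^{*}}{3k^{*}}\,\frac{\partial T^{4}}{\partial y},
\qquad
T^{4} \approx 4T_{\infty}^{3}\,T - 3T_{\infty}^{4},
```

where σ* is the Stefan-Boltzmann constant and k* the mean absorption coefficient; the linearization of T⁴ (a Taylor expansion about the ambient temperature T∞, valid for small temperature differences) is the step that lets the radiative term enter the energy equation as an enhanced conduction term.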

Keywords: spectral relaxation scheme, thermal equilibrium, Rosseland diffusion approximation, hyperthermia therapy

Procedia PDF Downloads 98
13157 Modeling of Ductile Fracture Using Stress-Modified Critical Strain Criterion for Typical Pressure Vessel Steel

Authors: Carlos Cuenca, Diego Sarzosa

Abstract:

Ductile fracture occurs by the mechanism of void nucleation, void growth and coalescence. Potential initiation sites are second-phase particles or non-metallic inclusions. Modelling of ductile damage at the microscopic level is a very difficult and complex task for engineers. Therefore, conservative predictions of ductile failure using simple models are necessary during the design and optimization of critical structures like pressure vessels and pipelines. Nowadays, it is well known that the initiation phase is strongly influenced by the stress triaxiality and plastic deformation at the microscopic level. Thus, a simple model used to study ductile failure under multiaxial stress conditions is the Stress Modified Critical Strain (SMCS) approach. Ductile rupture has been studied for a structural steel under different stress triaxiality conditions using the SMCS method. Experimental tests on notched round bars are carried out to characterize the relation between stress triaxiality and equivalent plastic strain. After calibration of the plasticity and damage properties, predictions are made for low-constraint bending specimens with and without side grooves. The evolution of the stress/strain fields is compared between the different geometries. Advantages and disadvantages of the SMCS methodology are discussed.
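For reference, the SMCS criterion takes the following commonly used form in the ductile-fracture literature (a standard expression, not quoted from this paper):

```latex
\varepsilon_{p}^{\,cr} = \alpha \,\exp\!\left(-\frac{3}{2}\,\frac{\sigma_m}{\sigma_e}\right)
```

where σ_m is the mean (hydrostatic) stress, σ_e the von Mises equivalent stress, their ratio the stress triaxiality, and α a material toughness parameter calibrated from notched round-bar tests such as those described above; ductile failure is predicted where the equivalent plastic strain exceeds ε_p^cr over a characteristic length.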

Keywords: damage, SMCS, SEB, steel, failure

Procedia PDF Downloads 287
13156 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on a Hough forest is proposed to localize cardiac feature points, and these points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to hearts with unusual shapes due to cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 327
13155 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, together with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and amplifying the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
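The abstract does not give the heuristics themselves; below is a minimal illustrative sketch of one common greedy approach to flattening peak demand, assuming preemptible unit-slot jobs with a common deadline. The job data are hypothetical and this is not the authors' algorithm:

```python
import heapq

def schedule_min_peak(jobs, horizon):
    """Greedy peak-flattening sketch: each job (power, duration) must run
    for `duration` unit slots anywhere in [0, horizon), with preemption
    allowed and duration <= horizon. Each required slot is placed on the
    currently least-loaded time slot, and high-power jobs are placed first
    (LPT-style ordering). Returns the resulting peak load (a heuristic
    value, not necessarily optimal)."""
    load = [0.0] * horizon
    heap = [(0.0, t) for t in range(horizon)]
    heapq.heapify(heap)
    for power, duration in sorted(jobs, reverse=True):
        # Pop `duration` distinct slots before pushing any back, so a job
        # never occupies the same slot twice.
        picked = [heapq.heappop(heap) for _ in range(duration)]
        for _, t in picked:
            load[t] += power
            heapq.heappush(heap, (load[t], t))
    return max(load)

# Three hypothetical appliances (kW, slots needed) over a 4-slot horizon:
peak = schedule_min_peak([(3.0, 2), (2.0, 2), (1.0, 4)], 4)  # peak = 4.0
```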

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 76
13154 The Reliability of Wireless Sensor Network

Authors: Bohuslava Juhasova, Igor Halenar, Martin Juhas

Abstract:

Wireless communication is one of the most widely used methods of data transfer today. The benefits of this communication method are its partial independence from infrastructure and the possibility of mobility. In some special applications it is the only way to connect. This paper presents some problems in the implementation of a sensor network connection for measuring environmental parameters in the area of manufacturing plants.

Keywords: network, communication, reliability, sensors

Procedia PDF Downloads 642
13153 Zinc Nanoparticles Modified Electrode as an Insulin Sensor

Authors: Radka Gorejova, Ivana Sisolakova, Jana Shepa, Frederika Chovancova, Renata Orinakova

Abstract:

Diabetes mellitus (DM) is a serious metabolic disease characterized by chronic hyperglycemia. Often, the symptoms are not sufficiently observable at early stages, and so hyperglycemia causes pathological and functional changes before DM is diagnosed. Therefore, the development of an electrochemical sensor that is fast, accurate, and instrumentally undemanding is currently needed. Screen-printed carbon electrodes (SPCEs) can be considered the most suitable matrix material for insulin sensors because of the small size of the working electrode, which reduces the analyte volume to only 50 µl per measurement. The surface of the bare SPCE was modified with a combination of chitosan, multi-walled carbon nanotubes (MWCNTs), and zinc nanoparticles (ZnNPs) to obtain better electrocatalytic activity towards insulin oxidation. ZnNPs were electrochemically deposited on the chitosan-MWCNTs/SPCE surface using the pulse deposition method. Thereafter, insulin was determined on the prepared electrode using chronoamperometry and electrochemical impedance spectroscopy (EIS). The chronoamperometric measurement was performed by adding constant 2 μl amounts of 2 μM insulin in 0.1 M NaOH and PBS, and the current response of the system was monitored after each gradual increase in concentration. Subsequently, the limit of detection (LOD) of the prepared electrode was determined via the Randles-Ševčík equation to be 0.47 µM. The prepared electrodes were also studied as impedimetric sensors for insulin determination; therefore, various insulin concentrations were determined via EIS. Based on the performed measurements, ZnNPs/chitosan-MWCNTs/SPCE can be considered a potential candidate for a novel electrochemical sensor for insulin determination.
Acknowledgments: This work has been supported by the projects Visegradfund project number 22020140, VEGA 1/0095/21 of the Slovak Scientific Grant Agency, and APVV-PP-COVID-20-0036 of the Slovak Research and Development Agency.
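For context, the Randles-Ševčík equation mentioned above relates the voltammetric peak current to analyte concentration. The sketch below uses its standard room-temperature form for a reversible process; all parameter values in the example are hypothetical and not the electrode characteristics from this abstract:

```python
def randles_sevcik_current(n_e, area_cm2, conc_mol_cm3, d_cm2_s, scan_v_s):
    """Randles-Sevcik peak current (A) for a reversible couple at 25 C:
    i_p = 2.69e5 * n^(3/2) * A * C * sqrt(D * v), with A in cm^2,
    C in mol/cm^3, D in cm^2/s and scan rate v in V/s."""
    return 2.69e5 * n_e ** 1.5 * area_cm2 * conc_mol_cm3 * (d_cm2_s * scan_v_s) ** 0.5

def concentration_at_current(i_p, n_e, area_cm2, d_cm2_s, scan_v_s):
    """Invert the equation: the concentration producing peak current i_p."""
    return i_p / (2.69e5 * n_e ** 1.5 * area_cm2 * (d_cm2_s * scan_v_s) ** 0.5)

# Hypothetical parameters (not from the paper): n = 2 electrons,
# A = 0.071 cm^2, D = 4.5e-6 cm^2/s, scan rate 50 mV/s, C = 1e-7 mol/cm^3.
ip = randles_sevcik_current(2, 0.071, 1e-7, 4.5e-6, 0.05)
```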

Keywords: zinc nanoparticles, insulin, chronoamperometry, electrochemical impedance spectroscopy

Procedia PDF Downloads 110
13152 Climate Change and Sustainable Development among Agricultural Communities in Tanzania: An Analysis of Southern Highland Rural Communities

Authors: Paschal Arsein Mugabe

Abstract:

This paper examines sustainable development planning in the context of environmental concerns in rural areas of Tanzania. It challenges mainstream approaches to development, focusing instead upon transformative action for environmental justice. The goal is to help shape future sustainable development agendas in local government, international agencies and civil society organisations. Research methods: The approach of the study is geographical but also involves various transdisciplinary elements, particularly from development studies, sociology and anthropology, management, geography, agriculture and environmental science. The research methods included thematic and questionnaire interviews and participatory tools such as focus group discussions, participatory rural appraisal and expert interviews for primary data. Secondary data were gathered through the analysis of land use/cover data and official documents on climate, agriculture, marketing and health. Several earlier studies conducted in the area also provided an important reference base. Findings: The findings show that agricultural sustainability in Tanzania appears likely to deteriorate as a consequence of climate change. Noteworthy differences in impacts across households are also present, both by district and by income category. Moreover, food security cannot be explained by climate as the only influencing factor; a combination of the economic, political and socio-cultural contexts of the community is crucial. In conclusion, people understand the relationship between climate change and their livelihoods.

Keywords: agriculture, climate change, environment, sustainable development

Procedia PDF Downloads 315
13151 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. 
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 83
13150 Distribution of Maximum Loss of Fractional Brownian Motion with Drift

Authors: Ceren Vardar Acar, Mine Caglar

Abstract:

In finance, the price of a volatile asset can be modeled using fractional Brownian motion (fBm) with Hurst parameter H > 1/2. The Black-Scholes model for the value of returns of an asset using fBm is given as Y_t = Y_0 exp((r + μ)t + σB_t^H), 0 ≤ t ≤ T, where Y_0 is the initial value, r is the constant interest rate, μ is the constant drift and σ is the constant diffusion coefficient of the fBm, which is denoted by B_t^H for t ≥ 0. The Black-Scholes model can also be constructed with Markov processes such as Brownian motion. The advantage of modeling with fBm over Markov processes is its capability of exposing the dependence between returns: real-life data for a volatile asset display the long-range dependence property, so fBm is a more realistic model than a Markov process. Investors are interested in any kind of information on the risk in order to manage it or hedge it, and the maximum possible loss is one way to measure the highest possible risk; therefore, it is an important variable for investors. In our study, we give theoretical bounds on the distribution of the maximum possible loss of fBm. We provide both asymptotic and strong estimates for the tail probability of the maximum loss of standard fBm and of fBm with drift and diffusion coefficients. From an investment point of view, these results explain how large values of the possible loss behave and provide bounds for them.
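A small sketch of the objects in this abstract, assuming the standard fBm covariance Cov(B_s^H, B_t^H) = ½(s^{2H} + t^{2H} − |t−s|^{2H}): a Cholesky-based path sampler and the maximum-loss (drawdown) functional. This is illustrative only; the paper's results are analytic bounds, not simulations, and the drift and diffusion values below are invented:

```python
import math
import random

def fbm_path(n, hurst, total_time=1.0, seed=0):
    """Sample fractional Brownian motion B^H at n equally spaced times via
    a Cholesky factorisation of Cov(B_s, B_t) = 0.5*(s^2H + t^2H - |t-s|^2H).
    O(n^3), so illustrative only for small n."""
    random.seed(seed)
    h2 = 2.0 * hurst
    t = [total_time * (i + 1) / n for i in range(n)]
    cov = [[0.5 * (t[i] ** h2 + t[j] ** h2 - abs(t[i] - t[j]) ** h2)
            for j in range(n)] for i in range(n)]
    low = [[0.0] * n for _ in range(n)]  # Cholesky factor, cov = L L^T
    for i in range(n):
        for j in range(i + 1):
            s = sum(low[i][k] * low[j][k] for k in range(j))
            if i == j:
                low[i][i] = math.sqrt(cov[i][i] - s)
            else:
                low[i][j] = (cov[i][j] - s) / low[j][j]
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(low[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def maximum_loss(path):
    """Maximum drawdown sup over s < t of (X_s - X_t), the risk measure above."""
    peak, loss = float("-inf"), 0.0
    for x in path:
        peak = max(peak, x)
        loss = max(loss, peak - x)
    return loss

# Log-price path with drift, X_t = (r + mu)t + sigma * B_t^H (parameters invented):
hurst, drift, sigma = 0.7, 0.05, 0.2
b = fbm_path(64, hurst)
x = [drift * (i + 1) / 64 + sigma * bi for i, bi in enumerate(b)]
ml = maximum_loss(x)
```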

Keywords: maximum drawdown, maximum loss, fractional Brownian motion, large deviation, Gaussian process

Procedia PDF Downloads 475
13149 Agent-Based Modeling of Pedestrian Corridor Congestion on the Characteristics of Physical Space Form

Authors: Sun Shi, Sun Cheng

Abstract:

The pedestrian corridor is the most crowded area in public space, and crowding severity has been a focus of work on evacuation strategies for the entrances of large public spaces. The aim of this paper is to analyze walking efficiency in different pedestrian corridor spaces as the spatial parameters vary, and to model the congestion caused by the variation in walking efficiency. This study established a space model of the walking corridor by setting the width, slope, turning form and turning angle of the pedestrian corridor. Pedestrian preference of walking mode varies with differences in crowding severity, walking speed, field of vision, sight direction and the expected destination, which are influenced by the characteristics of the physical space form. The Swarm software is applied to build the agent model. From the output of the agent model, the relationship of pedestrian corridor width, ground slope, turning form and turning angle to walking efficiency and crowding severity is obtained. The results of the simulation can be applied to pedestrian corridor design in order to reduce crowding severity and the potential safety risks caused by crowds.
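The paper's agent model is built in Swarm; as a deliberately simplified illustration of the core feedback (walking speed falling with local density, and density falling with corridor width), here is a one-dimensional sketch in which every parameter value, including the density-speed rule, is invented:

```python
import random

def simulate_corridor(n_agents, width_m, length_m=20.0, dt=0.5, v_free=1.4):
    """Toy 1-D corridor: each agent's speed drops as the local density
    (neighbours within +/- 1 m, over a 2 m x width area) rises.
    Returns the mean traversal time in seconds; all numbers are invented
    stand-ins for the richer preference rules in the paper."""
    random.seed(1)
    pos = [random.uniform(0.0, 5.0) for _ in range(n_agents)]
    done = [None] * n_agents
    t = 0.0
    while any(d is None for d in done) and t < 1000.0:
        for i in range(n_agents):
            if done[i] is not None:
                continue
            local = sum(1 for j, p in enumerate(pos)
                        if done[j] is None and abs(p - pos[i]) < 1.0)
            density = local / (2.0 * width_m)  # agents per square metre
            v = max(0.1, v_free * (1.0 - min(density / 5.0, 0.95)))
            pos[i] += v * dt
            if pos[i] >= length_m:
                done[i] = t
        t += dt
    return sum(done) / n_agents

# A narrower corridor congests more and slows everyone down:
narrow = simulate_corridor(20, width_m=1.0)
wide = simulate_corridor(20, width_m=3.0)
```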

Keywords: crowding severity, multi-agent, pedestrian preference, urban space design

Procedia PDF Downloads 200
13148 Point-Mutation in a Rationally Engineered Esterase Inverts its Enantioselectivity

Authors: Yasser Gaber, Mohamed Ismail, Serena Bisagni, Mohamad Takwa, Rajni Hatti-Kaul

Abstract:

Enzymes are safe and selective catalysts that skillfully catalyze chemical reactions; however, the native form is not usually suitable for industrial applications, so enzymes are engineered by several techniques to meet the required catalytic task. Clopidogrel was among the five best-selling pharmaceuticals in 2010 under the brand name Plavix. The route commonly used for production of the drug on an industrial scale is synthesis of the racemic mixture followed by diastereomeric resolution to obtain the pure S isomer, a process that consumes large amounts of solvents and chemicals. We have evaluated a cleaner biocatalytic approach based on asymmetric hydrolysis of racemic clopidogrel. Initial screening of a selected number of hydrolases showed that only one enzyme, EST, exhibited activity and selectivity towards the desired stereoisomer. As crude EST is a mixture of several isoenzymes, a homology model of EST-1 was used in molecular dynamics simulations to study the interaction of the enzyme with the R and S isomers of clopidogrel. Analysis of the geometric hindrances of the tetrahedral intermediates revealed a potential site for mutagenesis to improve the activity and the selectivity. A single point mutation produced a dramatic increase in activity and inverted the enantioselectivity (a 400-fold change in E value).

Keywords: biocatalysis, biotechnology, enzyme, protein engineering, molecular modeling

Procedia PDF Downloads 430
13147 Characterization of 2,4,6-Trinitrotoluene (TNT)-Metabolizing Bacillus cereus sp. TUHP2 Isolated from TNT-Polluted Soils in the Vellore District, Tamil Nadu, India

Authors: S. Hannah Elizabeth, A. Panneerselvam

Abstract:

Objective: The main objective was to evaluate the degradative properties of Bacillus cereus sp. TUHP2 isolated from TNT-polluted soils in the Vellore District, Tamil Nadu, India. Methods: Among the three bacterial genera isolated from different soil samples, one potent TNT-degrading strain, Bacillus cereus sp. TUHP2, was identified. The morphological, physiological and biochemical properties of the strain were confirmed by conventional methods, and genotypic characterization was carried out using 16S rDNA partial gene amplification and sequencing. The breakdown products of TNT in the extract were determined by gas chromatography-mass spectrometry (GC-MS). Supernatant samples taken from the broth at 24 h intervals were analyzed by HPLC, and the effects of various nutritional and environmental factors were analyzed and optimized for the isolate. Results: Out of three isolates, one strain, TUHP2, was found to degrade TNT efficiently and was assigned to the genus Bacillus. 16S rDNA gene sequence analysis showed the highest homology (98%) with Bacillus cereus, and the strain was designated Bacillus cereus sp. TUHP2. Based on the energies of the predicted models, the secondary structure predicted by minimum free energy (MFE) was the most stable. TNT transformation produced a colour change in the medium during cultivation. TNT derivatives such as 2HADNT and 4HADNT were detected by HPLC, and 2ADNT and 4ADNT by GC-MS analysis. Conclusion: This study presents clear evidence of the biodegradation of TNT by strain Bacillus cereus sp. TUHP2.

Keywords: bioremediation, biodegradation, biotransformation, sequencing

Procedia PDF Downloads 449
13146 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly; thus, two methods of compensating for missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also found that the technique used to compensate for missing values in the dataset and the implementation of a time lag can have a significant influence on model performance and require further investigation.
The best-performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of the input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputation methods for establishing accurate predictive models.
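The time-lag setup described above can be sketched as follows; the series, the single "income" feature, and the one-predictor ordinary least squares fit are synthetic stand-ins for the paper's 12-feature models:

```python
def lagged_pairs(feature, target, lag):
    """Pair the feature value at month t with the target at month t + lag."""
    return [(feature[t], target[t + lag]) for t in range(len(target) - lag)]

def fit_simple_lr(xs, ys):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# Synthetic monthly series in which the index tracks "income" 12 months later:
income = [50.0 + 0.5 * t for t in range(60)]
index = [100.0] * 12 + [2.0 * income[t - 12] + 1.0 for t in range(12, 60)]
xs, ys = zip(*lagged_pairs(income, index, 12))
a, b = fit_simple_lr(xs, ys)  # recovers a ~ 1, b ~ 2 on this synthetic data
```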

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 91
13145 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We seek association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties but are very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
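The core idea of the approach, confidence P(Y = 2|X) of a two-variant genotype pattern compared against the baseline case proportion, can be sketched with a brute-force enumeration. This is only a stand-in for the fpgrowth-based framework the abstract describes; the variant names, genotype codes, and toy data are illustrative.

```python
# Hedged sketch of the abstract's core idea: enumerate two-variant genotype
# patterns and report the confidence P(case | pattern) alongside the baseline
# case proportion P(case). Brute force, not fpgrowth; data are illustrative.
from itertools import combinations

def digenic_patterns(genotypes, labels, min_support=2):
    """genotypes: per-person dicts {variant: genotype code};
    labels: 1 = control, 2 = case (as in the abstract)."""
    n_cases = sum(1 for y in labels if y == 2)
    baseline = n_cases / len(labels)
    variants = sorted({v for g in genotypes for v in g})
    results = []
    for v1, v2 in combinations(variants, 2):
        counts = {}
        for g, y in zip(genotypes, labels):
            if v1 in g and v2 in g:
                key = (g[v1], g[v2])
                total, cases = counts.get(key, (0, 0))
                counts[key] = (total + 1, cases + (y == 2))
        for key, (total, cases) in counts.items():
            if total >= min_support:
                results.append(((v1, key[0]), (v2, key[1]), cases / total, baseline))
    return results

# Toy data: the pattern (ROM1 = 'Aa', RDS = 'Bb') appears only in cases.
people = [
    {"ROM1": "Aa", "RDS": "Bb"}, {"ROM1": "Aa", "RDS": "Bb"},
    {"ROM1": "AA", "RDS": "Bb"}, {"ROM1": "AA", "RDS": "BB"},
]
labels = [2, 2, 1, 1]
for pattern in digenic_patterns(people, labels):
    print(pattern)
```

A real analysis would additionally assess each reported pattern by permutation of the case/control labels, as the abstract describes.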

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 106
13144 Integrated Mass Rapid Transit System for Smart City Project in Western India

Authors: Debasis Sarkar, Jatan Talati

Abstract:

This paper is an attempt to develop an Integrated Mass Rapid Transit System (MRTS) for a smart city project in Western India. Integrated transportation is one of the enablers of smart transportation, providing a seamless intercity as well as regional-level transportation experience. The success of a smart city project at the city level for transportation lies in properly integrating the different mass rapid transit modes by way of integrating information, physical infrastructure, route networks, fares, etc. The methodology adopted for this study was primary data research through a questionnaire survey. The respondents gave their perceptions of the ways and means to improve public transport services in urban cities. They were also required to identify the factors and attributes which might motivate more people to shift towards the public mode, and were questioned about the factors which they feel might restrain the integration of various modes of MRTS. Furthermore, this study focuses on developing a utility equation for respondents with the help of multiple linear regression analysis and their probability of shifting to public transport for certain factors listed in the questionnaire. It has been observed that, for shifting to public transport, the most important factors that need to be considered are travel time saving and comfort rating. Also, an Integrated MRTS can be obtained by combining metro rail with BRTS, metro rail with monorail, monorail with BRTS, and metro rail with Indian Railways. Providing a common smart card to transport users for accessing all the different available modes would be a pragmatic solution towards integration of the available modes of MRTS.
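The utility equation described above, a linear function of factors such as travel time saving and comfort rating fitted by multiple linear regression, might look like the following minimal sketch. The coefficients and scales here are invented placeholders, not the study's estimated values.

```python
# Illustrative sketch of a linear utility equation for shifting to public
# transport, of the form U = b0 + b1*(time saving) + b2*(comfort rating).
# All coefficients below are made up, not the study's regression estimates.

def shift_utility(time_saving_min, comfort_rating, coeffs=(0.2, 0.05, 0.4)):
    """U = b0 + b1 * time saving (minutes) + b2 * comfort rating (1-5)."""
    b0, b1, b2 = coeffs
    return b0 + b1 * time_saving_min + b2 * comfort_rating

# A respondent saving 10 minutes with comfort rated 4 out of 5:
u = shift_utility(10, 4)
print(round(u, 2))  # 0.2 + 0.5 + 1.6 = 2.3
```

In a full analysis the coefficients would come from regressing stated shift intention on all questionnaire factors, and higher utility would map to a higher probability of shifting.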

Keywords: mass rapid transit systems, smart city, metro rail, bus rapid transit system, multiple linear regression, smart card, automated fare collection system

Procedia PDF Downloads 252
13143 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities. They present problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem, and it is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it. Based on field measurement and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. The elastoplastic time history analysis of the structure was carried out using SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Strengthening methods for the bottom frame structure, using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, were proposed. The time history analysis of the two kinds of structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, was conducted with SAP2000 software.
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by the two methods compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results showed that under the rare earthquake, the inter-story drift angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolation scheme better protects the bottom frame. Both strengthening methods could significantly improve the seismic performance of the bottom frame structure.
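The drift criterion quoted above can be sketched as a simple check: the inter-story drift divided by the story height must not exceed the 1/100 limit for the bottom frame under the rare earthquake. The story height and drift values below are illustrative, not results from the paper's SAP2000 analyses.

```python
# Minimal sketch of the inter-story drift angle check described above:
# drift / story height must not exceed the 1/100 code limit under the
# rare earthquake. Numbers are illustrative.

def drift_angle(inter_story_drift_mm, story_height_mm):
    return inter_story_drift_mm / story_height_mm

def passes_rare_earthquake_check(drift_mm, height_mm, limit=1 / 100):
    return drift_angle(drift_mm, height_mm) <= limit

# A 3.6 m bottom-frame story drifting 30 mm under the rare earthquake:
print(passes_rare_earthquake_check(30.0, 3600.0))  # 30/3600 = 1/120 <= 1/100
```

The same check with a 40 mm drift (angle 1/90) would fail the limit, which is the kind of outcome the strengthening schemes are designed to prevent.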

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 112
13142 Advanced Approach to Analyze the Thin Strip Profile in Cold Rolling of Pair Roll Crossing and Shifting Mill Using an Arbitrary Lagrangian-Eulerian Technique

Authors: Abdulrahman Aljabri, Essam R. I. Mahmoud, Hamad Almohamedi, Zhengyi Jiang

Abstract:

Cold-rolled thin strip has received intensive attention through technological and theoretical progress in the rolling process, and researchers have focused on controlling the strip during rolling as an essential factor in producing thinner strip with good shape and profile. An advanced approach is proposed to analyze the thin strip profile in cold rolling on a pair roll crossing and shifting mill using Finite Element Analysis (FEA) with an Arbitrary Lagrangian-Eulerian (ALE) technique. The ALE formulation enables greater flexibility in adjusting the finite element mesh, which provides a significant tool for simulating the thin strip under realistic rolling process constraints and yields accurate model results. The FEA provides a theoretical basis for a 3D model for controlling strip shape and profile in thin strip rolling, delivers optimal rolling process parameters, and suggests corrective changes during cold rolling of thin strip.
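The profile quantities such an analysis targets can be stated concretely. Strip crown is conventionally the center-minus-edge thickness difference, often normalized by the center thickness; the sketch below uses this conventional definition with illustrative numbers, not FEA output from the paper.

```python
# Small sketch of conventional strip profile measures. Strip crown is the
# center-minus-edge thickness difference; values below are illustrative.

def strip_crown(center_thickness_um, edge_thickness_um):
    """Absolute crown C = h_center - h_edge (micrometers)."""
    return center_thickness_um - edge_thickness_um

def relative_crown(center_thickness_um, edge_thickness_um):
    """Crown ratio C / h_center, a common dimensionless shape measure."""
    return strip_crown(center_thickness_um, edge_thickness_um) / center_thickness_um

print(strip_crown(500.0, 490.0))               # 10.0 micrometers
print(round(relative_crown(500.0, 490.0), 3))  # 0.02
```

Roll crossing and shifting act by changing the effective roll gap profile, and hence these crown measures, which is what the FEA model is used to predict and optimize.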

Keywords: pair roll crossing, work roll shifting, strip shape and profile, finite element modeling

Procedia PDF Downloads 84
13141 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting the activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) method was used to conduct statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into training and test sets and were evaluated by various CoMSIA parameters to predict the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model, which was then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using donor, partition coefficient, and steric parameters. The CoMSIA models demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR models is consistent with the observation that more than half of the binding site area is occupied by steric regions.
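The cross-validated coefficient q2 reported above is defined from leave-one-out prediction errors: q2 = 1 - PRESS / SS_total. The sketch below computes it with a trivial one-variable least-squares model on synthetic data; the study itself used CoMSIA fields with PLS, so this is only an illustration of the statistic, not the modeling method.

```python
# Sketch of the cross-validated coefficient q^2: leave-one-out predictions
# with a simple one-variable least-squares model on synthetic data.

def loo_q2(xs, ys):
    """q^2 = 1 - PRESS / (total sum of squares about the mean)."""
    n = len(xs)
    press = 0.0
    for i in range(n):
        tx = [x for j, x in enumerate(xs) if j != i]
        ty = [y for j, y in enumerate(ys) if j != i]
        m = len(tx)
        sx, sy = sum(tx), sum(ty)
        sxx = sum(x * x for x in tx)
        sxy = sum(x * y for x, y in zip(tx, ty))
        b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
        a = (sy - b * sx) / m
        press += (ys[i] - (a + b * xs[i])) ** 2  # error on the held-out point
    mean_y = sum(ys) / n
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - press / ss_tot

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]  # nearly linear, so q^2 should be close to 1
print(round(loo_q2(xs, ys), 2))
```

Because each prediction is made on a point excluded from the fit, q2 is a more conservative measure of predictivity than the ordinary r2.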

Keywords: 3D QSAR, CoMSIA, triazoles, novel heterocycles

Procedia PDF Downloads 431
13140 Prevalence of Knee Pain and Risk Factors and Its Impact on Functional Impairment among Saudi Adolescents

Authors: Ali H. Alyami, Hussam Darraj, Faisal Hakami, Mohammed Awaf, Sulaiman Hamdi, Nawaf Bakri, Abdulaziz Saber, Khalid Hakami, Almuhanad Alyami, Mohammed Khashab

Abstract:

Introduction: Adolescents frequently self-report pain, according to epidemiological research, and the knee is one of the sites where pain is most common. Musculoskeletal disorders are among the main contributors to years lived with disability and impose substantial personal, societal, and economic burdens globally. Adolescents may have knee pain due to an abrupt, traumatic injury or an insidious, slowly building onset that neither the adolescent nor the parent is aware of. Objectives: The present study’s authors aimed to estimate the prevalence of knee pain in Saudi adolescents. Methods: This cross-sectional survey, carried out from June to November 2022, included 676 adolescents aged 10 to 18. Data are presented as frequencies and percentages for categorical variables. Analysis of variance (ANOVA) was used to compare means between groups, while the chi-square test was used for the comparison of categorical variables. Statistical significance was set at P < 0.05. Results: Of the 676 adolescents invited to take part in the study, 57.5% were girls and 42.5% were boys, and 68.8% were aged between 15 and 18. The prevalence of knee pain was considerably higher among females (26%) than among males (19.2%). Moreover, age was a significant predictor of knee pain, as was BMI. Conclusion: Our study noted a high rate of knee pain among adolescents, so awareness of risk factors needs to be raised. Adolescent knee pain can be prevented with conservative methods and some minor lifestyle/activity modifications.
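The categorical comparison reported above (knee pain by sex via chi-square) can be sketched with the standard 2x2 chi-square statistic. The cell counts below are only loosely reconstructed from the reported percentages for illustration; they are not the study's exact table.

```python
# Rough sketch of a 2x2 chi-square test of knee pain by sex. Counts are
# illustrative reconstructions from the reported percentages, not study data.

def chi_square_2x2(a, b, c, d):
    """Cells: [[a, b], [c, d]] = [[pain, no pain] females, males].
    Chi-square statistic without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts: 101/389 females vs 55/287 males reporting knee pain.
chi2 = chi_square_2x2(101, 288, 55, 232)
print(round(chi2, 2), chi2 > 3.841)  # 3.841 = critical value at P < 0.05, 1 df
```

A statistic above the 3.841 critical value corresponds to P < 0.05 with one degree of freedom, i.e., a significant sex difference under these assumed counts.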

Keywords: knee pain, prevalence of knee pain, exercise training, physical activity

Procedia PDF Downloads 90
13139 Improvement of Visual Acuity in Patient Undergoing Occlusion Therapy

Authors: Rajib Husain, Mezbah Uddin, Mohammad Shamsal Islam, Rabeya Siddiquee

Abstract:

Purpose: To determine the improvement of visual acuity in patients undergoing occlusion therapy. Methods: This was a prospective hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of Chittagong Eye Infirmary & Training Complex. Thirty-two subjects with refractive amblyopia were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 5 and 8 years of age, without previous amblyopia treatment, and whose parents were willing to participate in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best refractive correction for a month. When the VA in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was done daily for 6-8 hours together with vision therapy and was carried out for three months. Results: Of the 32 children, 31 had good compliance with amblyopia treatment, whereas one child had poor compliance. About 6% of children had amblyopia from myopia, 7% from hyperopia, 32% from myopic astigmatism, 42% from hyperopic astigmatism, and 13% from mixed astigmatism. The mean ± standard deviation of presenting VA was 0.452 ± 0.275 logMAR, and after the intervention of amblyopia therapy with vision therapy, it was 0.155 ± 0.157 logMAR. Of the total respondents, 21.85% had BCVA in the range 0-0.2 logMAR, 37.5% in the range 0.22-0.5 logMAR, 35.95% in the range 0.52-0.8 logMAR, and 4.7% in the range 0.82-1 logMAR; after the intervention of occlusion therapy with vision therapy, 76.6% had VA in the range 0-0.2 logMAR, 21.85% in the range 0.22-0.5 logMAR, and 1.5% in the range 0.52-0.8 logMAR. Conclusion: Amblyopia is an important concern in the pediatric age group because it can lead to visual impairment. Thus, this study concludes that occlusion therapy with vision therapy is probably one of the best treatment methods for amblyopic patients (ages 5-8 years), and that compliance and age were the most critical factors predicting a successful outcome.

Keywords: amblyopia, occlusion therapy, vision therapy, eccentric fixation, visuoscopy

Procedia PDF Downloads 493
13138 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained by the nodes in the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
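The notion of pruning a code graph to vulnerability-relevant node information can be illustrated in miniature with Python's standard ast module standing in for the paper's Code Property Graphs. The "relevant" node-type set below is a made-up example, not SCEVD's actual feature selection.

```python
# Toy illustration of semantically pruned code-graph features: count only
# node types assumed relevant to a vulnerability class, using Python's ast
# module as a stand-in for Code Property Graphs. RELEVANT is illustrative.
import ast

RELEVANT = {"Call", "Subscript", "BinOp"}  # assumed "relevant" node types

def semantic_features(source):
    """Map each relevant AST node type to its occurrence count."""
    tree = ast.parse(source)
    counts = {name: 0 for name in sorted(RELEVANT)}
    for node in ast.walk(tree):
        name = type(node).__name__
        if name in counts:
            counts[name] += 1
    return counts

snippet = "buf[i] = read(n) + read(m)"
print(semantic_features(snippet))  # 2 calls, 1 subscript, 1 binary op
```

The point is that the feature vector's dimensionality depends only on the selected node types, not the size of the full graph, which is the cost reduction the abstract motivates.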

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 141
13137 Effect of Key Parameters on Performances of an Adsorption Solar Cooling Machine

Authors: Allouache Nadia

Abstract:

Solid adsorption cooling machines have been extensively studied recently. They constitute very attractive solutions to recover significant amounts of medium-temperature industrial waste heat and to use renewable energy sources such as solar energy. The technology of these machines can be developed through experimental studies and through mathematical modeling. The latter saves time and money because it is simpler to use when simulating the variation of different parameters. Adsorption cooling machines consist essentially of an evaporator, a condenser, and a reactor (the object of this work) containing a porous medium, which is in our case activated carbon reacting by adsorption with ammonia. The principle can be described as follows: when the adsorbent (at temperature T) is in exclusive contact with vapour of the adsorbate (at pressure P), an amount of adsorbate is trapped inside the micro-pores in an almost liquid state. This adsorbed mass m is a function of T and P according to a divariant equilibrium m = f(T, P). Moreover, at constant pressure, m decreases as T increases, and at constant adsorbed mass, P increases with T. This makes it possible to imagine an ideal refrigerating cycle consisting of a period of heating/desorption/condensation followed by a period of cooling/adsorption/evaporation. The effects of key parameters on the machine's performance are analysed and discussed.
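The divariant equilibrium m = f(T, P) can be sketched with a Dubinin-Astakhov-type isotherm, which is often applied to activated carbon/ammonia pairs. All coefficients below are illustrative placeholders (not fitted values), and the saturation pressure is taken as a given input rather than computed from T.

```python
# Hedged sketch of the divariant equilibrium m = f(T, P) using a
# Dubinin-Astakhov-type isotherm. Coefficients are illustrative placeholders.
import math

def adsorbed_mass(T, P, Psat, w0=0.3, D=5e-6, n=2.0):
    """Adsorbed mass per kg of adsorbent:
    m = w0 * exp(-D * (T * ln(Psat / P))**n)."""
    return w0 * math.exp(-D * (T * math.log(Psat / P)) ** n)

# At constant pressure, m decreases as temperature rises, as stated above:
m_cool = adsorbed_mass(T=300.0, P=3.0, Psat=10.0)
m_hot = adsorbed_mass(T=360.0, P=3.0, Psat=10.0)
print(m_cool > m_hot)  # heating the reactor desorbs refrigerant
```

This monotonic behavior is exactly what drives the ideal cycle described above: heating the reactor releases adsorbate toward the condenser, and cooling it re-adsorbs vapour from the evaporator.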

Keywords: activated carbon-ammoniac pair, effect of key parameters, numerical modeling, solar cooling machine

Procedia PDF Downloads 242
13136 Assessing Flood Risk and Mapping Inundation Zones in the Kelantan River Basin: A Hydrodynamic Modeling Approach

Authors: Fatemehsadat Mortazavizadeh, Amin Dehghani, Majid Mirzaei, Nurulhuda Binti Mohammad Ramli, Adnan Dehghani

Abstract:

Flood is Malaysia's most common and serious natural disaster. The Kelantan River Basin is a tropical basin that experiences a rainy season during the North-East Monsoon from November to March, and it is one of the hardest-hit areas in Peninsular Malaysia during heavy monsoon rainfall. Considering the consequences of flood events, it is essential to develop flood inundation maps as part of the mitigation approach. In this study, the flood inundation zone in the Kelantan River Basin is delineated with a hydrodynamic model using HEC-RAS, QGIS, and ArcMap. The streamflow data were generated with a weather generator based on the observation data. The data were then statistically analyzed with the Extreme Value Type I (EV1) method for 2-, 5-, 25-, 50-, and 100-year return periods. The minimum depth, maximum depth, mean depth, and standard deviation of all the scenarios, including the OBS (observed) scenario, were examined. In general, flood depths increase with the return period for all scenarios, although a few scenarios deviate from this trend. The OBS data fall in the middle of the range spanned by Scenarios 1 to 40.
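The EV1 (Gumbel) frequency analysis mentioned above estimates design values for each return period from an annual-maximum series. A common method-of-moments sketch follows; the streamflow series is synthetic, not Kelantan data, and the study's actual fitting procedure may differ.

```python
# Sketch of EV1 (Gumbel) frequency analysis fitted by the method of moments:
# x_T = u + alpha * (-ln(-ln(1 - 1/T))). The series below is synthetic.
import math

def gumbel_quantile(annual_maxima, return_period_years):
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    alpha = math.sqrt(6 * var) / math.pi   # scale parameter
    u = mean - 0.5772 * alpha              # location (Euler-Mascheroni const.)
    T = return_period_years
    return u - alpha * math.log(-math.log(1 - 1 / T))

flows = [1200.0, 950.0, 1500.0, 1100.0, 1700.0, 1300.0, 900.0, 1450.0]
for T in (2, 5, 25, 50, 100):
    print(T, round(gumbel_quantile(flows, T)))
# Estimated quantiles grow with return period, matching the trend reported.
```

These return-period quantiles are what feed the hydrodynamic model to produce one inundation map per return period.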

Keywords: flood inundation, Kelantan River Basin, hydrodynamic model, extreme value analysis

Procedia PDF Downloads 55
13135 Evaluation of Virtual Reality for the Rehabilitation of Athlete Lower Limb Musculoskeletal Injury: A Method for Obtaining Practitioner’s Viewpoints through Observation and Interview

Authors: Hannah K. M. Tang, Muhammad Ateeq, Mark J. Lake, Badr Abdullah, Frederic A. Bezombes

Abstract:

Based on a theoretical assessment of the current literature, virtual reality (VR) could help to treat sporting injuries in a number of ways. However, it is important to obtain rehabilitation specialists' perspectives in order to design, develop, and validate suitable content for a VR application focused on treatment. Subsequently, a one-day observation and interview study focused on the use of VR for the treatment of lower limb musculoskeletal conditions in athletes was conducted with rehabilitation specialists at St George's Park, the England National Football Centre. The current paper establishes the methods suitable for obtaining practitioners' viewpoints through observation and interview in this context. Particular detail is provided regarding the method of qualitatively processing interview results using the qualitative data analysis software tool NVivo, in order to produce a narrative of overarching themes. The observations and overarching themes identified could be used as a framework and success criteria for a VR application developed in future research. In conclusion, this work explains the methods deemed suitable for obtaining practitioners' viewpoints through observation and interview. This was required in order to highlight characteristics and features of a VR application designed to treat lower limb musculoskeletal injury in athletes, and could be built upon to direct future work.

Keywords: athletes, lower-limb musculoskeletal injury, rehabilitation, return-to-sport, virtual reality

Procedia PDF Downloads 238
13134 Mayan Culture and Attitudes towards Sustainability

Authors: Sarah Ryu

Abstract:

Agricultural methods and ecological approaches employed by the pre-colonial Maya may provide valuable insights into forest management and viable alternatives for resource sustainability in the face of major deforestation across Central and South America. Using a combination of historical data and observation data collected from the modern indigenous inhabitants near Mixco in Guatemala, this study creates a holistic picture of how the Maya maintained their ecosystems. Surveys and observations were conducted in the field over a period of twelve weeks across two years. Geographic and archaeological data for this area were provided by Guatemalan organizations such as the Universidad de San Carlos de Guatemala. Observations of the current indigenous populations around Mixco showed that they adhere to traditional Mayan methods of agriculture, such as terrace construction and arboriculture. Rather than planting one cash crop as was done by the Spanish, indigenous peoples practice agroforestry, cultivating forests that provide trees for construction material, wild plant foods, habitat for game, and medicinal herbs. This emphasis on biodiversity prevented deforestation and created a sustainable balance between human consumption and forest regrowth. Historical data provided by MayaSim showed that the Maya successfully maintained their ecosystems from about 800 BCE to 700 CE. When the Maya practiced natural resource conservation and cultivated a harmonious relationship with the forest around them, they were able to thrive and prosper alongside nature. Having lasted over a thousand years, the Mayan empire provides a valuable lesson in sustainability and human attitudes towards the environment.

Keywords: biodiversity, forestry, Mayan, sustainability

Procedia PDF Downloads 169
13133 Gendering the Political Crisis in Hong Kong: A Cultural Analysis of Spectatorship on Marvel Superhero Movies in Hong Kong

Authors: Chi S. Lee

Abstract:

Marvel superhero movies have obtained unprecedented popularity around the globe. A dominant narrative in current scholarship on superhero studies holds that the political trauma of America, such as the September 11 attacks, and the masculinity represented in the superhero genre are symbolically connected through remasculinization: a standardized plot in which, before becoming a superhero, a man has to overcome a trauma in his life. Through this standardized plot, American audiences find pleasure in equating the plot of remasculinization with the situation of America, rewriting their traumatic memory and working through the economic, social, political, and psychological instability of precarity in their own context. Shifting the context to Hong Kong, where Marvel superhero movies have reached a dominant status in the local film market, this analysis finds its limitation in explaining the connection between text and context. This article aims to retain this connection through an investigation of Hong Kong audiences' spectatorship. It is argued that the masculinity represented in Marvel superhero movies no longer fits the stereotypical image of the superhero but presents itself in crisis. This crisis is resolved by the technological excess of the superpower, namely, technological remasculinization. Technological remasculinization offers a sense of futurity through which it is felt that remasculinization can be achieved in the foreseeable future instead of remaining imaginary and fictional. In this way, the political crisis of Hong Kong is gendered as a masculinity in crisis that is worth remasculinizing in the future. This gendering process is a historical product, as the symbolic equation between politics and masculinity has long been encoded in the colonial history of Hong Kong.
In short, Marvel superheroes' masculinity offers a sense of masculine hope for Hong Kong audiences to overcome the political crisis they confront in reality, through a postponed identification with the superhero's masculinity. This argument is developed through a discussion of Hong Kong audiences' spectatorship of Marvel superhero movies, informed by spectatorship theory.

Keywords: political crisis in Hong Kong, Marvel superhero movies, spectatorship, technological remasculinization

Procedia PDF Downloads 266