Search results for: reference values
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9039

1779 Effects of Extrusion Conditions on the Cooking Properties of Extruded Rice Vermicelli Using Twin-Screw Extrusion

Authors: Hasika Mith, Hassany Ly, Hengsim Phoung, Rathana Sovann, Pichmony Ek, Sokuntheary Theng

Abstract:

Rice is one of the most important crops used in the production of ready-to-cook (RTC) products such as rice vermicelli, noodles, rice paper, Banh Kanh, wine, snacks, and desserts. Extrusion, meanwhile, is a highly versatile food processing method for developing products with improved nutritional, functional, and sensory properties, since it combines mixing, cooking, and product shaping in a single controlled process. The objectives of this study were therefore to produce rice vermicelli using a twin-screw extruder and to investigate the cooking properties of the extruded product. Response Surface Methodology (RSM) with a Box-Behnken design was applied to optimize the extrusion conditions for the most desirable product characteristics. Feed moisture content (30–35%), barrel temperature (90–110°C), and screw speed (200–400 rpm) all had a significant impact on the water absorption index (WAI), cooking yield (CY), and cooking loss (CL) of the extruded rice vermicelli. Results showed that the WAI of the final extruded rice vermicelli ranged between 216.97% and 571.90%, the CY from 147.94% to 203.19%, and the CL from 8.55% to 25.54%. The findings indicated that at low screw speed or low temperature, more polymer chains remain unbroken and more hydrophilic groups are available to bind water, raising WAI values. The cooking yield of the extruded rice vermicelli varied only modestly across processing conditions, indicating that screw speed had little effect on CY, whereas increasing the barrel temperature tended to increase cooking yield and reduce cooking loss. In conclusion, twin-screw extrusion processing had a significant effect on the cooking quality of the rice vermicelli extrudate.
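The three-factor Box-Behnken design used above can be reproduced programmatically. The sketch below is a minimal illustration, not the authors' code: the function name and factor keys are hypothetical, and only the factor ranges quoted in the abstract are taken as given. It generates the 15 runs (12 edge midpoints plus 3 centre replicates) in engineering units.

```python
from itertools import combinations

def box_behnken(levels, n_center=3):
    """Minimal 3-factor Box-Behnken generator (illustrative sketch).

    levels: dict mapping factor name -> (low, high); the design centre
    is the midpoint of each range. Returns one dict per experimental run.
    """
    names = list(levels)
    mid = {f: (lo + hi) / 2 for f, (lo, hi) in levels.items()}
    half = {f: (hi - lo) / 2 for f, (lo, hi) in levels.items()}
    runs = []
    # Edge midpoints: each pair of factors at +/-1, the third held at 0.
    for a, b in combinations(names, 2):
        for ca in (-1, 1):
            for cb in (-1, 1):
                run = dict(mid)
                run[a] = mid[a] + ca * half[a]
                run[b] = mid[b] + cb * half[b]
                runs.append(run)
    # Replicated centre points estimate pure experimental error.
    runs += [dict(mid) for _ in range(n_center)]
    return runs

design = box_behnken({"moisture_pct": (30, 35),
                      "barrel_temp_C": (90, 110),
                      "screw_rpm": (200, 400)})
```

Each run dict would then correspond to one extruder trial, with WAI, CY, and CL fitted as quadratic response surfaces over the coded factors.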

Keywords: cooking loss, cooking quality, cooking yield, extruded rice vermicelli, twin-screw extruder, water absorption index

Procedia PDF Downloads 83
1778 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin

Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele

Abstract:

The changing climate has degraded freshwater availability in Australia, affecting vegetation growth to a great extent. This study assessed vegetation responses to groundwater using Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). The SWAT hydrological model was set up in a southeast Australian river catchment for groundwater analysis, and was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three vegetation types (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. The WEKA model yielded strong positive relationships (r = 0.76, 0.73, and 0.81) between NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these relationships weakened by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
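The seasonal correlations reported above reduce to ordinary Pearson coefficients and a relative change between seasons. A minimal sketch, with hypothetical helper names and no ties to the actual WEKA workflow:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def pct_reduction(r_dry, r_wet):
    """Relative weakening of a correlation from dry to wet season, in %."""
    return 100 * (r_dry - r_wet) / r_dry
```

For example, a drop from r = 0.76 in the dry season to r = 0.48 in the wet season corresponds to a 36.8% reduction.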

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 101
1777 Evaluation of Oligocene-Miocene Clay from the Northern Part of Palmyra Region (Syria) for Industrial Ceramic Applications

Authors: Abdul Salam Turkmani

Abstract:

Clay of the northern Palmyra region is one of the most important raw materials used in the Syrian ceramics industry. This study evaluates these clays through various laboratory analyses, including chemical analysis (XRF), mineralogical X-ray diffraction analysis (XRD), differential thermal analysis (DTA), and semi-industrial tests, carried out on samples collected at two representative locations: the upper Oligocene in AlMkamen valley (MK) and the lower Miocene in AlZukara valley (ZR), in the northern part of Palmyra, Syria. Chemical results classify the (MK) and (ZR) clays as semi-plastic, slightly carbonatic red clays of probable illite-chlorite type with a very fine particle size distribution. SiO₂ content ranges between 46.28% and 57.66%, Al₂O₃ 13.81–25.2%, Fe₂O₃ 3.47–11.58%, CaO 1.15–7.19%, and Na₂O+K₂O 3.34–3.71%. Based on their chemical composition and their iron and carbonate content, these deposits can be considered red firing clays. Their mineralogical composition is mainly represented by illite, kaolinite, and quartz, with accessory minerals such as calcite, feldspar, phillipsite, and goethite. The DTA results confirm the presence of gypsum and quartz phases in the (MK) clay. Ceramic testing shows good green and dry bending strength values, which varied between 9 and 14 kg/cm², at 1160°C to 1180°C. Water absorption decreases from 14.6% at 1120°C to 2.2% at 1180°C and 1.6% at 1200°C, while breaking load after firing increases from 400 to 590 kg/cm². At 1200°C the (MK) clay reaches complete vitrification. After firing, the color of the clay changes from orange-hazel to red-brown at 1180°C. These technological results confirm the suitability of the studied clays for producing floor and wall ceramic tiles; using either of the two clays in the ceramic body, or both together, gave satisfactory industrial results.

Keywords: ceramic, clay, industry, Palmyra

Procedia PDF Downloads 196
1776 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis are prevalent techniques in industry for condition monitoring of rotary machinery. However, both techniques require a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques to fault detection in reciprocating machinery, because the complex impact forces confound the extraction of fault signals for vibration-based analysis and wear prediction. This work extends a previous study in which an engine simulation model was developed in MATLAB/Simulink, with the engine parameters used in the simulation obtained experimentally from a Toyota 3SFE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load, and clearance on the vibration response. Three loads (50, 80, and 110 N·m), three speeds (1500, 2000, and 3000 rpm), and three clearances (normal, 2 times, and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load but rose significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended to investigate bearing wear behavior under different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine were established first, after which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation.
The essential outputs of interest in this study, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear; a limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions parallels the rise in amplitude of the squared envelope of the corresponding vibration signals. Thus, the developed model both demonstrates its capability to explain wear behavior and helps establish a correlation between wear-based and vibration-based analysis, providing a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
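Archard's model combined with the 1 µm film criterion lends itself to a compact sketch. The fragment below is illustrative only, not the authors' code: the dimensional wear coefficient, hardness, and step values are placeholders; only the 1 µm threshold is taken from the abstract.

```python
def archard_step(load_n, slide_m, hardness_pa, k_wear, film_um, film_min_um=1.0):
    """Incremental Archard wear volume (m^3) for one crank-angle step.

    Wear is accrued only when the oil film is thinner than the limiting
    thickness, i.e. when asperity contact between journal and sleeve occurs.
    """
    if film_um >= film_min_um:
        return 0.0  # full-film lubrication: no adhesive wear this step
    # Archard's law: V = K * F * s / H
    return k_wear * load_n * slide_m / hardness_pa
```

Summing `archard_step` over a full engine cycle, with the load and film thickness taken from the Reynolds solution at each crank angle, would give a location-resolved per-cycle wear volume.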

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 310
1775 The Immunology Evolutionary Relationship between Signal Transducer and Activator of Transcription Genes from Three Different Shrimp Species in Response to White Spot Syndrome Virus Infection

Authors: T. C. C. Soo, S. Bhassu

Abstract:

Unlike vertebrates, which possess both innate and adaptive immunity, crustaceans, and in particular shrimps, possess only innate immunity. This further emphasizes the importance of innate immunity in shrimps for pathogenic resistance. Under pathogenic immune challenge, different shrimp species exhibit varying degrees of immune resistance towards the same pathogen; furthermore, even within the same species, different batches of challenged shrimps can differ in the strength of their immune defence. Several important pathways are activated within shrimps during pathogenic infection. One of them is the JAK-STAT pathway, activated during bacterial, viral, and fungal infections, in which the STAT (Signal Transducer and Activator of Transcription) gene is the core element. Following the central dogma, genomic information is transmitted from DNA to RNA to protein; this study focuses on uncovering the important evolutionary patterns present within the DNA (non-coding regions) and RNA (coding regions). The three shrimp species involved, Macrobrachium rosenbergii, Penaeus monodon, and Litopenaeus vannamei, all possess commercial significance. The shrimp were challenged with white spot syndrome virus (WSSV), a well-known and highly lethal penaeid shrimp virus. Tissue samples were collected at time intervals of 0 h, 3 h, 6 h, 12 h, 24 h, 36 h, and 48 h, and DNA and RNA were extracted from the hepatopancreas samples using conventional kits. PCR with primers designed against conserved STAT regions was used to identify the STAT coding sequences from RNA-derived cDNA samples, followed by characterization using various bioinformatics approaches, including the Ramachandran plot, ProtParam, and SWISS-MODEL.
The varying levels of STAT gene activation in the three shrimp species during WSSV infection were confirmed by qRT-PCR; for each sample, three biological replicates with three technical replicates each were used. The DNA samples, on the other hand, were important for uncovering structural variations within the genomic region of the STAT gene, which greatly assist in understanding functional variation in the STAT protein. The partially-overlapping primers technique was used for sequencing the genomic region. Evolutionary inferences and event predictions were then conducted through Bayesian inference using all the acquired coding and non-coding sequences, supplemented by conventional phylogenetic trees constructed with the maximum likelihood method. The results showed that adaptive evolution caused STAT gene sequence mutations between the shrimp species, leading to an evolutionary divergence event, and the divergent sites were correlated with the differing expression of the STAT gene. Ultimately, this study assists in characterizing innate immune variability among shrimp species and in selecting disease-resistant shrimps for breeding. A deeper understanding of STAT gene evolution, from the perspective of both purifying and adaptive selection, not only provides better immunological insight among shrimp species but also serves as a good reference for immunological studies in humans and other model organisms.
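qRT-PCR expression comparisons of the kind described above are conventionally reduced to fold changes. Assuming the standard 2^(-ΔΔCt) method (an assumption: the abstract does not name the quantification scheme), a minimal sketch:

```python
def fold_change_ddct(ct_target_infected, ct_ref_infected,
                     ct_target_control, ct_ref_control):
    """Relative expression via the standard 2^(-delta-delta-Ct) method.

    Ct values are first normalised against a reference gene within each
    condition, then the infected condition is compared with the control
    (e.g. the 0 h time point).
    """
    ddct = ((ct_target_infected - ct_ref_infected)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** -ddct
```

With three biological and three technical replicates per sample, mean Ct values per condition would feed into this formula.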

Keywords: gene evolution, JAK-STAT pathway, immunology, STAT gene

Procedia PDF Downloads 150
1774 Class Control Management Issues and Solutions in Interactive Learning Theories’ Efficiency and the Application Case Study: 3rd Year Primary School

Authors: Mohammed Belalia Douma

Abstract:

Interactive learning is considered the most effective learning strategy. It is an educational philosophy based on the learner's contribution and involvement, mainly in the classroom: how the learner interacts with the small society of the classroom, and the degree of collaboration in challenges, discovery, games, and participation. Interactive learning aims to activate the learner's role in the learning process, focusing on research and experimentation and on the learner's self-reliance in obtaining information, acquiring skills, and forming values and attitudes. It is not based on memorization alone, but rather on developing thinking and the ability to solve problems, through teamwork and collaborative learning. With the exchange of roles between teacher and student, the student becomes more active and performs more operations than under traditional methods; as a result, several issues arise concerning classroom management, noise, and the stability of learning. This research paper observes the application of interactive learning in real classrooms, tests several assumptions, and analyzes the issues arising from these strategies, mainly noise and class control. The research sample comprised about 150 third-year primary school students in the Chlef district, Algeria: beginners aged 8 to 10 years old. We administered a confidential fifteen-question questionnaire and also analyzed the attitudes of the learners over three months. As teachers, we witnessed a variety of strategies for applying interactive learning, but with various issues: time management, noise, uncontrolled classes, and overcrowded classes.
Finally, the study concludes that although active education is an effective method of teaching, it has drawbacks, and not all theoretical strategies can be applied; we close with solutions for this case study.

Keywords: interactive learning, student, learners, strategies

Procedia PDF Downloads 59
1773 Solar Photovoltaic Driven Air-Conditioning for Commercial Buildings: A Case of Botswana

Authors: Taboka Motlhabane, Pradeep Sahoo

Abstract:

The global demand for cooling has grown exponentially over the past century to meet economic development and social needs, accounting for approximately 10% of global electricity consumption. As global temperatures continue to rise, the demand for cooling and for heating, ventilation and air-conditioning (HVAC) equipment is set to rise with it. The increased use of HVAC equipment has significantly contributed to the growth of the greenhouse gas (GHG) emissions that drive the climate crisis, one of the biggest challenges faced by the current generation. The need to address emissions caused directly by HVAC equipment and by the electricity generated to meet cooling or heating demand is ever more pressing. Currently, developed countries account for the largest cooling and heating demand; however, developing countries are anticipated to experience huge population growth within 10 years, resulting in a shift in energy demand. Developing countries, which are projected to account for nearly 60% of the world's GDP by 2030, are rapidly building infrastructure and economies to meet their growing needs. Cooling is a very energy-intensive process that can account for 20% to 75% of a building's energy, depending on the building's use. Solar photovoltaic (PV) driven air-conditioning offers a cost-effective alternative for both residential and non-residential buildings to offset grid electricity, particularly in countries with high irradiation, such as Botswana. This research paper explores the potential of a grid-connected solar photovoltaic vapor-compression air-conditioning system for the Peter Smith Herbarium at the Okavango Research Institute (ORI), University of Botswana campus in Maun, Botswana.
The herbarium plays a critical role in the collection and preservation of botanical data dating back over 100 years, with pristine collections from the Okavango Delta, a UNESCO World Heritage Site, and serves as a reference and research site. Due to its specific needs, the herbarium operates throughout the day and year to maintain a constant temperature of 16°C. The herbarium model studied simulates a variable-air-volume HVAC system with a system rating of 30 kW. Simulation results show that the HVAC system accounts for 68.9% of the building's total electricity at 296 509.60 kWh annually. To offset the grid electricity, a PV system with a nominal power rating of 175.1 kWp, requiring 416 modules and covering an area of 928 m², is used to meet the HVAC system's annual needs. An economic assessment using PVsyst found that an installation priced at average solar PV prices in Botswana totals 787 090.00 BWP, with annual operating costs of 30 500 BWP/year. With self-financing, the project is estimated to recoup its initial investment within 6.7 years. Over an estimated project lifetime of 20 years, the Net Present Value is projected at 1 565 687.00 BWP with an ROI of 198.9%, and 74 070.67 tons of CO₂ saved by the end of the project lifetime. The study investigates the performance of the HVAC system in meeting indoor air comfort requirements and the annual PV system performance; the building model was simulated using DesignBuilder software.
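The payback and NPV figures above follow from standard discounted cash flow arithmetic. A minimal sketch, with hypothetical annual savings and discount rate (the abstract states neither):

```python
def simple_payback_years(capex, annual_saving, annual_opex):
    """Years to recoup capital from net annual savings (undiscounted)."""
    return capex / (annual_saving - annual_opex)

def npv(capex, annual_saving, annual_opex, years, rate):
    """Net present value of an installation over its lifetime."""
    net = annual_saving - annual_opex
    return -capex + sum(net / (1.0 + rate) ** t for t in range(1, years + 1))
```

Working backwards from the quoted 787 090 BWP capex and 6.7-year payback, the implied net annual savings would be roughly 117 000 BWP/year.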

Keywords: vapor compression refrigeration, solar cooling, renewable energy, herbarium

Procedia PDF Downloads 126
1772 Study of the Physicochemical Characteristics of Liquid Effluents from the El Jadida Wastewater Treatment Plant

Authors: Aicha Assal, El Mostapha Lotfi

Abstract:

Rapid industrialization and population growth are currently the main causes of the energy and environmental problems associated with wastewater treatment. Wastewater treatment plants (WWTPs) aim to treat wastewater before discharging it into the environment, but they are not yet capable of treating non-biodegradable contaminants such as heavy metals, and toxic heavy metals can disrupt the biological processes in WWTPs. Consequently, it is crucial to combine additional physico-chemical treatments with WWTPs to ensure effective wastewater treatment. In this study, the authors examined the pretreatment process for urban wastewater at the El Jadida WWTP in order to assess its treatment efficiency. Various physicochemical and spatiotemporal parameters of the WWTP's raw and treated water were studied, including temperature, pH, conductivity, biochemical oxygen demand (BOD5), chemical oxygen demand (COD), suspended solids (SS), total nitrogen, and total phosphorus. The results showed an improvement in treatment yields, with measured removal efficiencies of 77% for BOD5, 63% for COD, and 66% for TSS. However, spectroscopic analyses revealed persistent coloration in wastewater samples leaving the WWTP, as well as the presence of heavy metals such as zinc, cadmium, chromium, and cobalt, detected by inductively coupled plasma optical emission spectroscopy (ICP-OES). To remedy these coloration problems and reduce the presence of heavy metals, a new low-cost, environmentally friendly eggshell-based treatment was proposed. This method eliminated most heavy metals, such as cobalt, beryllium, silver, and copper, and significantly reduced the amounts of cadmium, lead, chromium, manganese, aluminium, and zinc. In addition, the bioadsorbent was able to decolorize the wastewater by up to 84%. This adsorption process is therefore of great interest for ensuring the quality of wastewater and promoting its reuse in irrigation.
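The removal efficiencies quoted above are simple influent/effluent ratios. A one-line sketch (the function name is hypothetical):

```python
def removal_efficiency_pct(influent_mg_l, effluent_mg_l):
    """Percent of a pollutant load removed between plant inlet and outlet."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l
```

For example, a BOD5 falling from 100 mg/L at the inlet to 23 mg/L at the outlet corresponds to the 77% removal reported.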

Keywords: WWTP, wastewater, heavy metals, decoloration, depollution, COD, BOD5

Procedia PDF Downloads 64
1771 Decarboxylation of Waste Coconut Oil and Comparison of Acid Values

Authors: Pabasara H. Gamage, Sisira K. Weliwegamage, Sameera R. Gunatilake, Hondamuni I. C De Silva, Parakrama Karunaratne

Abstract:

Green diesel is an emerging category of biofuel with more practical advantages than biodiesel. Production of green diesel involves producing hydrocarbons from various fatty acid sources; though chemically similar to fossil-fuel hydrocarbons, green diesel is more environmentally friendly. Decarboxylation of fatty acid sources is one green diesel production method, and it is less expensive and more energy-efficient than hydrodeoxygenation. Free fatty acids (FFA) undergo decarboxylation more readily than triglycerides, so waste coconut oil, which is a rich source of FFA, can be decarboxylated more easily than oils with lower FFA contents. These free fatty acids can be converted to hydrocarbons by decarboxylation. Experiments were conducted to decarboxylate waste coconut oil in a high-pressure Hastelloy reactor (Toption Group Ltd.) in the presence of soda lime and of mixtures of soda lime and alumina. The acid value (AV) correlates with the amount of FFA in a sample of oil, so a decreasing AV shows that FFAs have been converted to hydrocarbons. First, waste coconut oil was reacted with soda lime alone, at 150 °C, 200 °C, and 250 °C and 1.2 MPa pressure for 2 hours, and the AVs of the products at the different temperatures were compared: the AV of the product decreased with increasing temperature. Thereafter, different mixtures of soda lime and alumina (100% soda lime, 1:1 soda lime and alumina, and 100% alumina) were employed at 150 °C, 200 °C, and 250 °C and 1.2 MPa. The lowest AV of 2.99±0.03 was obtained when the 1:1 soda lime and alumina mixture was employed at 250 °C. With respect to the AV, it can be concluded that the amount of FFA decreased as the decarboxylation temperature was increased, and the 1:1 soda lime:alumina mixture showed the lowest AV among the compositions studied.
These findings lead to a method to successfully synthesize hydrocarbons by decarboxylating waste coconut oil in the presence of soda lime and alumina (1:1) at elevated temperatures such as 250 °C.
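The acid value itself comes from a standard KOH titration, AV = 56.1 · V · N / m. A minimal sketch of the conversion; the titration figures in the example are hypothetical, since the abstract reports only the final AVs:

```python
KOH_MG_PER_MMOL = 56.1  # molar mass of KOH, mg/mmol

def acid_value(titrant_ml, titrant_normality, sample_g):
    """Acid value: mg of KOH required to neutralise the FFA in 1 g of oil."""
    return KOH_MG_PER_MMOL * titrant_ml * titrant_normality / sample_g
```

A lower AV after reaction thus directly indicates that FFA has been consumed, consistent with conversion to hydrocarbons.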

Keywords: acid value, free fatty acids, green diesel, high pressure reactor, waste coconut oil

Procedia PDF Downloads 300
1770 Neuropsychology of Dyslexia and Rehabilitation Approaches: A Research Study Applied to School Aged Children with Reading Disorders in Greece

Authors: Rozi Laskaraki, Argyris Karapetsas, Aikaterini Karapetsa

Abstract:

This paper focuses on the efficacy of a rehabilitation program based on musical activities, applied to a group of school-aged dyslexic children. Objective: The purpose of this study was to investigate the efficacy of auditory training including musical exercises in children with developmental dyslexia (DD). Participants and Methods: 45 third- and fourth-grade students with DD and a matched control group (n=45) were involved in this study. At the outset, students participated in a clinical assessment including both electrophysiological tests (i.e., event-related potentials (ERPs), especially the P300 waveform) and neuropsychological tests, conducted at the Laboratory of Neuropsychology, University of Thessaly, Volos, Greece. The initial assessment confirmed statistically significantly lower performance for the children with DD compared to the typical readers. After the clinical assessment, a subgroup of children with dyslexia underwent a music auditory training program, conducted in 45-minute sessions, once a week, for twenty weeks. The program included structured and digitized musical activities involving pitch, rhythm, melody, and tempo perception and discrimination, as well as auditory sequencing. After the intervention period, the children underwent a new ERP recording. Results: The electrophysiological results revealed that, after the remediation program, the children had P300 latency values similar to those of the controls; thus, the children overcame their deficits. Conclusion: The outcomes of the current study suggest that ERPs are a valid clinical tool in neuropsychological assessment settings and that dyslexia can be ameliorated through music auditory training.

Keywords: dyslexia, event related potentials, learning disabilities, music, rehabilitation

Procedia PDF Downloads 147
1769 Identification of Genomic Mutations in Prostate Cancer and Cancer Stem Cells By Single Cell RNAseq Analysis

Authors: Wen-Yang Hu, Ranli Lu, Mark Maienschein-Cline, Danping Hu, Larisa Nonn, Toshi Shioda, Gail S. Prins

Abstract:

Background: Genetic mutations are highly associated with increased prostate cancer risk. In addition to whole genome sequencing, somatic mutations can be identified by aligning transcriptome sequences to the human genome. Here we analyzed bulk RNAseq and single cell RNAseq data of human prostate cancer cells and their matched non-cancer cells from benign regions in 4 individual patients. Methods: Raw sequencing reads were aligned to the reference genome hg38 using STAR. Variants were annotated using Annovar with respect to overlapping gene annotation information, effect on gene and protein sequence, and SIFT annotation of nonsynonymous variant effects. We determined cancer-specific novel alleles by comparing variant calls in cancer cells to matched benign cells from the same individual, selecting unique alleles detected only in the cancer samples. Results: In bulk RNAseq data from 3 patients, the most common variants were noncoding mutations in the UTR3/UTR5 regions, and the major variant types were single-nucleotide polymorphisms (SNPs), along with frameshift mutations. The C>T transition was the most frequent SNP substitution. A total of 222 genes carrying unique exonic or UTR variants were found in cancer cells across the 3 patients but not in benign cells. Among them, the transcript levels of 7 genes (CITED2, YOD1, MCM4, HNRNPA2B1, KIF20B, DPYSL2, NR4A1) were significantly up- or down-regulated in cancer stem cells. Of the 222 commonly mutated genes in cancer, 19 carried nonsynonymous variants and 11 were damaged genes, with variants including those predicted deleterious by SIFT, frameshifts, stop gains/losses, and insertions/deletions (indels). Two damaged genes, activating transcription factor 6 (ATF6) and the histone demethylase KDM3A, are of particular interest: the former is a survival factor for certain cancer cells, while the latter positively activates androgen receptor target genes in prostate cancer.
Further, single cell RNAseq data of cancer cells and their matched non-cancer benign cells from both primary 2D and 3D tumoroid cultures were analyzed. As in the bulk RNAseq data, single cell RNAseq in cancer showed that exonic mutations are less common than noncoding variants, with SNPs and frameshift mutations the most frequent types. Compared to cancer stem cell-enriched 3D tumoroids, 2D cancer cells carried 3 times more variants, 8 times more coding mutations, and 10 times more nonsynonymous SNPs. Finally, in both 2D primary and 3D tumoroid cultures, cancer stem cells exhibited fewer coding mutations and fewer noncoding SNPs or insertions/deletions than non-stem cancer cells. Summary: Our study demonstrates the usefulness of bulk and single cell RNAseq data in identifying somatic mutations in prostate cancer, providing an alternative method for screening candidate genes for prostate cancer diagnosis and potential therapeutic targets. Cancer stem cells carry fewer somatic mutations than non-stem cancer cells because they inherit immortal strand DNA from parental stem cells, which explains their long-lived characteristics.
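The cancer-specific allele selection described in the Methods is, at its core, a per-patient set difference of variant calls, followed by an intersection across patients. A minimal sketch with a hypothetical tuple representation; real pipelines operate on annotated VCF records:

```python
def cancer_specific_variants(cancer_calls, benign_calls):
    """Alleles called in the cancer sample but absent from the matched
    benign sample of the same patient.

    Each variant is represented as a (chrom, pos, ref, alt) tuple.
    """
    return sorted(set(cancer_calls) - set(benign_calls))

def shared_across_patients(per_patient_items):
    """Items (e.g. mutated genes) that are cancer-specific in every patient."""
    sets = [set(items) for items in per_patient_items]
    out = sets[0]
    for s in sets[1:]:
        out &= s  # keep only items present in all patients
    return sorted(out)
```

Applied per patient and then intersected, this is the logic that yields a list of commonly mutated genes such as the 222 reported above.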

Keywords: prostate cancer, stem cell, genomic mutation, RNAseq

Procedia PDF Downloads 21
1768 Architectural Approaches to a Sustainable Community with Floating Housing Units Adapting to Climate Change and Sea Level Rise in Vietnam

Authors: Nguyen Thi Thu Trang

Abstract:

Climate change and sea level rise are among the greatest challenges facing human beings in the 21st century. Because of sea level rise, several low-lying coastal areas around the globe are at risk of being completely submerged. In Viet Nam in particular, the rise in sea level is predicted to result in more frequently, and even permanently, inundated coastal plains. As a result, the land reserves of coastal cities will be narrowed in the near future, while construction ground is becoming increasingly limited due to rapid population growth. Faced with this reality, solutions are being discussed not only in the traditional view, in which accommodation is raised or moved to higher areas, or "living with the water", but also looking forward to "living on the water". The concept of a sustainable floating community with floating houses, building on the precious long-term historical tradition of water dwellings in Viet Nam, would therefore be a sustainable solution for adapting to climate change and sea level rise in coastal areas. A sustainable floating community comprises sustainability in four components: architecture, environment, socio-economy, and living quality. This research paper focuses on sustainability in the architectural component of the floating community. Through detailed architectural analysis of current floating houses and floating communities in Viet Nam, this research not only accumulates the precious values of traditional architecture that need to be preserved and developed in the proposed concept, but also identifies the weaknesses that need to be addressed for optimal design of future sustainable floating communities. Based on these studies, the research provides guidelines with appropriate architectural solutions for the concept of a sustainable floating community with floating housing units adapted to climate change and sea level rise in Viet Nam.

Keywords: guidelines, sustainable floating community, floating houses, Vietnam

Procedia PDF Downloads 518
1767 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis

Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino

Abstract:

A growing body of empirical research from different areas of inquiry suggests that brief contact with natural environments restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people recover cognitive resources. It assumes that contact with nature allows people to free (and then recover) voluntary attention resources and thus recover from cognitive fatigue. However, it has been suggested that some people could gain more cognitive benefit from exposure to urban environments. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. This meta-analysis intended to estimate how much natural environments (forests, parks, boulevards) are perceived to be more restorative than urban ones (i.e., the magnitude of the difference in perceived restorativeness). Moreover, given the methodological differences between studies, it examined the potential moderating role of variables such as participants (students or others), instrument used (Perceived Restorativeness Scale or other), and procedure (in laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). Reference sections of the obtained papers were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison between one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis.
To estimate the average effect size, a random-effects model (restricted maximum-likelihood estimator) was used: because the included studies were conducted independently, with different methods and in different populations, no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed-effects models) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming empirically what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared to those commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and between-study variability was very high (I² [C.I.] = 96.97% [94.61-98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the between-study variability. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain it. Alternatively, the variability might be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude). Further moderator analyses are in progress.
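The pooling step described above can be sketched numerically. The paper uses a restricted maximum-likelihood (REML) estimator; for brevity, the sketch below uses the closely related DerSimonian-Laird moment estimator of the between-study variance τ², together with the Cochran's Q and I² heterogeneity statistics reported in the results. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def random_effects_pool(d, v):
    """Pool standardized mean differences (d) with sampling variances (v)
    under a random-effects model, using the DerSimonian-Laird estimator
    of the between-study variance tau^2."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                               # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)       # fixed-effect pooled estimate
    Q = np.sum(w * (d - d_fixed) ** 2)        # Cochran's Q heterogeneity statistic
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)             # between-study variance (truncated at 0)
    w_star = 1.0 / (v + tau2)                 # random-effects weights
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # % of variability due to heterogeneity
    return d_pooled, tau2, Q, i2
```

With homogeneous inputs τ² collapses to zero and the pooled estimate reduces to the inverse-variance weighted mean; with heterogeneous inputs τ² inflates the study variances, as in the high-I² situation reported above.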

Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments

Procedia PDF Downloads 169
1766 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector

Authors: Sanaz Moayer, Fang Huang, Scott Gardner

Abstract:

In the highly leveraged business world of today, an organisation's success depends on how it manages and organizes its tangible and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognized in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies, and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT, and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk.
It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.

Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management

Procedia PDF Downloads 415
1765 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries

Authors: Nessreen Y. Ibrahim

Abstract:

Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for substantially improving the economies of developing countries. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and middle-income countries have negative effects on the kind and quality of new technologies imported for, and applied in, their markets. Thus, there is a strong need for a paradigm shift in design thinking to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve particular economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper was to build an innovative economic model to design a chosen possible futuristic scenario: by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, we consider it a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point for designing the future.

Keywords: developing countries, economic models, future design, possible futures

Procedia PDF Downloads 267
1764 Gamipulation: Exploring Covert Manipulation through Gamification in the Context of Education

Authors: Aguiar-Castillo Lidia, Perez-Jimenez Rafael

Abstract:

The integration of gamification in educational settings aims to enhance student engagement and motivation through game design elements in learning activities. This paper introduces "Gamipulation," the subtle manipulation of students via gamification techniques serving hidden agendas without explicit consent. It highlights the need to distinguish between beneficial and exploitative uses of gamification in education, focusing on its potential to psychologically manipulate students for purposes misaligned with their best interests. Through a literature review and expert interviews, this study presents a conceptual framework outlining gamipulation's features. It examines ethical concerns like gradually introducing desired behaviors, using distraction to divert attention from significant learning objectives, immediacy of rewards fostering short-term engagement over long-term learning, infantilization of students, and exploitation of emotional responses over reflective thinking. Additionally, it discusses ethical issues in collecting and utilizing student data within gamified environments.  Key findings suggest that while gamification can enhance motivation and engagement, there's a fine line between ethical motivation and unethical manipulation. The study emphasizes the importance of transparency, respect for student autonomy, and alignment with educational values in gamified systems. It calls for educators and designers to be aware of gamification's manipulative potential and strive for ethical implementation that benefits students. In conclusion, this paper provides a framework for educators and researchers to understand and address gamipulation's ethical challenges. It encourages developing ethical guidelines and practices to ensure gamification in education remains a tool for positive engagement and learning rather than covert manipulation.

Keywords: gradualness, distraction, immediacy, infantilization, emotion

Procedia PDF Downloads 27
1763 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition – i.e., "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated - something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way implies the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacan registers of reality, i.e., the Imaginary - Symbolic – Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette, and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication - repetition, but not in the sense of "copying" and "repetition" of reality or the original, but of repeated repetitions of the simulacrum. Referential works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience) - repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images results in a new traumatic effect or cancels it. 
Reflections on repetition as a temporal and spatial phenomenon are aligned with the chapters that link philosophical considerations of space, time, and the experience of temporality with their manifestation in works of art. The observations about time and the relation between perception and memory follow Henri Bergson and his conception of duration (durée) as a "quality of quantity." Video works intended to be displayed as a loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of temporal and spatial intervals as places and moments "between", points of connection and separation, of continuity and stopping, with reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The range of possibilities that can be explored in the interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 173
1762 Body Types of Softball Players in the 39th National Games of Thailand

Authors: Nopadol Nimsuwan, Sumet Prom-in

Abstract:

The purpose of this study was to investigate the body types, sizes, and body compositions of softball players in the 39th National Games of Thailand. The population comprised 352 softball players who participated in the 39th National Games of Thailand, from which a sample of 291 was determined using the Taro Yamane formula and selected by stratified sampling. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper-arm circumference, calf circumference, and subcutaneous fat in the upper arm area, the scapula area, the area above the pelvis, and the mid-calf area. The Keys and Brozek formula was used to calculate fat mass, the Kitagawa formula to calculate muscle mass, and the Heath-Carter method to determine body-dimension values. The results can be summarized as follows. The average body dimensions of the male softball players corresponded to the endo-mesomorph body type, while those of the female players corresponded to the meso-endomorph body type. When considered by playing position, the male players in every position had the endo-mesomorph body type, while the female players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players, and the meso-endomorph body type is suitable for female players, because these body types suit the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, those involved in selecting softball players for competitions at different levels should consider body type, size, and body composition.
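The Heath-Carter somatotype ratings mentioned above are computed from anthropometric measurements. The abstract does not reproduce the equations, so the sketch below uses the standard published Heath-Carter formulas for two of the three components: endomorphy (from height-corrected skinfolds) and ectomorphy (from the height-weight ratio). Mesomorphy additionally requires bone breadths and limb girths and is omitted here. The coefficients are the textbook values, not figures taken from this study.

```python
def endomorphy(triceps, subscapular, supraspinale, height_cm):
    """Heath-Carter endomorphy from three skinfolds (mm), corrected
    for stature (the 170.18 cm reference height)."""
    x = (triceps + subscapular + supraspinale) * 170.18 / height_cm
    return -0.7182 + 0.1451 * x - 0.00068 * x ** 2 + 0.0000014 * x ** 3

def ectomorphy(height_cm, weight_kg):
    """Heath-Carter ectomorphy from the height-weight ratio (HWR)."""
    hwr = height_cm / weight_kg ** (1.0 / 3.0)
    if hwr >= 40.75:
        return 0.732 * hwr - 28.58
    if hwr > 38.25:
        return 0.463 * hwr - 17.63
    return 0.1          # floor value for very low HWR
```

A rating above about 5 on one component and below 3 on the others marks a dominant component; labels such as "endo-mesomorph" indicate mesomorphy dominant with endomorphy second.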

Keywords: body types, softball players, national games of Thailand, social sustainability

Procedia PDF Downloads 484
1761 Duration of the Disease in Systemic Sclerosis and Efficiency of Rituximab Therapy

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives: The duration of the disease could be one of the leading factors in the effectiveness of therapy in systemic sclerosis (SSc). The aim of the study was to assess how disease duration affects changes in lung function in patients (pts) with interstitial lung disease (ILD) associated with SSc during long-term rituximab (RTX) therapy. Methods: We prospectively included 113 pts with SSc in this study; 85% were female, and the mean age was 48.1±13 years. 62 pts had the diffuse cutaneous subset of the disease, 40 the limited subset, and 11 overlap. The mean disease duration was 6.1±5.4 years. Pts were divided into two groups by disease duration: group 1 (less than 5 years, 63 pts) and group 2 (more than 5 years, 50 pts). All pts received prednisolone at a mean dose of 11.5±4.6 mg/day, and 53 of them received immunosuppressants at inclusion. The parameters were evaluated at baseline (point 0) and at 13±2.3 months (point 1), 42±14 months (point 2), and 79±6.5 months (point 3) after initiation of RTX therapy. The cumulative mean dose of RTX in group 1 was 1.7±0.6 g at point 1, 3.3±1.5 g at point 2, and 3.9±2.3 g at point 3; in group 2 it was 1.6±0.6 g at point 1, 2.7±1.5 g at point 2, and 3.7±2.6 g at point 3. The results are presented as mean values, delta (Δ), median (me), and upper and lower quartiles. Results: There was a significant increase in forced vital capacity % predicted (FVC) in both groups, but at points 1 and 2 the improvement was more significant in group 1. In group 2, an improvement in FVC was noted with longer follow-up. Diffusion capacity for carbon monoxide % predicted (DLCO) remained stable at point 1 and then significantly improved by the third year of RTX therapy in both groups. In group 1 at point 1: ΔFVC = 4.7 (me=4; [-1.8;12.3])%, ΔDLCO = -1.2 (me=-0.3; [-5.3;3.6])%; at point 2: ΔFVC = 9.4 (me=7.1; [1;16])%, ΔDLCO = 3.7 (me=4.6; [-4.8;10])%; at point 3: ΔFVC = 13 (me=13.4; [2.3;25.8])%, ΔDLCO = 2.3 (me=1.6; [-5.6;11.5])%.
In group 2 at point 1: ΔFVC = 3.4 (me=2.3; [-0.8;7.9])%, ΔDLCO = 1.5 (me=1.5; [-1.9;4.9])%; at point 2: ΔFVC = 7.6 (me=8.2; [0;12.6])%, ΔDLCO = 3.5 (me=0.7; [-1.6;10.7])%; at point 3: ΔFVC = 13.2 (me=10.4; [2.8;15.4])%, ΔDLCO = 3.6 (me=1.7; [-2.4;9.2])%. Conclusion: Patients with early SSc show a quicker response to RTX therapy, already at 1 year of follow-up. Patients with a disease duration of more than 5 years also respond to therapy, but with longer treatment. RTX is an effective option for the treatment of SSc-ILD, regardless of disease duration.

Keywords: interstitial lung disease, systemic sclerosis, rituximab, disease duration

Procedia PDF Downloads 23
1760 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli

Abstract:

Oil exploration and production companies invest considerable time and effort to extract essential well attributes (such as well status, surface and target coordinates, wellbore depths, and event timelines) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may be depicted in different syntactic representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs, incorporating text semantics and layout information from the top-ranked pages in a document. To better handle context when dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and its performance were studied on several real-life well technical reports.
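The retrieval step, ranking candidate pages by combining image-embedding and text-embedding similarity to the query, can be sketched as follows. The blend weight `alpha`, the cosine-similarity scoring, and the toy vectors are illustrative assumptions; the actual system derives its embeddings from transformer models and may combine the scores differently.

```python
import numpy as np

def rank_pages(query_vec, page_text_vecs, page_image_vecs, alpha=0.5):
    """Rank document pages by a weighted blend of semantic-text and
    page-image embedding similarity to the query (cosine similarity).
    Returns page indices, best match first."""
    def cos(a, B):
        # cosine similarity of query vector a against each row of B
        a = a / np.linalg.norm(a)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return B @ a
    score = (alpha * cos(query_vec, page_text_vecs)
             + (1 - alpha) * cos(query_vec, page_image_vecs))
    return np.argsort(-score)
```

In the full workflow, the top-ranked pages from this step would then be passed to the layout-aware question-answering model for attribute-value extraction.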

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 193
1759 Clinical and Epidemiological Profile of Patients with Chronic Obstructive Pulmonary Disease in a Medical Institution from the City of Medellin, Colombia

Authors: Camilo Andres Agudelo-Velez, Lina María Martinez-Sanchez, Natalia Perilla-Hernandez, Maria De Los Angeles Rodriguez-Gazquez, Felipe Hernandez-Restrepo, Dayana Andrea Quintero-Moreno, Camilo Ruiz-Mejia, Isabel Cristina Ortiz-Trujillo, Monica Maria Zuluaga-Quintero

Abstract:

Chronic obstructive pulmonary disease is a common condition, characterized by a persistent, partially reversible, and progressive blockage of airflow, that accounts for 5% of total deaths around the world and is expected to become the third leading cause of death by 2030. Objective: To establish the clinical and epidemiological profile of patients with chronic obstructive pulmonary disease in a medical institution in the city of Medellin, Colombia. Methods: A cross-sectional study was performed with a sample of 50 patients diagnosed with chronic obstructive pulmonary disease in a private institution in Medellin during 2015. The software SPSS v. 20 was used for the statistical analysis. For the quantitative variables, averages, standard deviations, and maximum and minimum values were calculated, while for ordinal and nominal qualitative variables, proportions were estimated. Results: The average age was 73.5±9.3 years; 52% of the patients were women, 50% were retired, 46% were married, and 80% lived in the city of Medellin. The mean time since diagnosis was 7.8±1.3 years, and 100% of the patients were treated at the internal medicine service. The most common clinical features were: 36% were classified as class D for the disease, 34% had an FEV1 < 30%, 88% had a history of smoking, and 52% had oxygen therapy at home. Conclusion: Class D was the most common, and the majority of the patients had a history of smoking, indicating the need to strengthen promotion and prevention strategies in this regard.

Keywords: pulmonary disease, chronic obstructive, pulmonary medicine, oxygen inhalation therapy

Procedia PDF Downloads 444
1758 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case

Authors: Arzu K. Kamberli, Tolga Ulusoy

Abstract:

Technological developments and the resulting global communication have made the 21st century an era in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital inflows have accelerated, bringing with them crisis-related contagion. Given irrational human behavior, financial crises have become a fundamental problem for countries worldwide and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex nature of financial crises, and a structure that linear models fail to explain, have accordingly been taken up by the new discipline of econophysics. As is well known, although mechanisms for predicting financial crises exist, there is no definitive method. In this context, this study develops an early econophysical approach to global financial crises using the concept of the electric field from electrostatics. The aim is to define a model that can operate before a financial crisis, identify financial fragility at an earlier stage, and help public- and private-sector actors, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. The concept of Φ (financial flux), based on currency movements and money mobility, has been adopted for crisis pre-warning by taking advantage of this analogy.
The Φ (financial flux) values, obtained for the first time in this study from the resulting formula, were analyzed with Matlab software, and in this context the Φ values for the 2001 Turkey and Argentina crises were confirmed to give pre-warning.
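The abstract invokes Gauss's law without reproducing it. In its electrostatic form it reads:

```latex
\Phi_E \;=\; \oint_S \mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
```

that is, the flux of the electric field through a closed surface is proportional to the charge enclosed. One reading of the analogy described, and it is only a reading, since the authors' actual formula for Φ is not given in the abstract, is that a "currency movement" field plays the role of $\mathbf{E}$, an economy's boundary plays the role of the surface $S$, and money mobility inside that boundary plays the role of the enclosed charge $Q_{\mathrm{enc}}$, so that anomalous growth of the financial flux Φ signals building fragility before the crisis breaks.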

Keywords: econophysics, financial crisis, Gauss's Law, physics

Procedia PDF Downloads 153
1757 Effect of Mixture of Flaxseed and Pumpkin Seeds Powder on Hypercholesterolemia

Authors: Zahra Ashraf

Abstract:

Flax and pumpkin seeds are rich sources of unsaturated fatty acids, antioxidants, and fiber, known to have anti-atherogenic properties. Hypercholesterolemia is a state characterized by an elevated level of cholesterol in the blood. This research was designed to study the effect of a flax and pumpkin seed powder mixture on hypercholesterolemia and body weight. Albino rats were selected as a model for humans. Thirty male albino rats were divided into three groups: a control group, a CD-chol group (control diet + cholesterol) fed 1.5% cholesterol, and an FP-chol group (flaxseed and pumpkin seed powder + cholesterol) fed 1.5% cholesterol. Flax and pumpkin seed powders were mixed at a proportion of 5/1 (omega-3 to omega-6). Blood samples were collected to examine the lipid profile, and body weight was also measured. The data were subjected to analysis of variance. In the CD-chol group, body weight, plasma total cholesterol (TC), plasma triacylglycerides (TG), plasma LDL-C, and the LDL/HDL ratio increased significantly, with a decrease in plasma HDL (good cholesterol). In the FP-chol group, lipid parameters and body weight decreased significantly, with an increase in HDL and a decrease in LDL (bad cholesterol). The mean values of body weight, total cholesterol, triglycerides, low-density lipoprotein, and high-density lipoprotein in the FP-chol group were 240.66±11.35 g, 59.60±2.20 mg/dl, 50.20±1.79 mg/dl, 36.20±1.62 mg/dl, and 36.40±2.20 mg/dl, respectively. The flaxseed and pumpkin seed powder mixture reduced body weight, serum cholesterol, low-density lipoprotein, and triglycerides, while a significant increase was shown in high-density lipoprotein when given to hypercholesterolemic rats. Our results suggest that the flax and pumpkin seed mixture has hypocholesterolemic effects, probably mediated by the polyunsaturated fatty acids (omega-3 and omega-6) present in the seed mixture.

Keywords: hypercholesterolemia, omega-3 and omega-6 fatty acids, cardiovascular diseases

Procedia PDF Downloads 420
1756 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index

Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad

Abstract:

A road safety performance index is a composite index that combines various road safety indicators into a single number. Developing such an index from appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given the priority they need. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on both facilities and road user behavior. The secondary objectives are to find the critical inputs to the RSPI and to find the better method of constructing the index. In this study, the RSPI is developed from four main safety performance indicators: protective systems (seat belt, helmet, etc.), the road (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four indicators were collected through an observation survey on a 20 km section of the National Highway N-125 near Taxila, Pakistan. For the development of the composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of the indicators to obtain a single value: an individual index score is calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The simple average technique, in turn, is used for weighting and linear aggregation of the indicators to develop an alternative RSPI. The road sections are ranked according to RSPI scores under both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the simple average technique.
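The two aggregation schemes just described can be sketched side by side. The sketch below standardizes the indicator matrix, takes PCA weights from the loadings of the first principal component, and compares the result with the equal-weight (simple average) score. The indicator layout and the use of absolute first-component loadings as weights are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def rspi_scores(X):
    """Composite road-safety index per road section, two ways.
    X: rows = road sections, columns = safety performance indicators."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
    corr = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)    # eigh: ascending eigenvalues
    pc1 = eigvecs[:, -1]                       # loadings of first principal component
    w = np.abs(pc1) / np.abs(pc1).sum()        # normalized PCA weights
    pca_index = Z @ w                          # weighted linear aggregation
    ew_index = Z.mean(axis=1)                  # equal-weight (simple average) index
    return pca_index, ew_index
```

Sections are then ranked by either score; when all indicators are perfectly correlated, the PCA weights collapse to equal weights and the two indices coincide.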

Keywords: indicators, aggregation, principal component analysis, weighting, index score

Procedia PDF Downloads 158
1755 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique modeled on the flashing behavior of fireflies; in this context, it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction in computational cost.
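The EM fit and Bayes-rule assignment described above can be sketched for the one-dimensional (pixel intensity) case. In the paper, the initial means come from the Firefly Algorithm's search over the image histogram; here they are simply passed in as an argument, so this is a minimal illustration of the GMM half of the method, not the authors' implementation.

```python
import numpy as np

def gmm_em_1d(x, means, n_iter=100):
    """Fit a 1-D Gaussian mixture to intensities x by Expectation-
    Maximization, starting from the supplied component means, and
    return per-point cluster labels via the posterior-maximum rule."""
    x = np.asarray(x, float)
    k = len(means)
    mu = np.asarray(means, float)
    var = np.full(k, x.var() / k)        # broad initial variances
    pi = np.full(k, 1.0 / k)             # mixing coefficients (priors)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point (Bayes rule)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return np.argmax(r, axis=1)          # assign each intensity to a cluster
```

For an image, one would flatten the pixel array, run this fit, and reshape the labels back to the image dimensions; assigning by the maximum responsibility is exactly the posterior-maximum step the abstract credits with reducing computational cost.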

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 217
1754 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery

Abstract:

Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate with a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of delta-V with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted for manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria: studies investigating the human response to impacts, reporting head acceleration, and reporting the delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit-line analysis allowed an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively. Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
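The fitting strategy, a best-fit line followed by a quadratic model, can be sketched as below. The (delta-V, head-acceleration) pairs are hypothetical values spanning the reported ranges, not the study's 139 crashes, and the coefficients obtained here are purely illustrative.

```python
import numpy as np

# Hypothetical (delta-V [km/h], peak head acceleration [g]) pairs
# spanning the reported ranges (1.3-11.1 km/h, 0.6-17.2 g).
delta_v = np.array([1.5, 2.0, 3.0, 4.5, 6.0, 8.0, 10.0, 11.0])
head_g  = np.array([0.8, 1.2, 2.5, 4.0, 6.5, 9.8, 14.0, 16.5])

# Model 1: best-fit line; Model 2: quadratic, mirroring the paper's
# progression from a linear to a second-order fit.
lin = np.polyfit(delta_v, head_g, 1)
quad = np.polyfit(delta_v, head_g, 2)

def predict(coeffs, x):
    """Evaluate a polynomial model at delta-V = x."""
    return np.polyval(coeffs, x)

print(lin, predict(quad, 5.0))
```

A full linear mixed model (accounting for occupant parameters as random effects) would require a dedicated library such as `statsmodels`; the polynomial fits above only cover the fixed-effect trend.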

Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI

Procedia PDF Downloads 233
1753 Subsurface Structures Delineation and Tectonic History Investigation Using Gravity, Magnetic and Well Data, In the Cyrenaica Platform, NE Libya

Authors: Mohamed Abdalla Saleem

Abstract:

Around one hundred wells have been drilled in the Cyrenaica platform, north-east Libya, and almost all of them were dry, although the drilled samples reveal good oil shows and good source-rock maturity. Most of the Upper Cretaceous and younger depositional successions crop out in different places, so the structures of the Cretaceous and overlying Cenozoic are well understood and mapped; the subsurface beneath these outcrops, however, still needs further investigation and delineation. This study aims to answer questions about the tectonic history and the types of structures distributed in the area using gravity, magnetic, and well data. According to information obtained from groups of wells drilled in concessions 31, 35, and 37, the depositional sections become thicker and deeper southward. The topography map of the study area shows that the area is highly elevated in the north, about 300 m above sea level, while the minimum elevation (16-18 m) occurs near the middle (lat. 30°); south of this latitude, the terrain rises again (more than 100 m). The third-order residual gravity map, constructed from the Bouguer gravity map, reveals that the area is dominated by a large negative anomaly acting as a sub-basin (245 km x 220 km), implying a very thick depositional section and a deep basement. This depocenter is surrounded by four high gravity anomalies (12-37 mGal), indicating a shallower basement and a relatively thinner sedimentary succession. The highest gravity values are located along the coastline. The total horizontal gradient (THG) map reveals two structural systems: the first oriented NE-SW, crosscut by a second trending NW-SE. The second system is distributed across the whole area; it is strong and shallow near the coastline and in the south, while it is relatively deep in the central depocenter.
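The total horizontal gradient used above is a standard edge-detection operator for potential-field data, THG = sqrt((dg/dx)^2 + (dg/dy)^2), whose maxima track the edges of density contrasts such as faults. A minimal sketch on a synthetic gravity grid (the anomaly geometry and 1 km grid spacing are assumptions for illustration):

```python
import numpy as np

# Synthetic Bouguer-style gravity grid (mGal) with a NE-SW trending
# ridge anomaly along x = y; grid spacing assumed to be 1 km.
x = np.linspace(0, 100, 101)
X, Y = np.meshgrid(x, x)
g = 20 * np.exp(-((X - Y) ** 2) / 500)

# Total horizontal gradient: THG = sqrt((dg/dx)^2 + (dg/dy)^2).
# np.gradient returns derivatives along rows (y) then columns (x).
dgdy, dgdx = np.gradient(g, 1.0)
thg = np.hypot(dgdx, dgdy)
print(thg.max())
```

Note that the THG vanishes on the anomaly crest and peaks on its flanks, which is why THG maxima are interpreted as structure edges rather than structure axes.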

Keywords: Cyrenaica platform, gravity, structures, basement, tectonic history

Procedia PDF Downloads 3
1752 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario

Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos

Abstract:

Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, they must maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that overall business performance is assessed from different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. Companies must also evaluate and prioritize the strategic projects to be implemented, ensuring they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposal incorporates the SAPEVO-M multicriteria method to indicate the degree of relevance of the different perspectives, so that strategic objectives linked to more relevant perspectives carry greater weight in the classification of structural projects. Additionally, the concept of the Impact & Probability Matrix (I&PM) is applied to structure the evaluation of strategic projects according to their relevance and impact on the business. Structuring strategic management in this way aligns and prioritizes the projects and actions related to strategic planning, directing resources towards the most relevant and impactful initiatives. The objective of this article is therefore to present a proposal integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and to define strategic projects coherently aligned with the business vision, supporting a robust decision-making process.
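The prioritization step can be sketched as a weighted scoring exercise. The perspective weights below stand in for the output of SAPEVO-M (whose ordinal pairwise-comparison procedure is not reproduced here), and the project names, alignment scores, and impact/probability ratings are hypothetical.

```python
# Assumed perspective weights (in practice derived by SAPEVO-M from
# decision-makers' pairwise ordinal judgments).
weights = {"financial": 0.35, "customer": 0.30,
           "internal": 0.20, "learning": 0.15}

# Hypothetical projects: per-perspective alignment scores (0-1) and
# Impact & Probability Matrix ratings (1-5 each).
projects = {
    "ERP upgrade":   {"scores": [0.9, 0.4, 0.8, 0.5], "impact": 4, "prob": 3},
    "CRM rollout":   {"scores": [0.5, 0.9, 0.6, 0.4], "impact": 3, "prob": 4},
    "Training plan": {"scores": [0.2, 0.3, 0.5, 0.9], "impact": 2, "prob": 5},
}

def priority(p):
    # Weighted alignment with the BSC perspectives, scaled by the
    # project's I&PM score (impact x probability).
    aligned = sum(w * s for w, s in zip(weights.values(), p["scores"]))
    return aligned * p["impact"] * p["prob"]

ranking = sorted(projects, key=lambda k: priority(projects[k]), reverse=True)
print(ranking)
```

The design choice here is multiplicative scaling: a well-aligned project with negligible impact or low probability of success still ranks low, which matches the article's intent of filtering projects by both alignment and I&PM relevance.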

Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method

Procedia PDF Downloads 81
1751 Roundabout Implementation Analyses Based on Traffic Microsimulation Model

Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković

Abstract:

Roundabouts are a common choice in the reconstruction of an intersection, whether the goal is to improve capacity or traffic safety, especially in urban conditions. Regulations for the design of roundabouts are often tied to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulations are usually recommended within a wide range (as in the Croatian regulations), and the final design of a roundabout largely depends on the designer's experience and choice of design elements. Before-after analyses are therefore a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulations. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic loads) and design-element data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety, so the collected data were analyzed to see whether the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created from the collected data, and the influence of increased traffic load, different traffic structures, and the selected design elements on the capacity of the roundabout was analyzed. Also, through analysis of operating speeds and potential conflicts using the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was assessed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location.
The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective in present and changed traffic and design conditions.
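Alongside microsimulation, roundabout entry capacity is often sanity-checked with a closed-form model. One widely used form (HCM 2010 style, single-lane entry against a single circulating lane) is c = 1130 * exp(-0.0010 * v_c), with v_c the conflicting circulating flow in passenger cars per hour; the exact coefficients vary by country and calibration, so treat the values below as illustrative.

```python
import math

def entry_capacity(v_conflicting: float) -> float:
    """Single-lane roundabout entry capacity (pc/h) as an exponential
    decay in the conflicting circulating flow (HCM 2010-style form)."""
    return 1130.0 * math.exp(-1.0e-3 * v_conflicting)

# Capacity drops steeply as circulating traffic grows.
for v_c in (0, 300, 600, 900):
    print(v_c, round(entry_capacity(v_c)))
```

A calibrated microsimulation model such as VISSIM replaces this aggregate relation with explicit gap-acceptance and car-following behavior, which is what allows the paper to also test changed traffic structures and design elements.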

Keywords: before-after analysis, operating speed, capacity, design

Procedia PDF Downloads 22
1750 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process

Authors: M. Dezyani, R. Ezzati Belvirdi, M. Shakerian, H. Mirzaei

Abstract:

Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. No published studies appear to exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of SHMP concentration (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with increasing SHMP concentration. Holding time also led to a slight reduction in meltability. The hardness of process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, attributable to residual colloidal Ca phosphate, shifted to lower pH values with increasing SHMP concentration. The insoluble Ca and the total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with increasing ES concentration because some of the added SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.
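The central composite rotatable design above is typically analyzed by fitting a second-order response surface to the two factors. A minimal sketch with ordinary least squares follows; the design points and meltability responses are hypothetical values that merely mimic the reported trend (meltability falling with SHMP concentration and holding time), not the study's measurements.

```python
import numpy as np

# Hypothetical design points: SHMP concentration (%) and holding
# time (min), with illustrative meltability responses.
shmp = np.array([0.25, 0.25, 2.75, 2.75, 1.5, 1.5, 1.5, 0.25, 2.75])
time = np.array([0.0, 20.0, 0.0, 20.0, 10.0, 0.0, 20.0, 10.0, 10.0])
melt = np.array([55.0, 50.0, 30.0, 22.0, 40.0, 44.0, 35.0, 52.0, 26.0])

# Second-order response surface:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones_like(shmp), shmp, time,
                     shmp**2, time**2, shmp * time])
coef, *_ = np.linalg.lstsq(A, melt, rcond=None)

def predict(x1, x2):
    """Predicted meltability at SHMP = x1 (%), holding time = x2 (min)."""
    return coef @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

print(predict(1.0, 10.0))
```

In practice the fitted surface is then inspected for significant linear, quadratic, and interaction terms to decide which factor dominates each response.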

Keywords: Iranian processed cheese, emulsifying salt, rheology, texture

Procedia PDF Downloads 432