Search results for: housing prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2861

371 Electroencephalography Correlates of Memorability While Viewing Advertising Content

Authors: Victor N. Anisimov, Igor E. Serov, Ksenia M. Kolkova, Natalia V. Galkina

Abstract:

The problem of the memorability of advertising content is closely connected with key issues in neuromarketing. The memorability of advertising content contributes to the marketing effectiveness of the promoted product. Significant directions in studying the phenomenon of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through memorization of dynamic audiovisual advertising content, i.e., a commercial). The aim of this work is to reveal predictors of the memorization of static and dynamic audiovisual stimuli (logos and commercials). An important direction of the research was revealing differences in the psychophysiological correlates of memorability between static and dynamic audiovisual stimuli. We assumed that static and dynamic images are perceived in different ways and may differ in the memorization process. Objective methods of recording psychophysiological parameters while watching static and dynamic audiovisual materials are well suited to this aim. Electroencephalography (EEG) was performed to identify correlates of the memorability of the various stimuli in the electrical activity of the cerebral cortex. All stimuli (in the static and dynamic groups separately) were divided into two groups, remembered and not remembered, based on the results of a questionnaire. Participants completed the questionnaires not immediately after viewing the stimuli but after a time interval (to detect stimuli retained through long-term memorization). Using statistical methods, we developed a classifier (statistical model) that predicts which group (remembered or not remembered) a stimulus falls into, based on the psychophysiological correlates of its perception. The output of the statistical model was compared with the results of the questionnaire. 
Conclusions: Predictors of the memorability of static and dynamic stimuli have been identified, which allows prediction of which stimuli will have a higher probability of being remembered. A further development of this study will be the creation of a stimulus memory model with the possibility of recognizing a stimulus as previously seen or new. Thus, in the process of remembering a stimulus, it is planned to take into account the stimulus recognition factor, which is one of the most important tasks for neuromarketing.
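As a sketch of the classification step described above, the following trains a simple logistic-regression classifier to separate remembered from not-remembered stimuli. The two EEG band-power features and the synthetic data are assumptions made for illustration; the abstract does not disclose the authors' actual features or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EEG features per stimulus (e.g. mean alpha and theta band
# power); "remembered" stimuli are drawn with slightly shifted band power.
n = 200
X_not = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
X_rem = rng.normal([1.5, 1.0], 1.0, size=(n, 2))
X = np.vstack([X_not, X_rem])
y = np.array([0] * n + [1] * n)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression with a bias term."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)

w = fit_logistic(X, y)
acc = (predict(w, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real pipeline would extract features from recorded EEG epochs and validate on held-out stimuli rather than report training accuracy.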

Keywords: memory, commercials, neuromarketing, EEG, branding

Procedia PDF Downloads 230
370 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation

Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov

Abstract:

The problem of ensuring the integrity of VVER-type reactor equipment is now highly topical in connection with justifying the safety of NPP units and extending their service life to 60 years and more. First of all, this concerns older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements ensuring the reliability of such estimation, and also the evaluation of VVER equipment lifetime, is the monitoring of the equipment's radiation loading parameters. In this connection, there is the problem of justifying the normative parameters used for estimating pressure vessel metal embrittlement, namely the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacements per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, the neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, the DPA parameter is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) should be kept under control for all VVER equipment, and to give reasonable estimations of such parameters over the volume of all the equipment. The second task was to give a conservative estimation of each parameter, including its uncertainty. 
The results of recently performed investigations allow testing of the conservatism of the calculational predictions and, as shown in the paper, combining ex-vessel measured data with calculated data allows assessment of unpredicted uncertainties resulting from the specific, unique features of the individual equipment of a VVER reactor. Some results of these calculational-experimental investigations are presented in this paper.
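The DPA-versus-fluence comparison can be illustrated numerically: DPA is a spectrum-weighted sum of group fluences and displacement cross-sections, whereas FNF simply counts neutrons above the 0.5 MeV threshold. The two-group structure and the cross-section values below are invented for illustration; real assessments use multigroup neutron transport spectra.

```python
# Hypothetical two-group illustration:
#   dpa = sum over energy groups of (group fluence * dpa cross-section).
BARN = 1.0e-24  # cm^2 per barn

groups = [
    # (group fluence in n/cm^2, dpa cross-section in barns) -- made-up values
    (4.0e19, 1500.0),   # fast group, E > 0.5 MeV
    (9.0e19, 300.0),    # intermediate group, below the FNF threshold
]

dpa = sum(fluence * sigma * BARN for fluence, sigma in groups)
fnf = groups[0][0]  # FNF counts only the fast group, F(E > 0.5 MeV)

print(f"F(E > 0.5 MeV) = {fnf:.2e} n/cm^2, DPA = {dpa:.3f}")
```

The point of the comparison is that the intermediate group contributes to DPA but is invisible to the FNF parameter, which is why DPA maps over the equipment can be more conservative.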

Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations

Procedia PDF Downloads 139
369 Urban Security through Urban Transformation: Case of Saraycik District

Authors: Emir Sunguroglu, Merve Sunguroglu, Yesim Aliefendioglu, Harun Tanrivermis

Abstract:

Basic human needs range from physiological needs, such as food, water, and shelter, to safety needs, such as security and protection from natural disasters and even urban terrorism; these needs are not fully met even in urban areas where people live in large communities. When these basic needs go unmet in urban life, they give rise to a distinct set of crimes defined as urban crimes. Urban crimes mostly result from socioeconomic differences within society, and income inequality increases the tendency towards urban crime. Especially in slum areas and suburbs, urban crimes not only threaten public security but also affect the delivery of public services. It is highlighted that building urban security against the problems caused by urban crimes is not achieved solely by involving security forces in the safety of the community; it also requires juridical development and staying above a threshold of legal standards concurrently. The idea of urban transformation emerged as intervention through the demolition and rebuilding of the built environment to solve the unhealthy urban environments, inadequate infrastructure, and socioeconomic problems that arose during industrialization. Considering the likelihood of the urbanization process driving citizens to commit crimes, the United Nations Commission on Human Security's focus on this theme is considered a proper approach. In this study, the change in security before, during, and after urban transformation, one of the tools of the urbanization process, is discussed through the case of Sincan County's Saraycik District. The study also aims to suggest improvements to current legislation on public safety, urban resilience, and urban transformation. Although Saraycik District lies within a developing county in Ankara, Turkey, the District exhibits a negative profile relative to both the county and the country in terms of urbanization as well as socioeconomic and demographic indicators. 
Relative to the county, rates of reported intentional harm, burglary, libel and threats, and narcotics offenses are higher, and the District is defined as a 'crime hotspot'. Interviews with residents of Saraycik indicate that the greatest issue in the neighborhood is public order and security (82.44%). The District stands out for its negative aspects, especially the presence of unlicensed construction, the occurrence of serious social problems such as crime and insecurity, and the lives of inhabitants complicated by poverty and low living standards. Additionally, the social structure, demographic properties, and the crime and insecurity of the field have been addressed in this study. Consequently, urban crime rates were found to be related to level of education, employment and household income, the poverty trap, the physical condition of housing and structuration, accessibility of public services, security, migration, and safety in terms of disasters, and it is emphasized that urban transformation is one of the most important tools for providing urban security.

Keywords: urban security, urban crimes, urban transformation, Saraycik district

Procedia PDF Downloads 278
368 An Analysis of the Regression Hypothesis from a Shona Broca’s Aphasia Perspective

Authors: Esther Mafunda, Simbarashe Muparangi

Abstract:

The present paper tests the applicability of the Regression Hypothesis to the pathological language dissolution of a Shona-speaking male adult with Broca’s aphasia. It particularly assesses the prediction of the Regression Hypothesis, which states that the process by which language is forgotten is the reversal of the process by which it was acquired. The main aim of the paper is to find out whether there are mirror symmetries between the L1 acquisition and L1 dissolution of tense in Shona and, if so, what might cause these regression patterns. The paper also sought to highlight the practical contributions that linguistic theory can make to solving language-related problems. Data was collected from a 46-year-old male adult with Broca’s aphasia who was receiving speech therapy at St Giles Rehabilitation Centre in Harare, Zimbabwe. The primary data elicitation method was experimental, using the probe technique. The Shona version of the TART (Test for Assessing Reference Time), in the form of sequencing pictures, was used to assess tense in the participant with Broca’s aphasia and in a 3.5-year-old child. Using SPSS (Statistical Package for the Social Sciences) and Excel analysis, it was established that the use of the future tense was impaired in the Shona speaker with Broca’s aphasia, whilst the present and past tenses were intact. However, though the past tense was intact in the adult with Broca’s aphasia, reference was made to the remote past. The use of the future tense was also found to be difficult for the 3.5-year-old child, while no difficulties were encountered in using the present and past tenses. This means that mirror symmetries were found between the L1 acquisition and L1 dissolution of tense in Shona. On the basis of these results, it can be concluded that the use of tense in a Shona adult with Broca’s aphasia supports the Regression Hypothesis. The findings of this study are important for speech therapy in the context of Zimbabwe. 
The study also contributes to Bantu linguistics in general and to Shona linguistics in particular. Further studies could also be done focusing on the rest of the Bantu language varieties in terms of aphasia.

Keywords: Broca’s Aphasia, regression hypothesis, Shona, language dissolution

Procedia PDF Downloads 73
367 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to support social work interventions and can help practitioners make decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques include a set of generalizable, data-driven algorithms, which means that rules and solutions are derived by examining data, based on the patterns present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict what a certain outcome would be, based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem to be tackled. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which involve prediction from quantitative variables using a continuous function, and classification problems, which seek to predict results from discrete qualitative variables. For social work research, machine learning generates predictions as a key element in improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. 
The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
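For context on the reported 59.8% figure, classification accuracy is simply the fraction of correct predictions. The labels below are fabricated to show the computation; they are not the survey data.

```python
# Accuracy of a binary classifier: fraction of predictions matching the truth.
def accuracy(y_true, y_pred):
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Fabricated labels: 1 = experienced cyberbullying, 0 = did not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(accuracy(y_true, y_pred))  # 7 of 10 correct
```

For an imbalanced outcome like victimization, accuracy alone can be misleading; precision and recall per class would normally accompany it.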

Keywords: cyberbullying, evidence-based practice, machine learning, social work research

Procedia PDF Downloads 151
366 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine

Authors: Shrawan Baghel, Helen Cathcart, Niall J. O'Reilly

Abstract:

Amorphous drug formulations have great potential to enhance the solubility, and thus the bioavailability, of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive the system towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on extrapolation of the configurational entropy to zero (m_D,CE) and on the heating-rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is also assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant 'n', which provides an insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of a model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA), and crystallization kinetics are found to be in good correlation with the stability prediction of amorphous solid dispersions. 
Thus, this research work involves a multidisciplinary approach to establish fragility, GFA and crystallization kinetics as stability predictors for amorphous drug formulations.
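For reference, the VTF expression for the mean relaxation time and the standard steepness-index definition of dynamic fragility can be written as follows (symbols follow common usage; the fitted parameter values are not given in the abstract):

```latex
% VTF mean relaxation time, with fitted parameters \tau_0, D, T_0:
\tau(T) = \tau_0 \exp\!\left(\frac{D\,T_0}{T - T_0}\right)
% Dynamic fragility as the steepness of relaxation at the glass transition:
m_D = \left.\frac{\mathrm{d}\,\log_{10}\tau}{\mathrm{d}\,(T_g/T)}\right|_{T = T_g}
```

A larger m_D (a more "fragile" glass) corresponds to a steeper rise in relaxation time near Tg, which is why fragility is examined as a stability predictor.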

Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability

Procedia PDF Downloads 329
365 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil

Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz

Abstract:

Introduction: This study examines the potential applications of two oblique wing configurations that were initiated by German aerodynamicists during WWII. Because of the end of the war, that project was not completed, and this research targets the revival of the German oblique biplane configuration. The research draws upon the use of two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing and the left half like a backward-swept wing; vice versa applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential for achieving improved aerodynamic performance and flight efficiency over a wide range of sweep angles. This will help identify the most accurate value of the sweep angle at which the aircraft possesses both stability and better aerodynamics. Explaining the methods used: The aircraft configuration is designed in SolidWorks, after which a series of aerodynamic predictions is conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in ANSYS Fluent. The results are then compared with theoretical and flight data for supersonic aircraft of the same category (the AD-1) and with wind tunnel testing of a model at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient, almost double that found for fighter jets. To attain supersonic speed, the sweep angle is increased to a maximum of 60 degrees, depending on the mission profile. 
General findings: The oblique biplane can be a future fighter aircraft configuration because of its high performance in terms of aerodynamics, cost, structural design, and weight.
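One way to see why sweeping to 60° matters at high speed: for a swept or oblique wing, the freestream Mach component normal to the leading edge, M_n = M cos(Λ), governs the effective aerodynamic loading. The flight Mach number below is illustrative, not taken from the study's CFD results.

```python
import math

def normal_mach(mach, sweep_deg):
    """Mach number component normal to a leading edge swept by sweep_deg."""
    return mach * math.cos(math.radians(sweep_deg))

# At 60 deg of sweep, a supersonic freestream presents a subsonic normal
# component to the wing, delaying wave drag.
for sweep in (0, 30, 60):
    print(f"M = 1.4, sweep {sweep:2d} deg -> M_n = {normal_mach(1.4, sweep):.2f}")
```

This simple-sweep-theory estimate ignores three-dimensional and fuselage interference effects, which is what the CFD computations in the study resolve.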

Keywords: biplane, oblique wing, sweep angle, supercritical airfoil

Procedia PDF Downloads 254
364 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria

Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo

Abstract:

Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. The direct consequences are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with 'cost variability' and 'construction projects' as the search string within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were further analyzed and gaps in knowledge or research identified. From the extensive review, it was found that the number of factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea to the cost estimation problems underlying cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. 
It was recommended that practitioners in the construction industry should always take risk into account, in order to facilitate the rapid development of the construction industry in Nigeria and to give stakeholders in both the private and public sectors a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.

Keywords: cost variability, construction projects, future studies, Nigeria

Procedia PDF Downloads 174
363 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which rests on item selection and ability estimation using statistical methods: maximum-information selection (or selection from the posterior) and maximum-likelihood (ML) / maximum a posteriori (MAP) estimators, respectively. This study aims to combine classical and Bayesian approaches to IRT to create a dataset, which is then fed to a neural network that automates the process of ability estimation, and to compare the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On creating the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model would have to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be made in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network's capacity to learn unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. 
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
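To make the time-complexity point concrete, the sketch below performs the iterative step a trained network would replace: grid-search maximum-likelihood ability estimation under a 1PL (Rasch) model. The item difficulties and response pattern are invented for illustration; the study used PyMC-based IRT models on real test data.

```python
import math

def p_correct(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, difficulties, responses):
    ll = 0.0
    for b, u in zip(difficulties, responses):
        p = p_correct(theta, b)
        ll += u * math.log(p) + (1 - u) * math.log(1 - p)
    return ll

# Hypothetical 5-item test: examinee answers the three easiest items correctly.
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses = [1, 1, 1, 0, 0]

# Grid-search ML estimate; a CAT engine redoes this after every response,
# whereas a trained regressor maps the response pattern to theta in one pass.
grid = [i / 100 for i in range(-400, 401)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, difficulties, responses))
print(f"ML ability estimate: {theta_hat:.2f}")
```

In production, the grid search would be replaced by Newton-Raphson or EAP estimation; the contrast with a single forward pass through a trained network is the same.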

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 158
362 Preventative Programs for At-Risk Families of Child Maltreatment: Using Home Visiting and Intergenerational Relationships

Authors: Kristina Gordon

Abstract:

One in three children in the United States is the subject of a maltreatment investigation, and about one in nine children has a substantiated investigation. Home visiting is one of several preventative strategies rooted in an early childhood approach that fosters maternal, infant, and early childhood health, protection, and growth. In the United States, 88% of states report administering home visiting programs or state-designed models. The purpose of this study was to conduct a systematic review of home visiting programs in the United States focused on the prevention of child abuse and neglect. This systematic review included 17 articles and found that most of the studies reported positive results. Common across studies was program content related to (1) typical child development, (2) parenting education, and (3) child physical health. Although several factors common to home visiting and parenting interventions have been identified, no research has examined the common components of manualized home visiting programs to prevent child maltreatment. Child maltreatment can be addressed with home visiting programs that have evidence-based components and cultural adaptations that increase prevention by assisting families in tackling the risk factors they face. An innovative approach to child maltreatment prevention is bringing together at-risk families with the aging community. This approach was prompted by existing home visitation programs focusing only on improving skill sets and providing temporary relationships. It can give families the opportunity to build a relationship with an aging individual who can share wisdom, skills, compassion, love, and guidance to support families in their well-being and decrease the occurrence of child maltreatment. Families would be identified if they experience any of the risk factors, including parental substance abuse, parental mental illness, domestic violence, and poverty. 
Families would also be identified as at risk if they lack supportive relationships, such as grandparents or relatives, and would be referred by local agencies, such as medical clinics, hospitals, and schools, that interact with families regularly. The aging community would be recruited at local housing communities and community centers. An aging individual would be identified by the elderly community when there is a need or interest in a relationship by or for the individual, and cultural considerations would be made when assessing compatibility between families and aging individuals. The pilot program will consist of a small group of participants to allow manageable results for evaluating the efficacy of the program, and will include pre- and post-surveys to evaluate its impact. From the results, data would be analyzed to determine the efficacy as well as the adequacy of the details of the pilot. The pilot would also be evaluated on whether families were referred to Child Protective Services during the pilot, as this relates to the goal of decreasing child maltreatment. Ideally, the findings will display a decrease in child maltreatment and an increase in family well-being for participants.

Keywords: child maltreatment, home visiting, neglect, preventative, abuse

Procedia PDF Downloads 94
361 Electrochemical Bioassay for Haptoglobin Quantification: Application in Bovine Mastitis Diagnosis

Authors: Soledad Carinelli, Iñigo Fernández, José Luis González-Mora, Pedro A. Salazar-Carballo

Abstract:

Mastitis is the most relevant inflammatory disease in cattle, affecting animal health and causing important economic losses on dairy farms. The disease takes place in the mammary gland, or udder, when opportunistic microorganisms such as Staphylococcus aureus, Streptococcus agalactiae, and Corynebacterium bovis invade the teat canal. According to the severity of the inflammation, mastitis can be classified as sub-clinical, clinical, or chronic. Standard methods for mastitis detection include somatic cell counts, cell culture, the electrical conductivity of the milk, and the California test (evaluation of 'gel-like' matrix consistency after cells are lysed with detergents). However, these assays present some limitations for the accurate detection of subclinical mastitis. Recently, haptoglobin, an acute phase protein, has been proposed as a novel and effective biomarker for mastitis detection. In this work, an electrochemical biosensor based on polydopamine-modified magnetic nanoparticles (MNPs@pDA) for haptoglobin detection is reported. The MNPs@pDA were synthesized by our group and functionalized with hemoglobin, owing to its high affinity for haptoglobin. The captured protein was labeled with specific antibodies modified with the alkaline phosphatase enzyme for electrochemical detection, using an electroactive substrate (1-naphthyl phosphate) and differential pulse voltammetry. After optimization of the assay parameters, haptoglobin determination was evaluated in milk. The strategy presented in this work shows a wide detection range, achieving a limit of detection of 43 ng/mL. The accuracy of the strategy was determined by recovery assays, yielding 84% and 94.5% for two haptoglobin levels around the cut-off value. Real milk samples were tested, and the prediction capacity of the electrochemical biosensor was compared with a commercial haptoglobin ELISA kit. 
The performance of the assay demonstrates that this strategy is an excellent, practical alternative as a screening method for sub-clinical bovine mastitis detection.
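As background to the reported 43 ng/mL figure, a limit of detection is commonly computed as three times the standard deviation of the blank signal divided by the calibration slope. The blank readings and slope below are fabricated so that the arithmetic is visible; they are not the authors' measurements.

```python
# LOD = 3 * (standard deviation of blank signal) / (calibration slope).
# Hypothetical replicate blank readings, e.g. peak currents in microamps.
blank_signals = [0.102, 0.098, 0.101, 0.099, 0.100]

mean = sum(blank_signals) / len(blank_signals)
var = sum((s - mean) ** 2 for s in blank_signals) / (len(blank_signals) - 1)
sigma_blank = var ** 0.5  # sample standard deviation

slope = 1.1e-4  # hypothetical calibration slope, uA per (ng/mL)
lod = 3 * sigma_blank / slope
print(f"LOD ~ {lod:.0f} ng/mL")
```

A 10-sigma version of the same formula gives the limit of quantification, which is usually reported alongside the LOD.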

Keywords: bovine mastitis, haptoglobin, electrochemistry, magnetic nanoparticles, polydopamine

Procedia PDF Downloads 138
360 Modelling Dengue Disease with Climate Variables Using Geospatial Data for the Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the areas most vulnerable to climate change, due to flooding and sea level rise, and therefore faces an increased burden of climate-change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. Climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity for dengue prediction through the relationship of dengue incidence with climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from the satellite observations of GSMaP (Global Satellite Mapping of Precipitation), while land surface temperature and land cover data came from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, peaking during the rainy season, and predicted dengue incidence follows this dynamic well across the whole studied region. However, the severe 2007 dengue outbreak was not captured by the model, reflecting the nonlinear dependence of transmission on climate. Other possible effects will be discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
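A minimal sketch of the kind of seasonal reproduction number the study estimates is given below, using a Ross-Macdonald-style expression with a sinusoidal rainy-season forcing of mosquito density. All parameter values and the forcing are illustrative assumptions, not the study's fitted model.

```python
import math

def seasonal_R(month,
               a=0.3,     # bites per mosquito per day
               b=0.4,     # mosquito-to-human transmission probability
               c=0.4,     # human-to-mosquito transmission probability
               mu=0.1,    # mosquito mortality rate (1/day)
               tau=10.0,  # extrinsic incubation period (days)
               r=0.14):   # human recovery rate (1/day)
    """Ross-Macdonald-style reproduction number with seasonal mosquito density."""
    # Mosquito-per-human ratio assumed to peak in the rainy season (month 8).
    m = 2.0 * (1.0 + 0.8 * math.cos(2 * math.pi * (month - 8) / 12))
    return (m * a**2 * b * c * math.exp(-mu * tau)) / (r * mu)

R = [seasonal_R(mth) for mth in range(1, 13)]
peak = max(range(12), key=lambda i: R[i]) + 1
print(f"peak month: {peak}, R range: {min(R):.2f}-{max(R):.2f}")
```

With these toy values, R crosses above 1 only around the rainy season, reproducing qualitatively the July-September epidemic peak described in the abstract.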

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 358
359 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically- and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
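The core constraint in such models is that internal metabolites are balanced at steady state, S v = 0, with flux bounds encoding pathway capacity. The toy stoichiometric matrix below contrasts a glycolysis-only mode with an oxidative mode at simplified ATP yields; it illustrates the constraint-based formalism only, not the curated mitochondrial reconstruction used in the study.

```python
import numpy as np

# Rows: internal metabolites [pyruvate, ATP].
# Columns: reactions [glycolysis, oxidative phosphorylation, ATP demand].
# Stoichiometries are simplified (2 vs. ~30 ATP per glucose) for illustration.
S = np.array([
    [2.0,  -2.0,  0.0],   # glycolysis makes pyruvate; oxphos consumes it
    [2.0,  28.0, -1.0],   # ATP made by each pathway, drained by demand
])

# Glycolytic mode: no oxphos flux (pyruvate assumed exported, so only the
# ATP row must balance here).
v_glyc = np.array([1.0, 0.0, 2.0])
# Oxidative mode: pyruvate fully respired, so both rows balance.
v_ox = np.array([1.0, 1.0, 30.0])

assert np.allclose(S[1] @ v_glyc, 0.0)   # ATP balanced in glycolytic mode
assert np.allclose(S @ v_ox, 0.0)        # both metabolites balanced
print("ATP per glucose: glycolysis-only =", v_glyc[2], ", with oxphos =", v_ox[2])
```

Full flux balance analysis would choose v by linear programming (maximizing an objective such as ATP demand subject to S v = 0 and bounds); tools like COBRA automate this for genome-scale reconstructions.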

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 272
358 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors

Authors: Chen Liu, Dawit Negussey

Abstract:

EPS (Expanded Polystyrene) geofoam, a lightweight material used in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points at low and large strain levels for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower-density EPS block shows less variation in localized stress distribution than higher-density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing, and the pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructure with geofoam in geotechnical applications.

Keywords: geofoam, pressure distribution, tactile pressure sensors, interface

Procedia PDF Downloads 150
357 Evolutionary Prediction of the Viral RNA-Dependent RNA Polymerase of Chandipura vesiculovirus and Related Viral Species

Authors: Maneesh Kumar, Roshan Kamal Topno, Manas Ranjan Dikhit, Vahab Ali, Ganesh Chandra Sahoo, Bhawana, Major Madhukar, Rishikesh Kumar, Krishna Pandey, Pradeep Das

Abstract:

Chandipura vesiculovirus is an emerging (-) ssRNA viral entity belonging to the genus Vesiculovirus of the family Rhabdoviridae, associated with fatal encephalitis in tropical regions. The multi-functional viral RNA-dependent RNA polymerase (vRdRp), which contains conserved amino acid residues, synthesizes the distinct viral polypeptides. The lack of proofreading ability of the vRdRp produces many mutated variants. Here, we have performed an evolutionary analysis of 20 vRdRp protein sequences from different strains of Chandipura vesiculovirus along with other viral species from the genus Vesiculovirus, inferred in MEGA6.06 employing the Neighbour-Joining method. The p-distance algorithmic method was used to calculate the optimum tree, which showed a sum of branch lengths of about 1.436. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. No mutation was observed in the Indian strains of Chandipura vesiculovirus. In vRdRp, residues 1230 (His) and 1231 (Arg) participate actively in catalysis and are conserved across different strains of Chandipura vesiculovirus; both residues are also conserved in the other viral species of the genus Vesiculovirus. Many isolates exhibited the largest number of mutations in the catalytic regions of Chandipura vesiculovirus strains at positions 26 (Ser→Ala), 47 (Ser→Ala), 90 (Ser→Tyr), 172 (Gly→Ile, Val), 172 (Ser→Tyr), 387 (Asn→Ser), 1301 (Thr→Ala), 1330 (Ala→Glu), 2015 (Phe→Ser) and 2065 (Thr→Val), which make them variants under the different tropical conditions from which they evolved. These results support the use of vRdRp as an evolutionary marker of RNA evolution. Although only a limited number of vRdRp protein sequences are available for Chandipura vesiculovirus and related species, such analyses might make it possible to identify the virulence level during viral multiplication in a host.
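The p-distance underlying the tree above is simply the proportion of sites that differ between two aligned sequences. A minimal sketch, using hypothetical peptide fragments rather than the actual vRdRp sequences:

```python
# Minimal sketch of the p-distance used in the phylogenetic analysis:
# the fraction of positions that differ between two aligned sequences.
def p_distance(seq1, seq2):
    """Proportion of differing positions between two aligned sequences
    of equal length (gaps are not handled in this sketch)."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

# Two toy aligned fragments differing at 2 of 10 sites:
print(p_distance("MSKLSDQRAT", "MSKLADQRVT"))  # -> 0.2
```

A Neighbour-Joining tree is then built from the pairwise matrix of such distances, as MEGA6.06 does internally.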

Keywords: Chandipura, (-) ssRNA, viral RNA-dependent RNA polymerase, neighbour-joining method, p-distance algorithmic, evolutionary marker

Procedia PDF Downloads 170
356 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction of composite material components of an offshore structure under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and the impact load, since the likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established from the lamina stresses such that g(x) > 0 corresponds to the safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario.
Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
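The probability-of-failure idea can be illustrated with a crude Monte Carlo sketch over a limit state g = R - S. The normal distributions and numbers below are illustrative, not the paper's FE-based model, and the authors use a Gaussian process response surface rather than direct sampling:

```python
import random

# Illustrative sketch: Monte Carlo estimate of probability of failure
# for limit state g = R - S, with resistance R and impact-induced
# stress S both modelled as normal random variables (assumed values).
def prob_of_failure(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=42):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        g = rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s)  # g <= 0 => failure
        if g <= 0:
            fails += 1
    return fails / n

# Hypothetical: mean resistance 500 MPa (sd 50), mean stress 350 MPa (sd 40)
pf = prob_of_failure(500, 50, 350, 40)
print(pf)  # close to the analytic value of about 0.0096
```

Response-surface methods replace the expensive limit state evaluation (here a one-liner, in the paper a full FE impact simulation) with a cheap surrogate before this sampling step.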

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 257
355 Assessing Social Comfort of the Russian Population with Big Data

Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro

Abstract:

The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data, which provides enormous opportunities for understanding human interactions at the scale of society across space and time. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, which yields the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which exceeds the rate of economic growth by 2.7 p.p. The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and mainly related to the blocks ‘security’, ‘political stability’ and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’ and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, introducing the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population by studying an individual's involvement in society and by expanding the subjective aspect of the measurements of various indicators. The integral assessment of social comfort demonstrates the overall development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for practical implementation.
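Two of the preprocessing steps listed in step 2 above, rescaling to a 10-point scale and detrending, can be sketched as follows. The formulas are assumptions for illustration; the authors' actual Python script is not reproduced here:

```python
# Sketch of two preprocessing steps for a search-interest time series:
# (1) remove a linear trend via ordinary least squares,
# (2) rescale the series to a 0..10 scale.
def detrend(series):
    """Subtract the OLS-fitted linear trend from a time series."""
    n = len(series)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return [y - (intercept + slope * x) for x, y in zip(xs, series)]

def to_10_point(series):
    """Min-max rescale a series onto a 10-point scale."""
    lo, hi = min(series), max(series)
    return [10 * (y - lo) / (hi - lo) for y in series]

raw = [20, 35, 30, 55, 60, 80]   # hypothetical search-interest values
print(to_10_point(raw))          # values now span 0..10
```

Deseasoning and peak elimination would be applied in the same pipeline before the principal component step.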

Keywords: big data, Google trends, integral indicator, social comfort

Procedia PDF Downloads 178
354 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

A cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum, and its operation is free from the influence of electric and magnetic fields. Owing to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which adsorbs the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. We have therefore made an attempt to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in this temperature range. We also propose an analytical heat conduction model for the thermal conductivity of the composite, in which filler particles such as graphene are randomly distributed in a base matrix of epoxy. The developed model considers the completely random spatial distribution of filler particles, described by a binomial distribution. The results obtained from the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K.
Due to the non-empirical nature of the proposed model, it will be useful for the prediction of other properties of composite materials involving the filler in a base matrix. The present studies will aid in the understanding of low temperature heat transfer which in turn will be useful towards the development of high performance cryosorption pump.
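For context, a classical baseline such as the Maxwell (Maxwell-Garnett) mixing rule is a typical "established model" against which a random-distribution model would be compared. The conductivity values below are hypothetical, not the authors' measurements:

```python
# Classical Maxwell effective-conductivity model for dilute spherical
# fillers in a matrix, shown only as a baseline comparison model.
def maxwell_k_eff(k_m, k_f, phi):
    """Effective thermal conductivity of a composite: matrix k_m,
    filler k_f, filler volume fraction phi (valid for small phi)."""
    num = k_f + 2 * k_m + 2 * phi * (k_f - k_m)
    den = k_f + 2 * k_m - phi * (k_f - k_m)
    return k_m * num / den

# Hypothetical low-temperature values: epoxy k_m = 0.05 W/m/K,
# graphene filler k_f = 5 W/m/K, 10% volume fraction.
print(round(maxwell_k_eff(0.05, 5.0, 0.10), 4))  # -> 0.0661
```

Note that Maxwell's model assumes isotropy and dilute, well-dispersed spheres; the appeal of the binomial random-distribution model described above is precisely that it can also cover the anisotropic regime.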

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 71
353 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when information about the nature of the attack is limited, and more difficult still when that information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event may have caused another to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to identify the attack events most likely to cause other events in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and extracting them manually would cost expert human analysts significant time, if it were feasible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends: more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol to block those attacks.
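A first screening step toward such causal pairs can be sketched as a simple time-window filter over event timestamps. The event data below are hypothetical, and the actual framework additionally applies conditional independence tests on features such as port number:

```python
# Illustrative sketch: flag candidate cause->effect event pairs whose
# time difference falls within the 5-minute window reported above.
WINDOW = 5 * 60  # seconds

def candidate_pairs(events_a, events_b, window=WINDOW):
    """Return (t_a, t_b) pairs where an A-event precedes a B-event
    within `window` seconds; timestamps are epoch seconds."""
    pairs = []
    for ta in events_a:
        for tb in events_b:
            if 0 < tb - ta <= window:
                pairs.append((ta, tb))
    return pairs

scans  = [0, 700, 1500]      # e.g. hypothetical port-scan timestamps
logins = [120, 1520, 4000]   # e.g. hypothetical suspicious-login timestamps
print(candidate_pairs(scans, logins))  # -> [(0, 120), (1500, 1520)]
```

Pairs surviving this screen would then be tested for conditional independence before being accepted as edges in the two-level causal graph.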

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 143
352 Structural Health Monitoring Using Fibre Bragg Grating Sensors in Slab and Beams

Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname

Abstract:

Many existing and newly built structures are constructed according to the engineer's design and the workmanship of the construction company. However, for larger structures where more people are exposed to the building, structural integrity is of great importance to the safety of its occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed, and stored, and any inconsistencies in the collected data can be flagged. This is the role of the fibre Bragg grating (FBG) monitoring system introduced here. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for the utilisation and prediction of the structure's integrity. Embedded fibre optic sensors were used in this study, fibre Bragg grating sensors in particular. The procedure entailed making use of the wavelength-shift demodulation technique and an inscription process based on the phase mask technique. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted: one purposely to collect temperature recordings and the other to collect strain and temperature. The data were collected over a time period and analysed to produce bending moment diagrams and make predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be used for structural health monitoring in any environment. From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m and 45.20 kN.m (from the experimental bending moment diagram), whereas the idealistic (Ultimate Limit State) values were 133 kN.m and 226.2 kN.m. The difference in values gave room for an early warning system, in other words, a reserve capacity of approximately 50% to failure.
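The wavelength-shift demodulation step can be sketched with the standard FBG strain relation. The constants below are typical silica-fibre values, not calibration values from this study:

```python
# Sketch of standard FBG demodulation: mechanical strain recovered from
# the Bragg wavelength shift after subtracting the thermal term, which
# the dedicated temperature fibre measures (typical silica constants).
P_E = 0.22      # effective photo-elastic coefficient of silica (assumed)
K_T = 6.7e-6    # combined thermo-optic + expansion coefficient, per K (assumed)

def strain_from_shift(d_lambda_nm, lambda0_nm, d_temp_k=0.0):
    """Return mechanical strain from a Bragg wavelength shift,
    removing the thermally induced part of the shift."""
    total = d_lambda_nm / lambda0_nm        # relative wavelength shift
    thermal = K_T * d_temp_k
    return (total - thermal) / (1 - P_E)

# A 1550 nm grating shifted by 0.1 nm at constant temperature:
eps = strain_from_shift(0.1, 1550.0)
print(round(eps * 1e6))  # microstrain -> 83
```

The recovered strains, combined with section properties, are what feed the experimental bending moment diagrams reported above.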

Keywords: fibre bragg grating, structural health monitoring, fibre optic sensors, beams

Procedia PDF Downloads 106
351 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies

Authors: Tania Viju, Bimal P., Naseer M. A.

Abstract:

This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a segment of road where the frequency of accident occurrence is disproportionately greater than on other sections of roadway. According to a report by the World Bank, India accounts for the highest share of global road accident deaths, eleven percent, with just one percent of the world's vehicles. Hence, in 2015 the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the traffic accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study evaluates the existing methods of accident blackspot identification and prediction used around the world and their application on Indian roadways. The decision support system, with the help of IoT, ICT and smart systems, acts as a management and planning tool for the government for employing efficient and cost-effective rectification strategies. In order to develop decision criteria, several quantitative and qualitative factors that influence the safety conditions of the road are analyzed. Factors include past accident severity data, occurrence time, light, weather and road conditions, visibility, driver conditions, junction type, land use, road markings and signs, road geometry, etc. The framework conceptualizes decision-making by classifying blackspot stretches based on factors like accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors.
The decision support system will help the public administration dynamically manage and plan the necessary safety interventions required to enhance the safety of the road network.
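The working definition above, a segment with disproportionately many accidents, can be sketched as a simple frequency screen. The records and threshold below are hypothetical, and real identification methods weight accidents by severity and exposure:

```python
from collections import Counter

# Minimal sketch: flag road segments whose accident count is
# disproportionately high relative to the mean across segments.
def blackspots(accident_segments, factor=2.0):
    """Return segments whose accident count exceeds `factor` times
    the mean count over all segments (hypothetical criterion)."""
    counts = Counter(accident_segments)
    mean = sum(counts.values()) / len(counts)
    return sorted(seg for seg, c in counts.items() if c > factor * mean)

# Hypothetical accident records keyed by road segment:
records = ["NH47-km12"] * 9 + ["NH47-km30"] * 2 + ["SH1-km5"] * 1
print(blackspots(records))  # -> ['NH47-km12']
```

A DSS would layer the qualitative factors listed above (lighting, junction type, weather) on top of such a frequency-based shortlist.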

Keywords: decision support system, dynamic management, road accident blackspots, road safety

Procedia PDF Downloads 116
350 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a Bleve Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contributions of the vapor and liquid phases to the overpressure generation in the near field. Experimental work was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, along with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel explosion and the blast formation by shadowgraph. Results highlight how the pressure field is anisotropic around the cylindrical vessel and show a strong dependency between vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak, and a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock, and a theory of how the second shock observed in the experiments forms is proposed, supported by an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
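The energy available from the vapor expansion, which the experiments identify as the main driver of the lead shock, is often estimated with Brode's expression. The burst conditions below are assumed for illustration, not taken from the authors' tests:

```python
# Back-of-the-envelope sketch: Brode's expression for the energy
# released by expansion of the compressed vapor space at burst.
def brode_energy(p_burst, p_ambient, vapor_volume, gamma=1.13):
    """Vapor expansion energy in joules; gamma ~1.13 assumed for
    propane vapor. E = (P1 - P0) * V / (gamma - 1)."""
    return (p_burst - p_ambient) * vapor_volume / (gamma - 1)

# Hypothetical small-scale test: 1.8 MPa burst pressure, 40 L vapor space.
E = brode_energy(1.8e6, 101_325, 0.040)
print(round(E), "J")
```

Scaling such an energy with a blast chart gives a first-order overpressure estimate, which is the kind of baseline a vapor-driven prediction model would refine.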

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 52
349 The Role of Motivational Beliefs and Self-Regulated Learning Strategies in The Prediction of Mathematics Teacher Candidates' Technological Pedagogical And Content Knowledge (TPACK) Perceptions

Authors: Ahmet Erdoğan, Şahin Kesici, Mustafa Baloğlu

Abstract:

Information technologies have led to changes in the areas of communication, learning, and teaching. Besides offering many opportunities to learners, these technologies have changed the teaching methods and beliefs of teachers. What Technological Pedagogical Content Knowledge (TPACK) means to teachers is considerably important for integrating technology successfully into teaching processes. It is necessary to understand how to plan and apply teacher training programs in order to balance students' pedagogical and technological knowledge. Because of many inefficient teacher training programs, teachers have difficulty relating technology, pedagogy, and content knowledge to one another. In providing efficient training supported by technology, understanding the three main components (technology, pedagogy, and content knowledge) and their relationships is crucial. The purpose of this study is to determine whether motivational beliefs and self-regulated learning strategies are significant predictors of mathematics teacher candidates' TPACK perceptions. One hundred seventy-five Turkish mathematics teacher candidates responded to the Motivated Strategies for Learning Questionnaire (MSLQ) and the Technological Pedagogical And Content Knowledge (TPACK) Scale. Of the group, 129 (73.7%) were women and 46 (26.3%) were men. Participants' ages ranged from 20 to 31 years with a mean of 23.04 years (SD = 2.001). In this study, a multiple linear regression analysis was used to test the relationship between the predictor variables, mathematics teacher candidates' motivational beliefs and self-regulated learning strategies, and the dependent variable, TPACK perceptions. It was determined that self-efficacy for learning and performance and intrinsic goal orientation are significant predictors of mathematics teacher candidates' TPACK perceptions.
Additionally, mathematics teacher candidates' critical thinking, metacognitive self-regulation, organisation, time and study environment management, and help-seeking were found to be significant predictors for their TPACK perceptions.

Keywords: candidate mathematics teachers, motivational beliefs, self-regulated learning strategies, technological and pedagogical knowledge, content knowledge

Procedia PDF Downloads 458
348 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although fires are often caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of forest fire on hydrological processes and to propose the model that best describes changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of these processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. In that respect, two catchments that experienced severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated against data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling resulted in surface runoff generation and hence prediction of the hydrological response of the catchments to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
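The two computations described above can be sketched with their standard formulas. The band reflectances and rainfall values below are hypothetical, not taken from the study's Sentinel-2 scenes:

```python
# Sketch of the burnt-area and runoff computations: the Normalized
# Burn Ratio from NIR/SWIR reflectance, and SCS curve-number runoff.
def nbr(nir, swir):
    """Normalized Burn Ratio; healthy vegetation high, burnt area low."""
    return (nir - swir) / (nir + swir)

def scs_runoff(p_mm, cn):
    """SCS-CN direct runoff (mm) for rainfall depth p_mm, curve number cn."""
    s = 25400 / cn - 254    # potential maximum retention, mm
    ia = 0.2 * s            # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Hypothetical pixel: pre-fire minus post-fire NBR (dNBR) flags burning,
dnbr = nbr(0.45, 0.20) - nbr(0.18, 0.35)
print(round(dnbr, 2))  # large positive dNBR => burnt
# and a higher post-fire CN produces more runoff from the same storm:
print(round(scs_runoff(60, 70), 1), round(scs_runoff(60, 90), 1))
```

Raising CN from 70 to 90 for the same 60 mm storm roughly triples the simulated direct runoff, which is the mechanism by which the fire changes the catchment's hydrological response in the model.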

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 104
347 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some degree. Historically, naturally fractured reservoirs have been more complicated in terms of exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing from day one, so that evaluation and planning can be done properly and efficiently. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore fractures altogether. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoir and to accurately simulate their influence on production. Moreover, production from this field follows a two-phase plan: self-depletion of oil followed by gas injection for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan.
New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations, and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 79
346 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of the MSPE (RMSPE), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the coefficient of correlation (r) between actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of TDR interval (P=0.21) and shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves than for atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, we observed an overestimated peak yield (0.92 vs. 6.6 l) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were obtained with the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording interval for dairy sheep in Latin America, and showed a better fit of the Wood model with a 7D interval.
However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs to dairy sheep producers.
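The authors fit the Wood model with R's "minpack.lm"; as a minimal illustration outside R, the same incomplete-gamma curve y(t) = a * t^b * exp(-c*t) can be fitted with SciPy. The weekly records and parameter values below are synthetic assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Wood incomplete-gamma lactation model: y(t) = a * t**b * exp(-c*t)
def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

# Hypothetical weekly test-day records (7D regime), litres per day
weeks = np.arange(1, 21, dtype=float)
true_a, true_b, true_c = 1.2, 0.35, 0.06
rng = np.random.default_rng(0)
yields = wood(weeks, true_a, true_b, true_c) * rng.normal(1.0, 0.03, weeks.size)

# Fit the model; p0 keeps the optimizer away from the flat region near zero
(a, b, c), _ = curve_fit(wood, weeks, yields, p0=(1.0, 0.2, 0.05))

peak_time = b / c                      # time of peak yield, in weeks
peak_yield = wood(peak_time, a, b, c)  # yield at the peak
rmspe = np.sqrt(np.mean((yields - wood(weeks, a, b, c)) ** 2))
print(a, b, c, peak_time, peak_yield, rmspe)
```

The time of peak yield follows analytically as b/c, which is one way curves with implausible peak estimates (the atypical shapes discussed above) can be flagged.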

Keywords: incomplete gamma, ewes, lactation curve shapes, modeling

Procedia PDF Downloads 48
345 Identification of the Expression of Top Deregulated MiRNAs in Rheumatoid Arthritis and Osteoarthritis

Authors: Hala Raslan, Noha Eltaweel, Hanaa Rasmi, Solaf Kamel, May Magdy, Sherif Ismail, Khalda Amr

Abstract:

Introduction: Rheumatoid arthritis (RA) is an inflammatory, autoimmune disorder with progressive joint damage. Osteoarthritis (OA) is a degenerative disease of the articular cartilage that shows multiple clinical manifestations or symptoms resembling those of RA. Genetic predisposition is believed to be a principal etiological factor for both RA and OA. In this study, we aimed to measure the expression of the top deregulated miRNAs that might contribute to pathogenesis in both diseases, according to our latest NGS analysis. Six of the deregulated miRNAs were selected because they have multiple target genes in the RA pathway and are therefore more likely to affect RA pathogenesis. Methods: Eighty cases were recruited in this study: 45 rheumatoid arthritis (RA) and 30 osteoarthritis (OA) patients, as well as 20 healthy controls. The miRNAs were selected from our latest NGS study using miRWalk, according to the number of their target genes that are members of the KEGG RA pathway. Total RNA was isolated from the plasma of all recruited cases. cDNA was generated with the miRCURY RT Kit and then used as a template for real-time PCR with miRCURY Primer Assays and the miRCURY SYBR Green PCR Kit. Fold changes were calculated from CT values using the ΔΔCT method of relative quantification. Results were compared between RA and controls and between OA and controls. Target gene prediction and functional annotation of the deregulated miRNAs were performed using MIENTURNET. Results: Six miRNAs were selected: miR-15b-3p, -128-3p, -194-3p, -328-3p, -542-3p and -3180-5p. In RA samples, three of the measured miRNAs were upregulated (miR-194, -542, and -3180; mean Rq = 2.6, 3.8 and 8.05; P-value = 0.07, 0.05 and 0.01, respectively), while the remaining three were downregulated (miR-15b, -128 and -328; mean Rq = 0.21, 0.39 and 0.6; P-value < 0.0001, < 0.0001 and 0.02, respectively), all with high statistical significance except miR-194.
In OA samples, two of the measured miRNAs were upregulated (miR-194 and -3180; mean Rq = 2.6 and 7.7; P-value = 0.1 and 0.03, respectively), while the remaining four were downregulated (miR-15b, -128, -328 and -542; mean Rq = 0.5, 0.03, 0.08 and 0.5; P-value = 0.0008, 0.003, 0.006 and 0.4, respectively), with statistical significance versus controls except for miR-194 and miR-542. Functional enrichment of the selected top deregulated miRNAs revealed the most highly enriched KEGG pathways and GO terms. Conclusion: Five of the studied miRNAs were markedly deregulated in RA and OA; they might be highly involved in disease pathogenesis and could therefore be future therapeutic targets. Further functional studies are crucial to assess their roles and actual target genes.
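The ΔΔCT (Livak) relative-quantification step used to derive the Rq values above is compact enough to sketch. The CT values in this example are hypothetical, not the study's measurements:

```python
# Minimal sketch of the 2^-ΔΔCT relative-quantification step (Livak method).
# All CT values below are hypothetical.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # ΔCT in the patient sample
    d_ct_control = ct_target_control - ct_ref_control   # ΔCT in the control sample
    dd_ct = d_ct_sample - d_ct_control                  # ΔΔCT
    return 2 ** (-dd_ct)                                # relative quantity Rq

# Example: the target miRNA amplifies 2 cycles earlier (relative to the
# reference gene) in patient plasma than in control plasma
rq = fold_change(24.0, 20.0, 26.0, 20.0)
print(rq)  # Rq > 1 indicates upregulation, Rq < 1 downregulation
```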

Keywords: MiRNAs, expression, rheumatoid arthritis, osteoarthritis

Procedia PDF Downloads 54
344 Importance of Prostate Volume, Prostate Specific Antigen Density and Free/Total Prostate Specific Antigen Ratio for Prediction of Prostate Cancer

Authors: Aliseydi Bozkurt

Abstract:

Objectives: Benign prostatic hyperplasia (BPH) is the most common benign disease, and prostate cancer (PC) is a malignant disease, of the prostate gland. Transrectal ultrasound-guided biopsy (TRUS-bx) is one of the most important tools in PC diagnosis. Identifying men at increased risk of having biopsy-detectable prostate cancer should take into account prostate-specific antigen density (PSAD), the free/total PSA (f/t PSA) ratio, and an estimate of prostate volume. Method: We retrospectively studied 269 patients who had a prostate-specific antigen (PSA) level of 4 ng/mL or higher, or a suspicious rectal examination at any PSA level, and who underwent TRUS-bx between January 2015 and June 2018 in our clinic. TRUS-bx was performed by experienced urologists using a 12-quadrant scheme. Prostate volume was calculated prior to biopsy with TRUS. Patients were classified as malignant or benign according to the final pathology. Age, PSA value, prostate volume on transrectal ultrasonography, number of biopsy cores, biopsy pathology result, number of cancer-positive cores and Gleason score were evaluated. The success of PV, PSAD, and f/t PSA in predicting prostate cancer was compared in all patients and in those with PSA 2.5-10 ng/mL and 10.1-30 ng/mL. Result: In patients with PSA 2.5-10 ng/mL, the PV cut-off value was 43.5 mL (n=42 < 43.5 mL and n=102 > 43.5 mL), while in those with PSA 10.1-30 ng/mL the prostate volume (PV) cut-off value was 61.5 mL (n=31 < 61.5 mL and n=36 > 61.5 mL). In the group with PSA 2.5-10 ng/mL, total PSA differed slightly between the PV < 43.5 mL and PV > 43.5 mL subgroups (6.0 ± 1.3 vs. 6.7 ± 1.7); this difference was of borderline significance (p=0.043). In the group with PSA 10.1-30 ng/mL, no significant difference in total PSA was found between patients with PV < 61.5 mL and those with PV > 61.5 mL (p=0.117). In the group with PSA 2.5-10 ng/mL, the f/t PSA value was significantly lower in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.21 ± 0.09 vs. 0.26 ± 0.09, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the f/t PSA value was significantly lower in patients with PV < 61.5 mL (0.16 ± 0.08 vs. 0.23 ± 0.10, p=0.003). In the group with PSA 2.5-10 ng/mL, the PSAD value was significantly higher in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.17 ± 0.06 vs. 0.10 ± 0.03, p < 0.001); likewise, in the group with PSA 10.1-30 ng/mL, PSAD was significantly higher in patients with PV < 61.5 mL (0.47 ± 0.23 vs. 0.17 ± 0.08, p < 0.001). The biopsy results showed that, in the group with PSA 2.5-10 ng/mL, cancer was detected in 29 of the patients with PV < 43.5 mL (69%) and not detected in 13 (31%), whereas cancer was found in only 19 patients with PV > 43.5 mL (18.6%) and not found in 83 (81.4%) (p < 0.001). In the group with PSA 10.1-30 ng/mL, cancer was observed in 21 patients with PV < 61.5 mL (67.7%) and not observed in 10 (32.3%), whereas cancer was found in 5 patients with PV > 61.5 mL (13.9%) and not found in 31 (86.1%) (p < 0.001). Conclusions: Identifying men at increased risk of having biopsy-detectable prostate cancer should consider PSA, the f/t PSA ratio, and an estimate of prostate volume. Prostate volume was lower in patients with PC.
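The two derived quantities compared in the abstract, PSA density and the f/t PSA ratio, reduce to simple arithmetic on the measured values. The patient values in this sketch are illustrative only, not drawn from the study:

```python
# Hedged sketch of the two derived predictors used in the abstract.
# Input values below are illustrative, not the study's data.
def psa_density(total_psa_ng_ml, prostate_volume_ml):
    """PSAD = total serum PSA divided by TRUS-measured prostate volume."""
    return total_psa_ng_ml / prostate_volume_ml

def free_total_ratio(free_psa_ng_ml, total_psa_ng_ml):
    """f/t PSA ratio = free PSA divided by total PSA."""
    return free_psa_ng_ml / total_psa_ng_ml

# Example patient: total PSA 8.0 ng/mL, free PSA 1.4 ng/mL, volume 40 mL
psad = psa_density(8.0, 40.0)       # 0.20 (ng/mL)/mL
ftr = free_total_ratio(1.4, 8.0)    # 0.175
print(psad, ftr)
```

Consistent with the abstract's pattern, a smaller prostate at the same PSA yields a higher PSAD, which is why PSAD separates the small-volume cancer-enriched subgroups.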

Keywords: prostate cancer, prostate volume, prostate specific antigen, free/total PSA ratio

Procedia PDF Downloads 129
343 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land-use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies.
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
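The Frequency Ratio step described above (the weight of a factor class is the share of sinkhole cells falling in that class divided by the class's share of the total area) can be sketched on a toy raster. The single factor, its classes, and the weight below are illustrative assumptions, not the study's nine factors or its AHP-derived weights:

```python
import numpy as np

# Sketch of the Frequency Ratio + Weighted Linear Combination steps on a
# toy raster. Factor, classes, and weight are illustrative only.
def frequency_ratio(factor_classes, sinkhole_mask):
    """FR per class = (share of sinkhole cells in class) / (share of all cells in class)."""
    fr = {}
    total_cells = factor_classes.size
    total_sinks = sinkhole_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        area_pct = in_class.sum() / total_cells
        sink_pct = (in_class & sinkhole_mask).sum() / total_sinks
        fr[c] = sink_pct / area_pct
    return fr

rng = np.random.default_rng(1)
slope_class = rng.integers(0, 3, size=(50, 50))           # e.g. low/med/high slope
sinks = rng.random((50, 50)) < (slope_class == 0) * 0.1   # sinkholes favour class 0

fr_slope = frequency_ratio(slope_class, sinks)
fr_raster = np.vectorize(fr_slope.get)(slope_class)       # reclassify raster by FR

# WLC: with several factors, the AHP-derived weights sum to 1 and the SSI is
# the weighted sum of the FR rasters; one factor is shown here.
weight_slope = 1.0
ssi = weight_slope * fr_raster
print(fr_slope)
```

Classes overrepresented among sinkhole cells get FR > 1 and so raise the susceptibility index wherever they occur, which is exactly how the class weights feed the SSI before the Jenks classification into the five zones.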

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 50
342 Clothing Features of Greek Orthodox Woman Immigrants in Konya (Iconium)

Authors: Kenan Saatcioglu, Fatma Koc

Abstract:

When immigration is considered, communities have been continuously shaped by migrations from the emergence of mankind to the present day. Political, social and economic pressures seen in various periods caused communities to leave the places where they had lived for new ones. Migrations have occurred as a result of unequal opportunities among communities, social exclusion and imposition, politically imposed changes of homeland, exile and war. Migration is a social phenomenon defined as the geographical relocation of people from one settlement unit (city, village, etc.) to another in which they spend all or part of their future lives. Migrations have affected the history of humanity directly or indirectly, opening new dimensions in how communities understand the concept of homeland. Through these migrations, communities carried their cultural values to their new settlements, initiating a new process of interaction. Through this process, both migrant and native cultures were reshaped and richer cultural values emerged. The clothing of these communities is among the most important visual evidence of this rich cultural interaction. As a result of these migrations, communities mutually influenced each other's clothing cultures and added features of other cultures to their own garments, producing new clothing cultures over time. The cultural and historical differences between these communities appear to be among the most influential factors keeping their clothing cultures alive. The most important and tragic of these migrations took place after the Turkish War of Independence, fought against Greece in 1922. The concept of forced migration was a result of the Lausanne Peace Treaty, signed between the Turkish and Greek governments on 30 January 1923. As a result, Greek Orthodox people living in Turkey (Anatolia and Thrace) and Muslim Turks living in Greece were forced to migrate. This study aims to examine the clothing features of Greek Orthodox woman immigrants who emigrated from Turkey to Greece during the ‘1923 Greek-Turkish Population Exchange’. Using the descriptive research method, the study discusses the clothing of Greek Orthodox women who lived in the ‘Konya (Iconium)’ region of the Ottoman Empire before the exchange. Based on two garments from the ‘Konya (Iconium)’ region held in the clothing collection archive of the ‘National Historical Museum’ in Greece, the clothing of Greek Orthodox woman immigrants is discussed in relation to cultural norms, beliefs and values as well as in terms of form, ornamentation and dressing style. Technical drawings demonstrating the formal features of the garment parts that form the clothing ensemble are provided, and their properties are described with reference to the related literature. The importance of this study lies in the fact that it documents garments of Greek Orthodox refugees held in the clothing collection archive of the ‘National Historical Museum’ in Greece, reflecting their cultural identities and providing information and documentation on the clothing features of the ‘1923 Greek-Turkish Population Exchange’.

Keywords: clothing, Greece, Greek Orthodoxes, immigration, national historical museum, Turkey

Procedia PDF Downloads 231