Search results for: bi-spectral methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14873

12203 The Effect of Observational Practice on the Volleyball Service Learning with Emphasis on the Role of Self–Efficacy

Authors: Majed Zobairy, Payam Mohammadpanahi

Abstract:

Introduction: Teaching movement skills is one of the most important duties of sports coaches and physical education teachers, and researchers have carried out many studies in this field to find the best methodology for motor learning. One essential aspect of movement skill education is observational learning. Observational learning, or learning by watching demonstrations, has been characterized as one of the most important means by which people learn a variety of skills and behaviours. The purpose of this study was to determine the effect of observational practice on volleyball service learning, with emphasis on the role of self-efficacy. Methods: The sample consisted of 100 male students, assigned by convenience sampling, in a manner homogeneous with respect to self-efficacy level, to four groups. The first group performed physical training, the second performed an observational practice task, the third practiced both physically and observationally, and the fourth served as the control group. The experimental groups practiced in a one-day acquisition session and performed the retention task after 72 hours. The Kolmogorov-Smirnov test and the independent t-test were used for statistical analyses. Results and Discussion: The results show that the observational practice group significantly improved volleyball service skill acquisition (T = 7.73). The mixed (physical and observational) group was also significantly better than the control group in volleyball service skill acquisition (T = 7.04). Conclusion: The observational practice group and the mixed group were significantly better than the control group in the acquisition test. The present results are in line with previous studies suggesting that observational learning can improve performance. The results also show that self-efficacy level significantly affects movement skill acquisition; in other words, high self-efficacy is an important factor in learning the volleyball service.
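The statistical workflow described above, a Kolmogorov-Smirnov normality check followed by an independent-samples t-test, can be sketched as follows; the group sizes and score distributions are hypothetical placeholders, not the study's data.

```python
# Sketch of the abstract's statistical analysis (K-S check, then t-test).
# The scores below are simulated, illustrative values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observation_group = rng.normal(loc=7.5, scale=1.2, size=25)  # hypothetical scores
control_group = rng.normal(loc=6.0, scale=1.2, size=25)      # hypothetical scores

# Kolmogorov-Smirnov test of each standardized sample against a normal distribution.
for name, sample in [("observation", observation_group), ("control", control_group)]:
    ks_stat, ks_p = stats.kstest(stats.zscore(sample), "norm")
    print(f"{name}: KS p = {ks_p:.3f}")  # p > 0.05 -> normality not rejected

# Independent-samples t-test between the two groups.
t_stat, t_p = stats.ttest_ind(observation_group, control_group)
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
```

With a genuine group difference, the t statistic is large and the p-value small, mirroring the T values reported in the abstract.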

Keywords: observational practice, volleyball service, self–efficacy, sport science

Procedia PDF Downloads 381
12202 Re-Thinking and Practicing Critical Pedagogy in Education through Art

Authors: Dalya Markovich

Abstract:

In the last decade, art educators have striven to integrate critical pedagogy into the art classroom. Critical pedagogy aims to deconstruct the oppressive social reality and the false consciousness in which learners from both privileged and underprivileged groups are caught. Understanding oppression as a product of socio-political conditions, it seeks to instigate processes of change anchored in the student's views. Yet growing empirical evidence shows that these efforts have often resulted in art projects in which art teachers play an active role in the process of critical teaching while the students remain passive listeners. In this common scenario, the teachers/artists become authoritarian moral guides of critical thinking and acting, while the students are often found to be indifferent or to play along to satisfy the teachers'/artists' aspirations. These responses indicate that the message of critical pedagogy, transforming the students' way of thinking and acting, mostly does not fulfill its emancipatory goals. The study analyses the critical praxis embedded in new art projects and their influence on the participants. This type of project replaces the individual producer with collaborative work, the finite work with an ongoing project, and the passive learner with an engaged co-producer. The research delves into the pedagogical framework of two such art projects using qualitative methods. In-depth interviews were conducted with four of the projects' initiators and managers in order to access their understandings of the projects' goals and pedagogical methods. Fieldwork included four participant observations (two in each project) during social encounters in the project settings, focusing on how critical thinking is enacted (or not) by the participants.
The analysis exposes how the new art projects avoid prepackaged "critical" assumptions and praxis, thus turning the participants from passive carriers of critical thinking into agents who actively use criticism. The findings invite researchers to explore new avenues for understanding critical pedagogy and to develop various ways of implementing it in art education, in view of the growing need for critical thinking and acting in school and society.

Keywords: critical pedagogy, education through art, collaborative work, agency

Procedia PDF Downloads 131
12201 Chemotrophic Signal Exchange between the Host Plant Helianthemum sessiliflorum and Terfezia boudieri

Authors: S. Ben-Shabat, T. Turgeman, O. Leubinski, N. Roth-Bejerano, V. Kagan-Zur, Y. Sitrit

Abstract:

The ectomycorrhizal (ECM) desert truffle Terfezia boudieri produces edible fruit bodies and forms a symbiosis with its host plant Helianthemum sessiliflorum (Cistaceae) in the Negev desert of Israel. The symbiosis is vital for the survival of both partners under desert conditions, where ECM fungi must establish the symbiosis before the dry season begins. To secure a successful encounter, both partners have, in the course of evolution, developed a special signal exchange that facilitates recognition. Members of the Cistaceae family serve as host plants for many important truffles; conceivably, during evolution a common molecule present in Cistaceae plants was recruited to facilitate a successful encounter with ectomycorrhizal fungi. Arbuscular mycorrhizal (AM) fungi are promiscuous in their host preferences; in contrast, ECM fungi show host specificity. Accordingly, we hypothesize that H. sessiliflorum secretes a chemotrophic signaling molecule that is common to plants hosting ECM fungi belonging to the Pezizales. Thus far, however, no such signaling molecules have been identified for ECM fungi. We developed a bioassay for chemotrophic activity. Fractionation of root exudates revealed a substance with chemotrophic activity and a molecular mass of 534. Following the above concept, screening the transcriptome of Terfezia grown under chemoattraction uncovered genes with high homology to the G protein-coupled receptors of plant pathogens involved in positive chemotaxis and chemotaxis suppression. This study aimed to identify the active molecule using analytical methods (LC-MS, NMR, etc.), which should contribute to our understanding of how ECM fungi communicate with their hosts in the rhizosphere. Since Terfezia can also form an endomycorrhizal symbiosis like AM fungi, analysis of the mechanisms may likewise be applicable to AM fungi. Developing methods to manipulate fungal growth with the chemoattractant could open new ways to improve the inoculation of plants.

Keywords: chemotrophic signal, Helianthemum sessiliflorum, Terfezia boudieri, ECM

Procedia PDF Downloads 394
12200 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the surface and generate a DTM. Hitherto, classical measurement techniques and photogrammetry have been widely used in DTM construction; at present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, Airborne Light Detection and Ranging (LiDAR), because of its advantages, has seen increased use in DTM applications; it creates a 3D point cloud by acquiring numerous point measurements. More recently, with developments in image mapping methods, the use of unmanned aerial vehicles (UAV) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75%, 50%, 25%, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University, the test area, the DTM constructed from the original point cloud is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that random data reduction can thin image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
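The random reduction step described above can be sketched as follows; the synthetic point cloud and its size are illustrative assumptions, not the study's UAV data.

```python
# Minimal sketch of random data reduction of an (x, y, z) point cloud to the
# 75/50/25/5 % density levels named in the abstract. The synthetic surface
# below is a placeholder for the real image-based point cloud.
import numpy as np

rng = np.random.default_rng(42)
n_points = 10_000
x = rng.uniform(0, 100, n_points)
y = rng.uniform(0, 100, n_points)
z = 10 + 0.05 * x + 0.02 * y + rng.normal(0, 0.1, n_points)  # gentle slope + noise
cloud = np.column_stack([x, y, z])

subsets = {}
for fraction in (0.75, 0.50, 0.25, 0.05):
    k = int(n_points * fraction)
    idx = rng.choice(n_points, size=k, replace=False)  # random reduction
    subsets[fraction] = cloud[idx]

for fraction, subset in sorted(subsets.items(), reverse=True):
    print(f"{fraction:.0%}: {len(subset)} points")
```

Each subset would then be interpolated onto a common grid (by ordinary Kriging, as in the study) and differenced against the full-density DTM to quantify the loss of accuracy at each density level.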

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 143
12199 Evaluation of Duncan-Chang Deformation Parameters of Granular Fill Materials Using Non-Invasive Seismic Wave Methods

Authors: Ehsan Pegah, Huabei Liu

Abstract:

Characterizing the deformation properties of fill materials over a wide stress range has always been an important issue in geotechnical engineering. The hyperbolic Duncan-Chang model is a very popular stress-strain relationship that captures the nonlinear deformation of granular geomaterials in a very tractable manner. It consists of a particular set of model parameters, which are generally measured through an extensive series of laboratory triaxial tests. This practice is both time-consuming and costly, especially in large projects, and undesired effects of soil disturbance during sampling may also introduce a large degree of uncertainty into the results. Accordingly, non-invasive geophysical seismic approaches may serve as an appropriate alternative for measuring the model parameters from seismic wave velocities. To this end, conventional seismic refraction profiles were carried out at test sites with granular fill materials to collect seismic wave data. The acquired shot gathers were processed to derive the P- and S-wave velocities: the P-wave velocities were extracted by the Seismic Refraction Tomography (SRT) technique, while the S-wave velocities were obtained by the Multichannel Analysis of Surface Waves (MASW) method. The velocity values were then used with equations derived from the rigorous theories of elasticity and soil mechanics to evaluate the Duncan-Chang model parameters. The derived parameters were finally compared with those from laboratory tests to validate the reliability of the results. The findings of this study may serve as useful references for determining the nonlinear deformation parameters of granular fill geomaterials: surface seismic methods are environmentally friendly and quite economical, and can yield accurate results under actual in-situ conditions.
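The link between the measured velocities and small-strain elastic properties rests on standard elasticity relations; a minimal sketch with hypothetical velocity and density values is given below. The further mapping from these moduli to the Duncan-Chang parameters follows the equations in the study itself, which are not reproduced here.

```python
# Standard elasticity relations converting P- and S-wave velocities to
# small-strain moduli. All input values are hypothetical placeholders.
vp = 450.0    # P-wave velocity from SRT, m/s (assumed)
vs = 220.0    # S-wave velocity from MASW, m/s (assumed)
rho = 1800.0  # bulk density, kg/m^3 (assumed)

G = rho * vs**2                                   # shear modulus, Pa
nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))  # Poisson's ratio
E = 2 * G * (1 + nu)                              # Young's modulus, Pa

print(f"G  = {G / 1e6:.1f} MPa")
print(f"nu = {nu:.3f}")
print(f"E  = {E / 1e6:.1f} MPa")
```

These small-strain moduli are the geophysical inputs from which the stress-dependent Duncan-Chang parameters are then evaluated.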

Keywords: Duncan-Chang deformation parameters, granular fill materials, seismic waves velocity, multichannel analysis of surface waves, seismic refraction tomography

Procedia PDF Downloads 172
12198 The Youth Employment Peculiarities in Post-Soviet Georgia

Authors: M. Lobzhanidze, N. Damenia

Abstract:

The article analyzes the current structural changes in the economy of Georgia and its liberalization and integration processes and, in light of this analysis, reveals the peculiarities and problems of youth employment. The paper studies the Georgian labor market and its contradictions. Based on the analysis of materials, the socio-economic losses caused by long-term and mass unemployment of young people are revealed, and the objective and subjective circumstances of obtaining higher education are studied. Youth employment and unemployment rates are analyzed, and the factors that increase unemployment are identified. The analysis of youth employment shows that the unemployment share of the economically active population has increased in the younger age group, which demonstrates the labour market's high requirements for workforce quality. It is also highlighted that young people tend to seek highly paid jobs. The following research methods are applied: statistical methods (selection, grouping, observation, trend analysis, etc.) and qualitative research (in-depth interviews), as well as analysis, induction, and comparison. The article draws on data from the National Statistics Office of Georgia and the Ministry of Agriculture of Georgia, policy documents of the Parliament of Georgia, scientific papers by Georgian and foreign scholars, analytical reports, publications, and EU research materials on similar issues. The work assesses the employment problems of students and graduates in relation to the state development strategy and its priorities, and defines measures to overcome the challenges. The article also describes the mechanisms of state regulation of youth employment and ways of improving this regulatory base.
As for the major findings, the main problems are a lack of experience and the incompatibility of young people's qualifications with the requirements of the labor market. Accordingly, it is concluded that the unemployment rate of young people in Georgia is increasing.

Keywords: migration of youth, youth employment, migration management, youth employment and unemployment

Procedia PDF Downloads 132
12197 Acoustic Finite Element Analysis of a Slit Model with Consideration of Air Viscosity

Authors: M. Sasajima, M. Watanabe, T. Yamaguchi, Y. Kurosawa, Y. Koike

Abstract:

In very narrow pathways, the speed of sound propagation and the phase of sound waves change due to air viscosity. We have developed a new Finite Element Method (FEM) that includes the effects of air viscosity for modeling narrow sound pathways; it is an extension of an existing FEM for porous sound-absorbing materials. Numerical results for several three-dimensional slit models computed with the proposed FEM are validated against existing calculation methods.

Keywords: simulation, FEM, air viscosity, slit

Procedia PDF Downloads 359
12196 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance

Authors: Chen Xiatong

Abstract:

The study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach. It aims to clarify the current effectiveness of corporate governance, uncover the complex interrelationships between governance factors and firm performance, enhance understanding of corporate governance strategies in Mainland China, and provide suggestions for companies to improve their governance practices. Quantitative data will be gathered through surveys and sampling methods, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and interviews with staff members, and analyzed through thematic analysis and interpretation of policy documents, case study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies.
The research contributes to the literature on corporate governance by providing insights into the effectiveness of governance practices in Mainland China and offering suggestions for improvement, and it contributes more broadly to the fields of business management and human resources management.

Keywords: corporate governance, business management, human resources management, board of directors

Procedia PDF Downloads 44
12195 The Role of Home Composting in Waste Management Cost Reduction

Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti

Abstract:

Because producing less waste has economic and environmental benefits, the US Environmental Protection Agency (EPA) identifies source reduction as one of the most important means of dealing with the problems caused by growing landfills and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce the waste flow to landfills or other disposal facilities. Source reduction can be studied from two perspectives: avoiding waste production, i.e., reducing per capita waste generation, and waste diversion, i.e., reducing the transfer of waste to landfills. The present paper investigates home composting as a managerial solution for reducing waste transfer to landfills. Home composting has many benefits. Using household waste to produce compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it to produce compost and plant fertilizer helps recycle the material in a shorter time and use it effectively in order to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final home-made compost is a valuable product that helps grow crops and garden plants; it is also used to modify soil structure and maintain its moisture. Food waste transferred to landfills spoils and produces leachate after a while, and also releases methane and other greenhouse gases. Composting these materials at home is therefore the best way to manage degradable materials, use them efficiently, and reduce environmental pollution.
Studies have shown that the benefits from selling the compost produced, together with the reduced costs of collecting, transporting, and burying waste, can well cover the costs of purchasing a home composting machine and the associated training. Moreover, home composting may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.
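The payback reasoning above amounts to a simple cost-benefit division; the sketch below uses hypothetical cost and savings figures, not values from the study, chosen only to illustrate a 4-5 year horizon.

```python
# Back-of-the-envelope payback calculation of the kind described above.
# Every figure here is a hypothetical placeholder.
machine_cost = 400.0          # purchase of a home composting unit
training_cost = 50.0          # one-off training
annual_savings = 70.0         # avoided collection/transport/burial costs per year
annual_compost_sales = 30.0   # revenue from selling surplus compost per year

total_cost = machine_cost + training_cost
annual_benefit = annual_savings + annual_compost_sales
payback_years = total_cost / annual_benefit
print(f"payback in {payback_years:.1f} years")  # 4.5 years with these figures
```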

Keywords: compost, home compost, reducing waste, waste management

Procedia PDF Downloads 410
12194 Investigating a Crack in Care: Assessing Long-Term Impacts of Child Abuse and Neglect

Authors: Remya Radhakrishnan, Hema Perinbanathan, Anukriti Rath, Reshmi Ramachandran, Rohith Thazhathuvetil Sasindrababu, Maria Karizhenskaia

Abstract:

Childhood adversities have lasting effects on health and well-being. This abstract explores the connection between adverse childhood experiences (ACEs) and health consequences, including substance abuse and obesity. Understanding the impact of childhood trauma, and emphasizing culturally sensitive treatments and focused interventions, helps to mitigate these effects. Research consistently shows a strong link between ACEs and poor health outcomes. Our team conducted a comprehensive literature review of depression and anxiety in Canadian children and youth, exploring diverse treatment methods, including medical treatment, psychotherapy, and alternative therapies such as art and music therapy. We searched Medline, Google Scholar, and the St. Lawrence College Library. Only peer-reviewed original research papers, published between 2012 and 2023 and reporting on the health effects of childhood adversities and their treatment in children and youth in Canada, were considered. We focused on their significance in treating depression and anxiety. According to the study's findings, the prevalence of adverse childhood experiences remains a significant concern: in Canada, 40% of people report having had multiple ACEs, and 78% report having had at least one, highlighting the persistence of childhood adversity and indicating that the issue is unlikely to fade in the near future. Likewise, the findings revealed that individuals who experienced abuse, neglect, or violence during childhood are more likely to engage in harmful behaviors such as polydrug use, suicidal ideation, and victimization, and to suffer from mental health problems such as depression and post-traumatic stress disorder (PTSD).

Keywords: adverse childhood experiences (ACEs), obesity, post-traumatic stress disorder (PTSD), resilience, substance abuse, trauma-informed care

Procedia PDF Downloads 104
12193 Exploring the Influence of Climate Change on Food Behavior in Medieval France: A Multi-Method Analysis of Human-Animal Interactions

Authors: Unsain Dianne, Roussel Audrey, Goude Gwenaëlle, Magniez Pierre, Storå Jan

Abstract:

This paper investigates changes in husbandry practices and meat consumption during the transition from the Medieval Climate Anomaly to the Little Ice Age in the South of France. More precisely, we investigate breeding strategies, animal size and health status, carcass exploitation strategies, and the impact of socioeconomic status on human-environment interactions. For that purpose, we analyze faunal remains from ten sites distributed equally between the two periods, covering consumers from different socio-economic backgrounds (peasants, city dwellers, soldiers, lords, and the Popes). The research employs different methods used in zooarchaeology (comparative anatomy, biometry, pathology analysis, traceology, and utility indices), as well as experimental archaeology, to reconstruct and understand changes in animal breeding and consumption practices. Their analysis will allow us to identify modifications in the animal production chain: the composition of the flocks (species, size), their management (age, sex, health status), culinary practices (carcass exploitation strategies, cooking, tastes), and the importance of trade (butchers, sales of processed animal products). The focus will also be on the social extraction of consumers: the aim is to determine whether climate change had a greater impact on the most modest groups (such as peasants), whether the consequences were global and also affected the highest levels of society, or whether social and economic factors were sufficient to balance out the climatic hazards, leading to no significant changes. This study will contribute to our understanding of the impact of climate change on breeding and consumption strategies in medieval society from a historical and social point of view, combining various research methods to provide a comprehensive analysis of changes in human-animal interactions across different climatic periods.

Keywords: archaeology, animal economy, cooking, husbandry practices, climate change, France

Procedia PDF Downloads 45
12192 Ethnobotanical Medicines for Treating Snakebites among the Indigenous Maya Populations of Belize

Authors: Kerry Hull, Mark Wright

Abstract:

This paper sheds light on the ethnobotanical medicines used by the Maya of Belize to treat snakebites. The varying ecological zones of Belize boast over fifty species of snakes, nine of which are venomous and dangerous to humans. Two distinct Maya groups, the Q’eqchi’ and the Mopan, occupy neighboring regions of Belize. With Western medical care often far from their villages, what traditional methods are used to treat venomous snakebites? Based primarily on data gathered with native consultants during the authors’ fieldwork with both groups, this paper details the ethnobotanical resources used by Q’eqchi’ and Mopan traditional healers, who most commonly rely on traditional ‘bush doctors’ (ilmaj in Mopan), both male and female, and on specialized ‘snake doctors’ to heal bites from venomous snakes. First, the paper presents each plant employed by healers for bites from the nine venomous snakes of Belize, along with the specific botanical recipes and methods of application for each remedy. The individual chemical and therapeutic qualities of some of those plants are investigated in an effort to explain their possible medicinal value against different toxins or the symptoms those toxins cause. In addition, the paper explores mythological associations with certain snakes that inform local understanding of which plants are considered efficacious in each case, arguing that numerous oral traditions (recorded by the authors) help to link botanical medicines to episodes within the Maya mythic traditions. Finally, the use of plants to counteract snakebites brought about through sorcery is discussed, inasmuch as some snakes are seen as ‘helpers’ of sorcerers; bites given under these circumstances can only be cured by those who know both the proper corresponding plant(s) and ritual prayer(s).
This paper provides detailed documentation of traditional ethnomedicines and practices from the dying art of traditional Maya healing, and argues for multi-faceted diagnostic techniques to determine toxin severity, the presence or absence of sorcery, and the appropriate botanical remedy.

Keywords: ethnobotany, Maya, medicine, snake bites

Procedia PDF Downloads 224
12191 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports

Authors: A. Falenski, A. Kaesbohrer, M. Filter

Abstract:

Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess the risks associated with imported foods, additional information must be taken into account compared to a risk assessment of regional products. The present review gives an overview of the best-practice approaches and data sources currently used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published between 2004 and 2014 (English and German texts only; search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search returned 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as being of interest for the present study. These included original articles, reviews, and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as a source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimates based on expert opinion. Calculations were done using @Risk, R, or Excel.
Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handistatus II, FAOSTAT, Eurostat, TRACES), accessible at the national level (e.g., herd information), or open only to a small group of people (flight passenger import data at national airport customs offices). An uncertainty analysis was mentioned in some of the IRAs, but calculations were performed in only a few cases. Conclusion: The current state of the art in assessing the risks of imported foods is characterized by great heterogeneity in the general methodology and data used. Information is often gathered case by case and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework connecting existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
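As a minimal illustration of the fully quantitative end of this spectrum, the sketch below runs a Monte Carlo estimate of contaminated consignments from an uncertain pathogen prevalence; all parameter values and distributions are hypothetical assumptions, not taken from any of the reviewed IRAs.

```python
# Minimal probabilistic import-risk sketch: sample an uncertain prevalence,
# then count contaminated consignments that pass inspection each year.
# Every parameter below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000
consignments_per_year = 500

# Uncertain prevalence in the country of origin, modelled with a Beta
# distribution (as might be fitted from survey data); mean ~2 %.
prevalence = rng.beta(2, 98, size=n_sim)
p_missed_by_inspection = 0.5  # chance a contaminated lot passes inspection

contaminated = rng.binomial(consignments_per_year,
                            prevalence * p_missed_by_inspection)
print(f"mean contaminated consignments/year: {contaminated.mean():.1f}")
print(f"95th percentile: {np.percentile(contaminated, 95):.0f}")
```

Reporting a percentile alongside the mean is what distinguishes such probabilistic IRAs from the point-estimate qualitative approaches also found in the review.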

Keywords: import risk assessment, review, tools, food import

Procedia PDF Downloads 292
12190 Polymer-Layered Gold Nanoparticles: Preparation, Properties and Uses of a New Class of Materials

Authors: S. M. Chabane Sari, S. Zargou, A. R. Senoudi, F. Benmouna

Abstract:

The immobilization of nanoparticles (NPs) is the subject of numerous studies pertaining to the design of polymer nanocomposites, supported catalysts, bioactive colloidal crystals, inverse opals for novel optical materials, latex-templated hollow inorganic capsules, immunodiagnostic assays, “Pickering” emulsion polymerization for making latex particles and film-forming composites or Janus particles, chemo- and biosensors, tunable plasmonic nanostructures, hybrid porous monoliths for separation science and technology, biocidal polymer/metal nanoparticle composite coatings, and so on. In recent years in particular, the literature has witnessed impressive progress in investigations of polymer coatings, grafts, and particles as supports for anchoring nanoparticles. This is due to several factors: polymer chains are flexible and may contain a variety of functional groups that can efficiently immobilize nanoparticles and their precursors through dispersive (van der Waals), electrostatic, hydrogen, or covalent bonds. We review methods of preparing polymer-immobilized nanoparticles through a plethora of strategies, with a view to developing systems for separation, sensing, extraction, and catalysis. The emphasis is on methods that provide (i) polymer brushes and grafts; (ii) monoliths and porous polymer systems; (iii) natural polymers; and (iv) conjugated polymers as platforms for anchoring nanoparticles. The latter range from soft biomacromolecular species (proteins, DNA) to metallic, C60, semiconductor, and oxide nanoparticles; they can be attached through electrostatic interactions or covalent bonding. It is very clear that the physicochemical properties of polymers (e.g., sensing and separation) are enhanced by anchored nanoparticles, while polymers provide excellent platforms for dispersing nanoparticles for, e.g., high catalytic performance.
We thus anticipate that the synergetic role of polymeric supports and anchored particles will be increasingly exploited in the design of unique hybrid systems with unprecedented properties.

Keywords: gold, layer, polymer, macromolecular

Procedia PDF Downloads 382
12189 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. Direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDE) dominate, constantly generating small-scale features. This makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flow with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDE. This technique is named the “observable method”, and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its performance in regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen as study cases. The observable Euler equations are numerically solved with pseudo-spectral discretization in space and the third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and shock reflection are particularly examined.
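As an illustrative sketch of the filtering idea (not the authors' implementation), the convective velocity in a one-dimensional inviscid Burgers model can be low-pass filtered at a chosen "observable" wavenumber before it advects the field, advanced in time with the Shu-Osher third-order TVD Runge-Kutta scheme; the cutoff, grid size, and time step below are arbitrary choices:

```python
import numpy as np

def observable_rhs(u, k, cutoff):
    """du/dt = -(filtered u) * u_x: the convective velocity is low-pass
    filtered at the grid's 'observable' scale before it advects the field."""
    u_hat = np.fft.fft(u)
    u_bar = np.fft.ifft(np.where(np.abs(k) <= cutoff, u_hat, 0.0)).real
    u_x = np.fft.ifft(1j * k * u_hat).real
    return -u_bar * u_x

def tvd_rk3_step(u, dt, rhs):
    """Third-order TVD Runge-Kutta (Shu-Osher) step."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers on [0, 2*pi)
u = np.sin(x)                        # steepening initial profile
for _ in range(100):                 # advance to t = 0.5 (pre-shock)
    u = tvd_rk3_step(u, 5e-3, lambda v: observable_rhs(v, k, cutoff=n // 4))
```

The same filtering of the convective velocity carries over, term by term, to the multi-dimensional observable Euler equations used in the paper.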

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 339
12188 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions. This led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing methodologies and methods used in threat modeling. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining the specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity in the threat modeling process of such devices, and defining metrics for evaluating the minimum quality requirements of the identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of the previous steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies. The outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. 
This research is further used to build a threat modeling methodology tailored to OT environments; a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 66
12187 Modification of Newton Method in Two Points Block Differentiation Formula

Authors: Khairil Iskandar Othman, Nadhirah Kamal, Zarina Bibi Ibrahim

Abstract:

Block methods for solving stiff systems of ordinary differential equations (ODEs) are based on backward differentiation formulas (BDF) with PE(CE)2 and the Newton method. In this paper, we introduce the modified Newton method as a new strategy to obtain more efficient results. The derivation of BBDF using the modified block Newton method is presented. This new block method with predictor-corrector gives more accurate results when compared to the existing BBDF.
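The modified Newton strategy can be illustrated with a minimal sketch (a one-step BDF, i.e. backward Euler, rather than the paper's two-point BBDF): the Jacobian and the iteration matrix are built once per step and reused across the corrector iterations. The test problem, step count, and tolerances are illustrative assumptions:

```python
import numpy as np

def bdf1_modified_newton(f, jac, y0, t0, t1, n_steps, tol=1e-10, max_iter=20):
    """One-step BDF (backward Euler) solved with a *modified* Newton iteration:
    the Jacobian and the iteration matrix I - h*J are formed once per step and
    reused for every corrector iterate, instead of being recomputed each time."""
    h = (t1 - t0) / n_steps
    y, t = np.array(y0, dtype=float), t0
    for _ in range(n_steps):
        t_new = t + h
        M = np.eye(len(y)) - h * jac(t_new, y)   # frozen iteration matrix
        y_new = y.copy()                          # predictor: previous value
        for _ in range(max_iter):
            resid = y_new - y - h * f(t_new, y_new)
            delta = np.linalg.solve(M, -resid)
            y_new = y_new + delta
            if np.linalg.norm(delta) < tol:
                break
        y, t = y_new, t_new
    return y

# Stiff linear test problem y' = -1000*(y - cos(t)); the solution hugs cos(t).
f = lambda t, y: -1000.0 * (y - np.cos(t))
jac = lambda t, y: np.array([[-1000.0]])
y_end = bdf1_modified_newton(f, jac, [1.0], 0.0, 1.0, 100)
```

Freezing the iteration matrix trades a few extra corrector iterations for avoiding a Jacobian evaluation and factorization at every iterate, which is where the efficiency gain of the modified scheme comes from.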

Keywords: modified Newton, stiff, BBDF, Jacobian matrix

Procedia PDF Downloads 359
12186 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is used as a training vector for an SVM global classifier. 
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (68% smaller in dimension than the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
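A minimal sketch of the FFT-based sub-band decomposition idea: each EEG epoch is transformed once with the FFT, and each sub-band is obtained by masking coefficients instead of running a separate IIR filter. The exact band edges and widths used by the authors are not stated here, so the 4 Hz sliding windows below are an assumption:

```python
import numpy as np

def fft_subbands(epochs, fs, n_bands=33, fmin=0.0, fmax=40.0, width=4.0):
    """Decompose EEG epochs (trials x channels x samples) into band-limited
    signals by masking FFT coefficients, replacing an IIR filter bank.
    Band edges (4 Hz sliding windows over 0-40 Hz) are illustrative."""
    n = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(epochs, axis=-1)     # one FFT, shared by all bands
    hop = (fmax - width - fmin) / (n_bands - 1)
    out = []
    for i in range(n_bands):
        lo = fmin + i * hop
        mask = (freqs >= lo) & (freqs < lo + width)
        out.append(np.fft.irfft(spectra * mask, n=n, axis=-1))
    return out

rng = np.random.default_rng(0)
epochs = rng.standard_normal((10, 22, 500))   # 10 trials, 22 channels, 2 s @ 250 Hz
fs = 250.0
bands = fft_subbands(epochs, fs)              # 33 band-limited copies for CSP+LDA
```

Each element of `bands` would then feed one CSP filter and LDA classifier of the SBCSP chain; in the paper's variant the masked coefficient matrix itself, rather than the reconstructed signals, is passed on.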

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 114
12185 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland

Authors: Hannah M. Russell

Abstract:

‘The Attach Project’ (TAP), is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), which is a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP has been implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study has been conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger, randomised control trial. TAP has been designed specifically for implementation in a school setting and is comprised of a whole school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures with stakeholder input and involvement being considered an integral component. The use of quantitative measures, such as self-report questionnaires prior to and eight months following the implementation of TAP, enabled the analysis of the strengths and direction of relations between the various components of the programme, as well as the influence of implementation factors. The use of qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings have provided evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings. 
This type of intervention could benefit not only children who are looked after but all children who may be impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors which should be addressed were identified throughout the study, such as the necessity of protected time being introduced to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme, to maximise ‘attachment friendly practice’ in the whole staff team. Qualitative findings have also indicated that there is a general opinion across school staff that this type of Key Adult-Key Child pairing could be more effective if it was introduced as soon as children begin primary school. This research has provided ample evidence for the need to introduce relationally based interventions in schools, to help to ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, this research has facilitated the identification of important implementation factors and barriers to implementation, which can be addressed prior to the ‘scaling-up’ of TAP for a robust, randomised controlled trial.

Keywords: attachment, complex trauma, educational interventions, implementation

Procedia PDF Downloads 168
12184 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of the palm oil. The conventional procedure is physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time consuming, and the results are subject to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits, and thereby indirectly the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common oil palm cultivar, Nigrescens, were collected according to three maturity categories: under ripe, ripe and over ripe. Each sample was scanned with FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test showed that this predictor gave significant differences between the under ripe, ripe and over ripe maturity categories. This shows that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as a predictor through Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN) and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained using the Artificial Neural Network. 
This research shows that thermal imaging and neural network methods can be used as predictors for oil palm maturity classification.
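Of the classifiers compared, the KNN variant is simple enough to sketch with mean bunch temperature as the single predictor. The class temperatures below are synthetic illustrative values, not the study's measurements (the study only reports that temperature decreases with maturity):

```python
import numpy as np

def knn_classify(train_x, train_y, test_x, k=5):
    """1-D K-Nearest-Neighbor: predict the majority maturity label among the
    k training bunches whose mean surface temperature is closest."""
    preds = []
    for t in np.atleast_1d(test_x):
        idx = np.argsort(np.abs(train_x - t))[:k]
        labels, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Synthetic mean bunch temperatures (deg C), 90 bunches per class -- values
# chosen only to mimic the reported trend of cooler bunches at higher maturity.
rng = np.random.default_rng(1)
temps = np.concatenate([rng.normal(m, 0.3, 90) for m in (29.5, 27.5, 25.5)])
labels = np.repeat(np.array(["under-ripe", "ripe", "over-ripe"]), 90)
pred = knn_classify(temps, labels, np.array([29.4, 25.6]))
```

The LDA, MDA, and ANN classifiers in the study consume the same single-feature input; only the decision rule differs.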

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 342
12183 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Its occurrence tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days; the associated mortality rate was about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, those admitted to the ICU for less than 24 hours, and those without CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, is regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. 
After feature data cleaning and processing, with missing values supplemented by the KNN interpolation method, a total of 54,595 case events were available for machine learning analysis. Events from May 1 to November 30, 2022, were used as model training data, of which 80% formed the training set and 20% the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, and the model was retrained with adjusted parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The highest average internal validation accuracy was obtained with Random Forest (AUC = 0.86); the highest average external validation accuracy was shared by Random Forest and XGBoost (AUC = 0.86); and the highest average cross-validation accuracy was obtained with Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in real-time assessment of ICU patients, so clinical staff cannot be provided with more objective and continuous monitoring data to identify and predict the occurrence of delirium more accurately. It is hoped that developing predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and work alongside PADIS delirium care measures to provide individualized non-drug interventions that maintain patient safety and thereby improve the quality of care.
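The KNN interpolation used for feature cleaning can be sketched as follows. This is a minimal NumPy version filling each missing value from the nearest complete records; the study's actual k and distance metric are not reported, so those choices are assumptions:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill missing values (NaN) with the mean of the k nearest complete rows,
    measured by Euclidean distance over the observed columns -- a minimal
    version of KNN interpolation for feature cleaning."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]   # rows with no missing values
    for row in X:                            # rows are views; edits stick
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        row[miss] = nearest[:, miss].mean(axis=0)
    return X

# Toy feature matrix: one record has a missing middle feature.
X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.2],
              [0.9, 2.1, 2.9],
              [5.0, 6.0, 7.0]])
X_filled = knn_impute(X, k=2)
```

After imputation, the completed feature matrix would feed the temporal train/validation split and the tree-based classifiers described above.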

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 54
12182 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product by customers has presented a great challenge for the vendor, Skyline Communications, in deploying its limited resources, in the form of human resources, financial resources, and office space, to achieve high organisational performance in all its international operations. Owing to its rapid growth, the organisation has been unable to efficiently support its existing customers across the globe and provide services to new customers with the limited number of approximately one hundred employees in its employ. A combined descriptive and explanatory case study research method was selected as the research design, making use of a survey questionnaire which was distributed to a sample of 100 respondents. A sample return of 89 respondents was achieved. The sampling method employed was non-probability sampling, using the convenience sampling method. Frequency analysis and correlation between the subscales (the four themes) were used for statistical analysis to interpret the data. The investigation examined mechanisms that can be deployed to balance the high demand for products and the limited production capacity of the company’s Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department, and reward systems in use to improve employee performance. The conclusions derived from the theme ‘demand management strategies’ are that the company is fully aware of the future market demand for its products. However, there seems to be no evidence that proper demand forecasting is conducted within the organisation. The conclusions derived from the theme 'capacity management strategies' are that employees always have a lot of work to complete during office hours and often need help from colleagues with urgent tasks. This indicates that employees often work on unplanned tasks and multiple projects. 
Conclusions derived from the theme 'communication methods used to align the sales management department with operations' are that communication is not good throughout the organisation. Information often stays with management and does not reach non-management employees. There is also a lack of the expected smooth synergy and of good communication between the sales department and the projects office, which has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme ‘employee reward systems’ are that employees are motivated and feel that they add value in their current functions. There are currently no measures in place to identify unhappy employees, and there are also no proper reward systems in place which are linked to a performance management system. The research has made a contribution to the body of research by exploring the impact of the four sub-variables and their interaction on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation’s sustained competitive operations.

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 133
12181 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study

Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta

Abstract:

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low‐ and middle‐income countries. The aim of this study was to determine the survival time of under‐five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted at a hospital, focusing on under‐five children with severe acute malnutrition. The study included 322 inpatients admitted to the Chiro hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data was obtained from medical records. Survival functions were analyzed using Kaplan‒Meier plots and log‐rank tests. The survival time of severe acute malnutrition was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was found to be 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which demonstrated that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced the survival time of severe acute malnutrition. Conclusions: This study revealed that children below 24 months, those with altered body temperature and pulse rate, NG tube usage, hypoglycemia, and comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had a shorter survival time when affected by severe acute malnutrition under the age of five. To reduce the death rate of children under 5 years of age, it is necessary to design community management for acute malnutrition to ensure early detection and improve access to and coverage for children who are malnourished.
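The Kaplan-Meier estimator behind the reported survival functions can be sketched in a few lines: at each distinct event time t, the survival probability is multiplied by (1 - d_t / n_t), where d_t is the number of deaths at t and n_t the number still at risk. The follow-up times below are toy data, not the study's records:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve: S(t) is the running product of
    (1 - deaths_at_t / at_risk_at_t) over the distinct event times."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    times, surv, s = [], [], 1.0
    for t in np.unique(time[event == 1]):
        n_at_risk = np.sum(time >= t)               # still under observation
        d = np.sum((time == t) & (event == 1))      # deaths at this time
        s *= 1.0 - d / n_at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# Toy follow-up times in weeks (1 = death, 0 = censored) -- illustrative only
t = np.array([1, 2, 2, 3, 4, 5, 6, 8])
e = np.array([1, 1, 0, 1, 0, 1, 0, 0])
times, surv = kaplan_meier(t, e)
```

The median survival time reported in the study corresponds to the first time at which this curve drops to 0.5 or below; the log-rank test and the Bayesian Weibull AFT model then compare and adjust such curves across covariates.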

Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time

Procedia PDF Downloads 22
12180 Religion, Health and Ageing: A Geroanthropological Study on Spiritual Dimensions of Well-Being among the Elderly Residing in Old Age Homes in Jallandher Punjab, India

Authors: A. Rohit Kumar, B. R. K. Pathak

Abstract:

Background: Geroanthropology, or the anthropology of ageing, is a term which can be understood in terms of the anthropology of old age, old age within anthropology, and the anthropology of age. India is known as the land of spirituality and philosophy and is the birthplace of four major religions of the world, namely Hinduism, Buddhism, Jainism, and Sikhism. The most dominant religion in India today is Hinduism; about 80% of Indians are Hindus. Hinduism is a religion with a large number of Gods and Goddesses. Religion in India plays an important role at all life stages, i.e., at birth, in adulthood and particularly in old age. India is the second largest country in the world in this respect, with 72 million persons above 60 years of age in 2001, compared to China's 127 million. The very concept of old age homes in India is new. Elderly people staying away from their homes and their children, or left by them, is not considered a happy situation. This paper deals with the anthropology of ageing, religion and spirituality among the elderly residing in old age homes and tries to explain how religion plays a vital role in the health of the elderly during old age. Methods: The data for the present paper were collected through both qualitative and quantitative methods. Old age homes located in Jallandher (Punjab) were selected for the present study. Age sixty was taken as the cut-off age. Narratives and case studies were collected from 100 respondents residing in old age homes. The dominant religions in Punjab were found to be Sikhism and Hinduism, while Jainism and Buddhism were found to be in the minority. It was found that religiosity increases as one grows older; religiosity and spirituality were found to be directly proportional to ageing, and religiosity and health were therefore found to be connected. Results and Conclusion: Religion was found to be a coping mechanism during ill health. 
The elderly living in old age homes were purposely selected for the study, as the elderly in old age homes get medical attention provided only by the old age home authorities. Moreover, the residents of the old age homes were of low socio-economic status and could not afford medical attention on their own. It was found that elderly people who firmly believed in religion were more satisfied with their health compared to those who did not believe in religion at all. Belief in a particular religion, God and goddesses had an impact on the health of the elderly.

Keywords: ageing, geroanthropology, religion, spirituality

Procedia PDF Downloads 324
12179 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches are investigated for supercavitating torpedoes. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots using classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, in order to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is implemented in conventional torpedoes. Supercavitation is a type of cavitation; it is, however, more stable and continuous than other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water within a fully developed cavity. Conventional torpedoes are termed supercavitating torpedoes when the torpedo moves in a cavity envelope generated by a cavitator in the nose section and a solid fuel rocket engine in the rear section. There are two types of supercavitation, natural and artificial cavitation. In this study, natural cavitation on disk cavitators is investigated based on numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified with empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_D) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely the cavity length (L_c/d_c), cavity diameter (d_m/d_c) and cavity fineness ratio (L_c/d_m). 
This paper has the character of a feasibility study, carrying out numerical solutions of the supercavitation phenomena and comparing them with empirical equations.
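The named parameters can be sketched numerically. The cavitation number definition is standard; the disk-cavitator relations below are Garabedian-type semi-empirical asymptotics assumed here for illustration, not necessarily the exact empirical fits used in the paper:

```python
import math

def cavitation_number(p_inf, p_c, rho, U):
    """sigma = (p_inf - p_c) / (0.5 * rho * U^2)."""
    return (p_inf - p_c) / (0.5 * rho * U * U)

def disk_cavitator_cavity(sigma, cd0=0.82):
    """Semi-empirical disk-cavitator relations (Garabedian-type asymptotics,
    assumed for illustration):
      C_D     = C_D0 * (1 + sigma)
      d_m/d_c = sqrt(C_D / sigma)                  max cavity diameter
      L_c/d_c = sqrt(C_D * ln(1/sigma)) / sigma    cavity length
    """
    cd = cd0 * (1.0 + sigma)
    dm_over_dc = math.sqrt(cd / sigma)
    lc_over_dc = math.sqrt(cd * math.log(1.0 / sigma)) / sigma
    return cd, dm_over_dc, lc_over_dc, lc_over_dc / dm_over_dc  # last: fineness

# Illustrative case: vehicle at 100 m/s, 10 m depth, vapour-pressure cavity
sigma = cavitation_number(p_inf=101325.0 + 1000.0 * 9.81 * 10.0,
                          p_c=2300.0, rho=1000.0, U=100.0)
cd, dm, lc, fineness = disk_cavitator_cavity(sigma)
```

Smaller σ (higher speed or lower depth) yields a longer, fatter cavity, which is why natural supercavitation only fully envelops the vehicle at very high speeds.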

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

Procedia PDF Downloads 155
12178 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests that infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair degree of accuracy.
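The heat-balance approach described in the abstract can be sketched in a few lines. The following Python fragment is a minimal illustration, not the author's exact methodology: it equates conductive heat flux through the wall to convective plus radiative losses at the exterior surface, assumes a fixed exterior convective coefficient `h_c`, and uses the Stefan-Boltzmann law for the radiative term. All parameter values shown are hypothetical.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_wall, t_refl, emissivity, h_c=3.0):
    """Estimate an overall wall U-value (W/m^2 K) by equating conductive
    flux through the wall, U * (t_in - t_out), to the convective plus
    radiative losses at the exterior surface. All temperatures in kelvin;
    h_c is an assumed exterior convective heat transfer coefficient."""
    q_conv = h_c * (t_wall - t_out)                          # convective loss
    q_rad = emissivity * SIGMA * (t_wall**4 - t_refl**4)     # radiative loss
    return (q_conv + q_rad) / (t_in - t_out)

# Hypothetical winter field readings: 21 C indoors, -5 C outdoors,
# exterior surface at -3 C, reflective temperature -7 C, emissivity 0.90.
u = u_value(294.15, 268.15, 270.15, 266.15, 0.90)
```

In practice the convective coefficient varies with wind speed, which is one reason the methodologies in the literature differ in their estimates.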

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 298
12177 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government's commitment to its carbon targets under the Climate Change Act 2008, which aims to reduce emissions by at least 80% by 2050. Climate change is likely to exacerbate the problem of overheating, as it is expected to increase the frequency of extreme heat events characterized by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only factor relevant to overheating: research indicates that location, design, occupancy, construction type, and layout can also play a part. Because of this growing problem, research suggests that health effects on building occupants could become an issue. Increases in temperature can directly impair the human body's ability to maintain thermoregulation, so heat-related illnesses such as heat stroke, heat exhaustion, and heat syncope, and even death, can follow. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches used to measure the phenomenon. An overview of the topic is presented first, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix that summarizes the source material and identifies the differing methods of analysis of overheating. Cross-case evaluation identified systematic relationships between different variables within the matrix. Key areas of focus include building type and country, occupant behaviour, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 339
12176 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Interest in human motion recognition has increased greatly in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) to train on and classify motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassifications that can occur when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our own dataset.
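The two-direction classification idea can be sketched with a plain forward algorithm over discrete observation symbols. The snippet below is a simplified illustration of the scheme described in the abstract, not the authors' implementation: model parameters are assumed to be already trained (e.g. by Baum-Welch), and each class holds one HMM scored on the original sequence and a second HMM scored on the reversed sequence, with the combined log-likelihood deciding the class.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probabilities, A: transition matrix, B: emission matrix),
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    log_p = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # forward recursion step
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p

def classify(obs, models):
    """models: {label: (forward_params, backward_params)}, each a (pi, A, B)
    tuple. Each class is scored by its forward-direction HMM on the sequence
    plus its backward-direction HMM on the reversed sequence."""
    def score(label):
        fwd, bwd = models[label]
        return log_forward(obs, *fwd) + log_forward(obs[::-1], *bwd)
    return max(models, key=score)

# Toy demo with hypothetical hand-set parameters (2 states, 2 symbols):
pi = np.array([1.0, 0.0])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_a = np.array([[0.9, 0.1], [0.1, 0.9]])  # class "a" mostly emits symbol 0
B_b = np.array([[0.1, 0.9], [0.9, 0.1]])  # class "b" mostly emits symbol 1
models = {"a": ((pi, A, B_a), (pi, A, B_a)),
          "b": ((pi, A, B_b), (pi, A, B_b))}
label = classify([0, 0, 0, 0], models)
```

In a trained system the backward model would be fitted on reversed training sequences rather than sharing parameters with the forward model as in this toy demo.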

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 191
12175 Infusing Social Business Skills into the Curriculum of Higher Learning Institutions with Special Reference to Albukhari International University

Authors: Abdi Omar Shuriye

Abstract:

A social business is a business designed to address socio-economic problems and enhance the welfare of the communities involved. Lately, social business, with its focus on innovative ideas, is capturing the interest of educational institutions, governments, and non-governmental organizations. Social business uses a business model to achieve a social goal, and in the last few decades the idea of imbuing social business into the education system of higher learning institutions has spurred much excitement, owing to the belief that it will lead to job creation and increased social resilience. One higher learning institution which has invested immensely in the idea is Albukhari International University, a private education institution on a state-of-the-art campus providing an advantageous learning ecosystem. The niche area of this institution is social business, and it graduates job creators, not job seekers; this Malaysian institution is unique and one of a kind. The objective of this paper is to develop a work plan, direction, milestones, and focus areas for the infusion of social business into higher learning institutions, with special reference to Albukhari International University. The purpose is to develop a full-scale prototype and model to enable higher learning institutions to construct the desired curriculum infused with social business. With this model, major predicaments faced by these institutions could be overcome. The paper sets forth an educational plan and spells out the basic tenets of social business, focusing on the nature and implementational aspects of the curriculum. It also evaluates the mechanisms applied by these educational institutions. Since research in this area remains scarce, institutions currently experiment with various methods to find the best way to reach the desired result.
The author is of the opinion that social business in education is the main tool for educating holistic future leaders; hence, educational institutions should inspire students in the classroom to start their own businesses by adopting creative and proactive teaching methods. The proposed model is a contribution in that direction.

Keywords: social business, curriculum, skills, university

Procedia PDF Downloads 77
12174 Molecular Detection of Acute Virus Infection in Children Hospitalized with Diarrhea in North India during 2014-2016

Authors: Ali Ilter Akdag, Pratima Ray

Abstract:

Background: Acute gastroenteritis viruses such as rotavirus, astrovirus, and adenovirus are mainly responsible for diarrhea in children below 5 years of age. Molecular detection of these viruses is crucially important for understanding the disease and developing an effective cure. This study aimed to determine the prevalence of these common viruses in children under 5 years of age presenting with diarrhea at the Lala Lajpat Rai Memorial Medical College (LLRM) centre, Meerut, North India. Methods: A total of 312 fecal samples were collected over 3 years, in 2014 (n = 118), 2015 (n = 128), and 2016 (n = 66), from children under 5 years of age who presented with acute diarrhea at the LLRM centre. All samples were first screened by EIA/RT-PCR for rotavirus, adenovirus, and astrovirus. Results: Among the 312 samples from children with acute diarrhea, rotavirus A was the most frequent virus identified (57 cases; 18.2%), followed by astrovirus (28 cases; 8.9%) and adenovirus (21 cases; 6.7%). Mixed infections were found in 14 cases, all of which presented with acute diarrhea (14/312; 4.48%). Conclusions: These viruses are a major cause of diarrhea in children under 5 years of age in North India. Rotavirus A is the most common etiological agent, followed by astrovirus. Such surveillance is important for vaccine development. Year-to-year variation in virus detection reflects differences in the season of sampling, method of sampling, hygiene conditions, socioeconomic level of the population, enrolment criteria, and virus detection methods. Astrovirus was detected more often than rotavirus in 2015, but over the full three-year study rotavirus A remained the main cause of severe diarrhea in children under 5 years of age in North India. This emphasizes the need for cost-effective diagnostic assays for rotaviruses, which would help determine the disease burden.
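As a quick sanity check, the detection rates reported above follow directly from the stated counts. The short Python fragment below reproduces them; note that standard rounding can differ in the last decimal place from the abstract's figures, which appear to be truncated.

```python
# Reported counts from the study: 312 stool samples and the number
# positive for each virus (mixed = more than one virus detected).
total = 312
counts = {"rotavirus A": 57, "astrovirus": 28, "adenovirus": 21, "mixed": 14}

# Detection rate as a percentage of all samples, rounded to one decimal.
rates = {virus: round(100 * n / total, 1) for virus, n in counts.items()}
```

Rotavirus A works out to roughly 18.3% by rounding versus the 18.2% quoted, with the other agents matching the abstract's figures to the same precision.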

Keywords: adenovirus, Astrovirus, hospitalized children, Rotavirus

Procedia PDF Downloads 124