Search results for: random forest
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2744

1874 Hypoglycemic and Hypolipidemic Effects of Aqueous Flower Extract from Nyctanthes arbor-tristis L.

Authors: Brahmanage S. Rangika, Dinithi C. Peiris

Abstract:

Boiled aqueous flower extract (AFE) of Nyctanthes arbor-tristis L. (Family: Oleaceae) is used in the traditional Sri Lankan medicinal system to treat diabetes. However, this use has not been scientifically validated, and the mechanisms by which the flowers reduce diabetes have not been investigated. The present study was carried out to examine the hypoglycemic potential and toxicity of the aqueous flower extract of N. arbor-tristis. AFE was prepared, and mice were treated orally with 250, 500, or 750 mg/kg of AFE or with distilled water (control). Fasting and random blood glucose levels were determined. In addition, the toxicity of AFE was determined using chronic oral administration. In normoglycemic mice, the mid dose (500 mg/kg) of AFE significantly (p < 0.01) reduced fasting blood glucose levels by 49% at 4 h post treatment. Further, 500 mg/kg of AFE significantly (p < 0.01) lowered the random blood glucose level of non-fasted normoglycemic mice. AFE significantly lowered total cholesterol and triglyceride levels while increasing HDL levels in the serum. Further, AFE significantly inhibited glucose absorption from the lumen of the intestine and increased glucose uptake by the diaphragm. Alpha-amylase inhibitory activity was also evident. AFE did not induce any overt signs of toxicity or hepatotoxicity, and there were no adverse effects on food and water intake or body weight of mice during the experimental period. It can be concluded that AFE of N. arbor-tristis possesses safe oral antidiabetic potential mediated via multiple mechanisms. The results of the present study scientifically support the claims made about the use of N. arbor-tristis in the treatment of diabetes mellitus in the traditional Sri Lankan medicinal system. Further, the flowers can also be used as a remedy to improve the blood lipid profile.

Keywords: aqueous extract, hypoglycemic, hypolipidemic, Nyctanthes arbor-tristis flowers, hepatotoxicity

Procedia PDF Downloads 358
1873 Role of Indigenous Peoples in Climate Change

Authors: Neelam Kadyan, Pratima Ranga, Yogender

Abstract:

Indigenous peoples are among those most affected by climate change, although they have contributed little to its causes. This is largely a result of their historic dependence on local biological diversity, ecosystem services, and cultural landscapes as a source of their sustenance and well-being. Comprising only four percent of the world’s population, they utilize 22 percent of the world’s land surface. Despite their high exposure-sensitivity, indigenous peoples and local communities are actively responding to changing climatic conditions and have demonstrated their resourcefulness and resilience in the face of climate change. Traditional indigenous territories encompass up to 22 percent of the world’s land surface and coincide with areas that hold 80 percent of the planet’s biodiversity. The greatest diversity of indigenous groups also coincides with the world’s largest tropical forest wilderness areas in the Americas (including the Amazon), Africa, and Asia, and 11 percent of world forest lands are legally owned by indigenous peoples and communities. This convergence of biodiversity-significant areas and indigenous territories presents an enormous opportunity to expand efforts to conserve biodiversity beyond parks, which tend to receive most of the funding for biodiversity conservation. Tapping ancestral knowledge: Indigenous peoples are carriers of ancestral knowledge and wisdom about this biodiversity. Their effective participation in biodiversity conservation programs, as experts in protecting and managing biodiversity and natural resources, would result in more comprehensive and cost-effective conservation and management of biodiversity worldwide. Addressing the climate change agenda: Indigenous peoples have played a key role in climate change mitigation and adaptation. The territories of indigenous groups who have been given rights to their lands have been better conserved than the adjacent lands (e.g., in Brazil, Colombia, and Nicaragua).
Preserving large extents of forest would not only support climate change objectives but would also respect the rights of indigenous peoples and conserve biodiversity. A climate change agenda that fully involves indigenous peoples has many more benefits than one involving only government and/or the private sector. Indigenous peoples are among the groups most vulnerable to the negative effects of climate change, and they are also a source of knowledge for many of the solutions that will be needed to avoid or ameliorate those effects. For example, ancestral territories often provide excellent examples of landscape design that can resist the negative effects of climate change. Over the millennia, indigenous peoples have developed adaptation models to climate change. They have also developed genetic varieties of medicinal and useful plants, and animal breeds, with a wider natural range of resistance to climatic and ecological variability.

Keywords: ancestral knowledge, cost effective conservation, management, indigenous peoples, climate change

Procedia PDF Downloads 667
1872 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In this paper, a hierarchical approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which makes it possible to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated for spherical particles with two models of interphases: the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the equivalent inhomogeneities so defined are determined by the method of conditional moments.
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) is compared across the interphase models and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
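As a schematic illustration of the energy-equivalence definition described above (the notation here is an assumption for illustration, not the authors’ own), the equivalent stiffness is chosen so that the strain energy of the equivalent inhomogeneity matches that of the original inhomogeneity plus its interphase under identical boundary conditions:

```latex
% Hill-type energy equivalence (schematic sketch, assumed notation):
% the equivalent inhomogeneity stores the same energy as the
% inhomogeneity-plus-interphase system under the same boundary conditions.
E_{\mathrm{eq}}
  = \tfrac{1}{2}\int_{V}\boldsymbol{\varepsilon}:\mathbf{C}_{\mathrm{eq}}:\boldsymbol{\varepsilon}\,\mathrm{d}V
  = \tfrac{1}{2}\int_{V}\boldsymbol{\varepsilon}:\mathbf{C}_{i}:\boldsymbol{\varepsilon}\,\mathrm{d}V
    + E_{\mathrm{interphase}}
```

Here E_interphase stands for the surface energy in the Gurtin-Murdoch model or the spring-layer energy in the spring layer model, which is how the two interphase models enter the same equivalence condition.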

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 176
1871 Strategic Management Methods in Non-Profit Making Organization

Authors: P. Řehoř, D. Holátová, V. Doležalová

Abstract:

This paper deals with the analysis of strategic management methods in non-profit making organizations in the Czech Republic. Strategic management represents an aggregate of methods and approaches that can be applied to managing organizations; in this article, the organizations are those which associate owners and keepers of non-state forest properties. The authors use the following methods of strategic management: stakeholder analysis, SWOT analysis, and questionnaire inquiries. The questionnaire was distributed electronically via e-mail. In October 2013, data were obtained from a total of 84 questionnaires. Based on the results, the authors recommend the use of a confrontation strategy, which improves the competitiveness of non-profit making organizations.

Keywords: strategic management, non-profit making organization, strategy analysis, SWOT analysis, strategy, competitiveness

Procedia PDF Downloads 471
1870 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars

Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea

Abstract:

Alkali-activated materials are promising binders obtained by alkaline attack on fly ash, metakaolin, or blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry, and performance of this class of materials. Although, in recent years, much research has focused on mix designs and curing conditions, the lack of exhaustive activation models, of standardized mix designs and curing conditions, and of sufficient investigation of shrinkage behavior, efflorescence, additives, and durability prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties, and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, durability in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature.
Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa, associated with shrinkage values of about 4200 microstrains, were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. All mortars showed good resistance in a 5 wt% H2SO4 solution, even after 60 days of immersion, while they showed a decrease in mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700°C.

Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag

Procedia PDF Downloads 314
1869 Genetic Instabilities in Marine Bivalve Following Benzo(α)pyrene Exposure: Utilization of Combined Random Amplified Polymorphic DNA and Comet Assay

Authors: Mengjie Qu, Yi Wang, Jiawei Ding, Siyu Chen, Yanan Di

Abstract:

Marine ecosystems are facing intensified multiple stresses caused by environmental contaminants from human activities. Xenobiotics such as benzo(α)pyrene (BaP) have been discharged into the marine environment and have hazardous impacts on both marine organisms and human beings. As filter feeders, marine mussels, Mytilus spp., have been extensively used to monitor the marine environment. However, the genomic alterations induced in them by such xenobiotics remain unknown. In the present study, gills, the first defense barrier in mussels, were selected to evaluate the genetic instability induced by exposure to BaP both in vivo and in vitro. The random amplified polymorphic DNA (RAPD) assay and the comet assay were applied as rapid tools to assess environmental stress, owing to their low cost and time requirements. All mussels were identified as the single species Mytilus coruscus before being used in BaP exposure at a concentration of 56 μg/l for 1 and 3 days (in vivo exposure) or 1 and 3 hours (in vitro). Both RAPD and comet assay results showed significantly increased genomic instability with a time-specific altering pattern. After the recovery period in the in vivo exposure, genomic status returned to that of the control condition. However, relatively higher genomic instability was still observed in gill cells after recovery from the in vitro exposure condition. Different repair mechanisms or signaling pathways might be involved in the isolated gill cells in comparison with intact tissues. The study provides robust and rapid techniques to examine genomic stability in marine organisms in response to marine environmental changes and provides basic information for further mechanistic research on stress responses in marine organisms.

Keywords: genotoxic impacts, in vivo/vitro exposure, marine mussels, RAPD and comet assay

Procedia PDF Downloads 263
1868 Assessment the Implications of Regional Transport and Local Emission Sources for Mitigating Particulate Matter in Thailand

Authors: Ruchirek Ratchaburi, W. Kevin. Hicks, Christopher S. Malley, Lisa D. Emberson

Abstract:

Air pollution problems in Thailand have improved over the last few decades, but in some areas, concentrations of coarse particulate matter (PM₁₀) are above health and regulatory guidelines. It is, therefore, useful to investigate how PM₁₀ varies across Thailand, what conditions cause this variation, and how PM₁₀ concentrations could be reduced. This research uses data collected by the Thailand Pollution Control Department (PCD) from 17 monitoring sites, located across 12 provinces, and obtained between 2011 and 2015 to assess PM₁₀ concentrations and the conditions that lead to different levels of pollution. This is achieved through exploration of air mass pathways using trajectory analysis, used in conjunction with the monitoring data, to understand the contribution of different months, hours of the day, and source regions to annual PM₁₀ concentrations in Thailand. A focus is placed on locations that exceed the national standard for the protection of human health. The analysis shows how this approach can be used to explore the influence of biomass burning on annual average PM₁₀ concentrations and the difference in air pollution conditions between Northern and Southern Thailand. The results demonstrate the substantial contribution that open biomass burning from agriculture and forest fires in Thailand and neighboring countries makes to annual average PM₁₀ concentrations. The analysis of PM₁₀ measurements at monitoring sites in Northern Thailand shows that, in general, high concentrations tend to occur in March and that these particularly high monthly concentrations make a substantial contribution to the overall annual average concentration. In 2011, a > 75% reduction in the extent of biomass burning in Northern Thailand and in neighboring countries resulted in a substantial reduction not only in the magnitude and frequency of peak PM₁₀ concentrations but also in annual average PM₁₀ concentrations at sites across Northern Thailand.
In Southern Thailand, the annual average PM₁₀ concentrations for individual years between 2011 and 2015 did not exceed the human health standard at any site. The highest peak concentrations in Southern Thailand were much lower than those in Northern Thailand at all sites. The peak concentrations at sites in Southern Thailand generally occurred between June and October and were associated with air mass back trajectories that spent a substantial proportion of time over the sea, Indonesia, Malaysia, and Thailand prior to arrival at the monitoring sites. The results show that emission reductions from biomass burning and forest fires require action on national and international scales, in both Thailand and neighboring countries; such action could help ensure compliance with Thailand's air quality standards.
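The month-by-month decomposition of an annual average concentration described above can be sketched as follows. The monthly values here are made-up illustrative numbers (with a March peak), not the PCD measurements:

```python
# Decompose an annual mean PM10 concentration into per-month contributions.
# Values are illustrative monthly means in ug/m3, not PCD data.
monthly_pm10 = {
    "Jan": 45.0, "Feb": 70.0, "Mar": 120.0, "Apr": 80.0,
    "May": 40.0, "Jun": 30.0, "Jul": 28.0, "Aug": 27.0,
    "Sep": 32.0, "Oct": 35.0, "Nov": 38.0, "Dec": 42.0,
}

def annual_mean(monthly):
    """Annual average as the mean of the monthly means."""
    return sum(monthly.values()) / len(monthly)

def monthly_contribution(monthly):
    """Fraction of the annual mean contributed by each month."""
    total = sum(monthly.values())
    return {month: value / total for month, value in monthly.items()}

mean = annual_mean(monthly_pm10)
contrib = monthly_contribution(monthly_pm10)
peak_month = max(contrib, key=contrib.get)  # the month dominating the annual average
```

With these numbers the March peak alone accounts for roughly a fifth of the annual average, which is the kind of single-month dominance the abstract describes for Northern Thailand.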

Keywords: annual average concentration, long-range transport, open biomass burning, particulate matter

Procedia PDF Downloads 171
1867 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of the cumulative distribution functions and compared to critical values. The table of critical values was constructed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are further strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous work. At present, the extension to the two-dimensional case has been completed, which allows up to five parameters to be tested jointly.
The derived technique is therefore equivalent to classic tests in standard situations but offers more efficient alternatives for nonstandard problems and large amounts of data.
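A minimal numerical sketch of the core measure described above: the largest absolute difference between two normal cumulative distribution functions, evaluated on a grid. The grid bounds and resolution are assumptions for illustration; the paper's simulated critical-value table is not reproduced here:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def max_cdf_difference(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=2001):
    """Approximate sup_x |F1(x) - F2(x)| on a uniform grid over [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return max(
        abs(normal_cdf(lo + i * step, mu1, s1) - normal_cdf(lo + i * step, mu2, s2))
        for i in range(n)
    )

# Identical parameter sets give zero distance; a shifted mean gives a
# positive distance that would be compared against a critical value.
d_same = max_cdf_difference(0.0, 1.0, 0.0, 1.0)
d_shift = max_cdf_difference(0.0, 1.0, 1.0, 1.0)
```

For a unit mean shift the supremum is attained midway between the two means, at x = 0.5, where it equals Φ(0.5) − Φ(−0.5) ≈ 0.383; smaller shifts produce proportionally smaller statistics, which is what makes the measure usable for hypothesis testing.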

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 167
1866 Multilevel Regression Model - Evaluate Relationship Between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset Accounting for Influence of Key Sociodemographic Factors Using a Longitudinal Household Survey Data

Authors: Linyi Fan, C.J. Schumaker

Abstract:

Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while more lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, existing cross-sectional studies fail to show how function-related issues such as ADL in early years predict AD and how social factors influence health, either in addition to or in interaction with individual risk factors. This study would help improve screening and early treatment for the elderly population and inform healthcare practice. The findings have academic and practical significance in terms of creating positive social change. Methodology: The purpose of this quantitative historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey data set that is available for research on retirement and health among the elderly in the United States. The sample was selected based on completion of a survey questionnaire about AD and dementia. The variable indicating whether the participant had been diagnosed with AD was the dependent variable; the ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 patients who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had high school or higher education degrees, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and change in ADL on the onset of AD in later years while allowing the intercept of the model to vary by level of education.
The results suggested that the only significant predictor of the onset of AD was change in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and a longer observation period for ADL, did not support this finding. The model also estimated whether the variances of the random effects vary by Level-2 variables. The results suggested that the variances associated with the random slopes were approximately zero, suggesting that the relationship between early years’ ADL and AD onset was not influenced by sociodemographic factors. Conclusion: The findings indicated that an increase in changes in ADL leads to an increase in the probability of AD onset in the future. However, this finding was not supported in the broader observation period model. The study also failed to reject the hypothesis that sociodemographic factors explained significant amounts of variance in the random effects. Recommendations were then made for future research and practice based on these limitations and the significance of the findings.
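The random-intercept structure behind the multilevel model above can be sketched in miniature: individuals nested in Level-2 groups (here, education levels) whose intercepts differ, with the intraclass correlation measuring how much outcome variance sits between groups. The data, group names, and coefficients below are simulated for illustration and are not the RAND HRS analysis:

```python
import random
import statistics

random.seed(42)

# Simulate a Level-2 grouping (e.g. education level) with its own intercept,
# plus an individual-level predictor -- the structure a random-intercept
# multilevel model is designed to capture.
group_intercepts = {"no_hs": 0.0, "hs": 0.5, "college": 1.0}
data = []
for group, u in group_intercepts.items():
    for _ in range(200):
        adl = random.gauss(0, 1)                     # Level-1 predictor (ADL change)
        y = 0.3 * adl + u + random.gauss(0, 0.5)     # true fixed slope = 0.3
        data.append((group, adl, y))

def icc(rows):
    """Intraclass correlation: share of outcome variance between groups."""
    groups = {}
    for g, _, y in rows:
        groups.setdefault(g, []).append(y)
    group_means = [statistics.mean(v) for v in groups.values()]
    between = statistics.pvariance(group_means)
    within = statistics.mean(statistics.pvariance(v) for v in groups.values())
    return between / (between + within)

def within_slope(rows):
    """Fixed-effect slope estimated after centering within each group."""
    groups = {}
    for g, x, y in rows:
        groups.setdefault(g, []).append((x, y))
    num = den = 0.0
    for pairs in groups.values():
        mx = statistics.mean(p[0] for p in pairs)
        my = statistics.mean(p[1] for p in pairs)
        for x, y in pairs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den
```

A nonzero ICC is what justifies letting the intercept vary by education level, and the within-group slope recovers the individual-level ADL effect net of those group differences.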

Keywords: alzheimer’s disease, epidemiology, moderation, multilevel modeling

Procedia PDF Downloads 122
1865 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers

Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan

Abstract:

Aptamers are artificial oligonucleotides, typically ~80-100 bp long, that have demonstrated applications not only in therapeutics but also, extensively, in diagnostics and sensing to detect different biomarkers and drugs. Synthesizing aptamers against proteins or genomic templates is comparatively feasible in the laboratory, but aptamers against drugs or other chemical targets require major specification and proper optimization and validation. One has to optimize all selection, amplification, and characterization steps of the end product, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) to synthesize a random oligonucleotide pool and further used it in Systematic Evolution of Ligands by Exponential Enrichment (SELEX) to synthesize aptamers against anti-psychotic drugs. Anti-psychotic drugs are major tranquilizers used to control psychosis and support proper cognitive function. Although their medical use is limited, their misuse may lead to severe medical conditions such as addiction and can promote crime, with social and economic impacts. In this work, we approached the in vitro SELEX method for ssDNA synthesis against anti-psychotic drugs (in this case, the 'target'). The study was performed in three stages. The first stage included the synthesis of a random oligonucleotide pool via asymmetric PCR, whose end product was analyzed by electrophoresis and purified for the subsequent stages. The purified oligonucleotide pool was incubated in SELEX buffer, and partitioning was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partitioning, and significant results were obtained. After the repetitive partitioning and amplification of the target-specific oligonucleotides, the final stage included sequencing of the end product.
This confirmed the specific sequences for the anti-psychotic drugs, which will be used further in diagnostic applications in clinical and forensic settings.

Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX

Procedia PDF Downloads 123
1864 Self-Image of Police Officers

Authors: Leo Carlo B. Rondina

Abstract:

Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen with the use of random sampling. With the use of Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicated a statistically significant effect for ages 21-40, which means that the age of the respondent statistically improves self-image.

Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect

Procedia PDF Downloads 272
1863 Preliminary Result on the Impact of Anthropogenic Noise on Understory Bird Population in Primary Forest of Gaya Island

Authors: Emily A. Gilbert, Jephte Sompud, Andy R. Mojiol, Cynthia B. Sompud, Alim Biun

Abstract:

Gaya Island of Sabah is known for its wildlife and marine biodiversity and has marked itself as one of the hot destinations for tourists from all around the world. Gaya Island tourism activities have contributed to Sabah’s economic revenue through the high number of tourists visiting the island. However, this has led to increased anthropogenic noise derived from tourism activities, which may greatly interfere with animals such as understory birds that rely on acoustic signals as a tool for communication. Many studies in other parts of the region reveal that anthropogenic noise decreases the species richness of avian communities. In Malaysia, however, published research regarding the impact of anthropogenic noise on understory birds is still very scarce. This study was conducted in order to fill this gap; it aims to investigate the impact of anthropogenic noise on the understory bird population. Three sites within the primary forest of Gaya Island were chosen to sample the level of anthropogenic noise in relation to the understory bird population. Noise mapping was used to measure the anthropogenic noise level and to identify zones with high anthropogenic noise levels (> 60 dB) and zones with low anthropogenic noise levels (< 60 dB), based on the standard noise threshold. The methods used in this study were solely mist netting and ring banding, chosen because they can determine the diversity of the understory bird population in Gaya Island. The preliminary study was conducted from 15th to 26th April and 5th to 10th May 2015, during which two mist nets were set up in each of the zones within the selected sites. The data were analyzed using descriptive analysis, presence-absence analysis, diversity indices, and a diversity t-test, with the PAST software used to analyze the obtained data.
The results of this study record a total of 60 individuals, consisting of 12 species from 7 families of understory birds, at the three sites in Gaya Island. The Shannon-Wiener index shows that the diversity of species in the high and low anthropogenic noise zones was 1.573 and 2.009, respectively. However, the statistical analysis shows no significant difference between these zones. Nevertheless, the presence-absence analysis shows that the number of species in the low anthropogenic noise zone was higher than in the high anthropogenic noise zone. Thus, this result indicates an impact of anthropogenic noise on the population diversity of understory birds. There remains an urgent need for an in-depth study with an increased sample size in the selected sites in order to fully understand the impact of anthropogenic noise on the understory bird population, so that it can then be incorporated into wildlife management for a sustainable environment in Gaya Island.
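The Shannon-Wiener diversity values reported above can be computed from species counts as follows. The example counts are made up for illustration and are not the Gaya Island capture data:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i) over species proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Three equally abundant species give the maximum H' for 3 species, ln(3) ~ 1.099.
h_even = shannon_wiener([10, 10, 10])
# A community dominated by one species scores much lower.
h_skewed = shannon_wiener([28, 1, 1])
```

The same total abundance thus yields a lower index when captures concentrate in one species, which is why the quieter zone's 2.009 versus the noisy zone's 1.573 points to a more even, species-rich community.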

Keywords: anthropogenic noise, biodiversity, Gaya Island, understory bird

Procedia PDF Downloads 353
1862 Implementation and Challenges of Assessment Methods in the Case of Physical Education Class in Some Selected Preparatory Schools of Kirkos Sub-City

Authors: Kibreab Alene Fenite

Abstract:

The purpose of this study is to investigate the implementation and challenges of different assessment methods for physical education classes in some selected preparatory schools of Kirkos sub-city. The participants in this study are teachers, students, department heads, and school principals from 4 selected schools. Of the 8 schools in Kirkos sub-city offering preparatory education, 4 schools (Dandi Boru, Abiyot Kirse, Assay, and Adey Ababa) were selected using simple random sampling techniques, and from these schools all (100%) of the teachers, department heads, and school principals were taken as a sample, as their number is manageable. From the total of 2,520 students, 252 (10%) were selected using simple random sampling. Accordingly, 13 teachers, 252 students, 4 department heads, and 4 school principals were purposively taken as a sample from the 4 selected schools. As data-gathering tools, a questionnaire and interviews were employed. To analyze the collected data, both quantitative and qualitative methods were used. The results of the study revealed that assessment in physical education is not implemented properly: a lack of sufficient materials, inadequate time allotment, large class sizes, a lack of collaboration among teachers in assessing students' performance, the absence of guidelines for assessing physical education, and the absence of assessment methods adapted to students with disabilities in line with their special needs were found to be the major challenges in implementing the current assessment methods in physical education. To overcome these problems, the following recommendations have been put forward.
The necessary facilities and equipment should be made available; in order to make assessment reliable, accurate, objective, and relevant, physical education teachers should be familiarized with different assessment techniques; physical education assessment guidelines, including different types of assessment methods, should be prepared; qualified teachers should be employed; and adequate teaching rooms must be built.

Keywords: assessment, challenges, equipment, guidelines, implementation, performance

Procedia PDF Downloads 271
1861 A Geospatial Analysis of Residential Conservation-Attitude, Intention and Behavior

Authors: Prami Sengupta, Randall A. Cantrell, Tracy Johns

Abstract:

A typical US household consumes more energy than households in other countries and is directly responsible for a considerable proportion of the atmospheric concentration of greenhouse gases. This makes U.S. households a vital target group for energy conservation studies. Positive household behavior is central to residential energy conservation. However, for individuals to conserve energy, they must not only know how to conserve energy but also be willing to do so. That is, a positive attitude towards residential conservation and an intention to conserve energy are two of the most important psychological determinants of energy conservation behavior. Most social science studies to date have studied the relationships between attitude, intention, and behavior by building upon socio-psychological theories of behavior. However, these frameworks, including the widely used Theory of Planned Behavior and Social Cognitive Theory, lack a spatial component. That is, these studies fail to capture the impact of the geographical locations of homeowners’ residences on their residential energy consumption and conservation practices. Therefore, the purpose of this study is to explore geospatial relationships between homeowners’ residential energy conservation-attitudes, conservation-intentions, and consumption behavior. The study analyzes residential conservation-attitudes and conservation-intentions of homeowners across 63 counties in Florida and compares them with quantifiable measures of residential energy consumption. Empirical findings revealed that the spatial distribution of high and/or low mean-score values of homeowners’ conservation-attitudes and conservation-intentions is more spatially clustered than would be expected if the underlying spatial processes were random. On the contrary, the spatial distribution of high and/or low values of households’ carbon footprints was found to be more spatially dispersed than would be expected if the underlying spatial process were random. The study also examined the influence of potential spatial variables, such as urban or rural setting and the presence of educational institutions and/or extension programs, on the conservation-attitudes, intentions, and behaviors of homeowners.
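Spatial clustering versus dispersion of this kind is commonly tested with a global spatial autocorrelation statistic such as Moran's I. A minimal sketch, using a hypothetical four-county chain and illustrative attitude scores (not the Florida data):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for values x and a symmetric spatial weight matrix W."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(x)
    z = x - x.mean()                          # deviations from the mean
    num = n * (w * np.outer(z, z)).sum()      # spatially weighted cross-products
    den = w.sum() * (z ** 2).sum()
    return num / den

# Hypothetical adjacency: four counties in a chain (1-2-3-4)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# Illustrative mean attitude scores: high values adjacent to high values (clustered)
attitude_scores = [4.2, 4.0, 2.1, 1.9]
print(round(morans_i(attitude_scores, w), 3))
```

A positive value indicates clustering (like the attitude/intention scores above), while a value near the theoretical minimum indicates dispersion (like the carbon footprints).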

Keywords: conservation-attitude, conservation-intention, geospatial analysis, residential energy consumption, spatial autocorrelation

Procedia PDF Downloads 177
1860 Fire Risk Information Harmonization for Transboundary Fire Events between Portugal and Spain

Authors: Domingos Viegas, Miguel Almeida, Carmen Rocha, Ilda Novo, Yolanda Luna

Abstract:

Forest fires along the more than 1200 km of the Spanish-Portuguese border are increasingly frequent, currently reaching around 2000 fire events per year. Some of these events develop into large international wildfires requiring concerted operations based on information shared between the two countries. The fire event of Valencia de Alcantara (2003), which caused several fatalities and burnt more than 13000 ha, is a reference example of these international events. Currently, Portugal and Spain have a specific cross-border cooperation protocol on wildfire response for a strip of about 30 km (15 km on each side). Public authorities recognize the success of this collaboration; however, it is also acknowledged that this cooperation should include more functionalities, such as the development of a common risk information system for transboundary fire events. Since Portuguese and Spanish authorities use different approaches to determine the inputs of fire risk indexes and different methodologies to assess fire risk, joint firefighting operations are sometimes jeopardized because the information is not harmonized and civil protection agents from the two countries do not share a single understanding of the situation. Thus, a methodology aiming at the harmonization of fire risk calculation and perception by Portuguese and Spanish civil protection authorities is presented here, together with the final results. The fire risk index used in this work is the Canadian Fire Weather Index (FWI), which is based on meteorological data. The FWI is limited in its application, as it does not take into account other important factors with great effect on fire ignition and development. The combination of these factors is very complex since, besides meteorology, it addresses several parameters from different topics, namely sociology, topography, vegetation and soil cover. Therefore, the meaning of FWI values differs from region to region, according to the specific characteristics of each region. In this work, a methodology for FWI calibration based on the number of fire occurrences and on the burnt area in the transboundary regions of Portugal and Spain is proposed, in order to assess fire risk based on calibrated FWI values. As previously mentioned, cooperative firefighting operations require a common perception of the shared information. Therefore, a common classification of fire risk for fire events occurring in the transboundary strip is proposed, with the objective of harmonizing this type of information. This work is part of the ECHO project SpitFire - Spanish-Portuguese Meteorological Information System for Transboundary Operations in Forest Fires, which aims to develop a web platform for information sharing and decision-support tools to be used in international fire events involving Portugal and Spain.
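One simple way to make FWI classes region-specific is to derive class boundaries from percentiles of the local FWI distribution. The paper calibrates against fire occurrences and burnt area; the percentile approach, the class labels, and all values below are illustrative assumptions only:

```python
import numpy as np

# Hypothetical daily FWI series for one border region; a real calibration would
# use historical FWI together with fire occurrence and burnt-area records.
rng = np.random.default_rng(0)
fwi = rng.gamma(shape=2.0, scale=8.0, size=3650)   # ~10 years of daily values

# Region-specific class boundaries taken from the local FWI distribution
thresholds = np.percentile(fwi, [50, 75, 90, 97.5])
labels = ["low", "moderate", "high", "very high", "extreme"]

def risk_class(value):
    """Map a daily FWI value to a harmonized risk class via local percentiles."""
    return labels[int(np.searchsorted(thresholds, value))]

print(risk_class(5.0), risk_class(60.0))
```

Because each region's thresholds come from its own climatology, the same class name ("extreme", say) carries a comparable meaning on both sides of the border.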

Keywords: data harmonization, FWI, international collaboration, transboundary wildfires

Procedia PDF Downloads 239
1859 Combining Shallow and Deep Unsupervised Machine Learning Techniques to Detect Bad Actors in Complex Datasets

Authors: Jun Ming Moey, Zhiyaun Chen, David Nicholson

Abstract:

Bad actors are often hard to detect in data that imprints their behaviour patterns because they are comparatively rare events embedded in non-bad actor data. An unsupervised machine learning framework is applied here to detect bad actors in financial crime datasets that record millions of transactions undertaken by hundreds of actors (<0.01% bad). Specifically, the framework combines ‘shallow’ (PCA, Isolation Forest) and ‘deep’ (Autoencoder) methods to detect outlier patterns. Detection performance analysis for both the individual methods and their combination is reported.
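A minimal sketch of a shallow score combination on synthetic data (the feature construction, outlier geometry, and rank-average fusion rule are assumptions for illustration; the paper's autoencoder component is omitted here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for actor-level transaction features: normal actors lie near
# a 3-dimensional subspace of the 8 features; a handful of "bad actors" do not.
rng = np.random.default_rng(42)
A = rng.normal(size=(3, 8))
normal = rng.normal(size=(5000, 3)) @ A + 0.1 * rng.normal(size=(5000, 8))
bad = rng.normal(0, 3, size=(5, 8))              # rare off-pattern behaviour
X = np.vstack([normal, bad])

# 'Shallow' score 1: PCA reconstruction error
pca = PCA(n_components=3).fit(X)
recon = pca.inverse_transform(pca.transform(X))
pca_score = ((X - recon) ** 2).sum(axis=1)

# 'Shallow' score 2: Isolation Forest (negated so that higher = more anomalous)
iso_score = -IsolationForest(random_state=0).fit(X).score_samples(X)

# Rank-average fusion of the two outlier scores
def to_rank(s):
    return np.argsort(np.argsort(s)) / (len(s) - 1)

combined = (to_rank(pca_score) + to_rank(iso_score)) / 2
flagged = np.argsort(combined)[-5:]
print(sorted(flagged.tolist()))                  # indices of the flagged actors
```

Rank averaging is only one possible fusion rule; the relative performance of the individual scores and their combination is exactly what the abstract's detection analysis measures.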

Keywords: detection, machine learning, deep learning, unsupervised, outlier analysis, data science, fraud, financial crime

Procedia PDF Downloads 80
1858 Labile and Humified Carbon Storage in Natural and Anthropogenically Affected Luvisols

Authors: Kristina Amaleviciute, Ieva Jokubauskaite, Alvyra Slepetiene, Jonas Volungevicius, Inga Liaudanskiene

Abstract:

The main task of this research was to investigate the chemical composition of differently used soils through their profiles. To identify differences between the soils, soil organic carbon (SOC) and its fractional composition were investigated: dissolved organic carbon (DOC), mobile humic acids (MHA) and the C to N ratio of natural and anthropogenically affected Luvisols. Research object: natural and anthropogenically affected Luvisols, Akademija, Kedainiai distr., Lithuania. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LAMMC. Soil samples for chemical analyses were taken from the genetic soil horizons. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at 590 nm wavelength using glucose standards. For mobile humic acid (MHA) determination, the extraction procedure was carried out using 0.1 M NaOH solution. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph. pH was measured in H2O. Total N was determined by the Kjeldahl method. Results: Based on the obtained results, it can be stated that a transformation of chemical composition proceeds through the genetic soil horizons. The morphology of the upper layers of the soil profile, which formed under natural conditions, was changed by anthropogenic (agrogenic, urbogenic, technogenic and other) processes. Anthropogenic activities and mechanical and biochemical disturbances destroy the natural course of soil formation and complicate the interpretation of soil development. Due to intensive cultivation, the pH curve evens out relative to the natural Luvisol (the acidification characteristic of the E horizon disappears). Luvisols affected by agricultural activities were characterized by a decrease in the absolute amount of humic substances in separate horizons, but more sustainable, higher carbon sequestration and a thicker humic horizon were observed compared with the forest Luvisol. However, the average content of humic substances in the soil profile was lower. The soil organic carbon content in anthropogenic Luvisols was lower than in the natural forest soil, but it was more evenly distributed over a thicker accumulative horizon. These data suggest that geo-ecological functions decline and agroecological functions increase in Luvisols under anthropogenic use. Acknowledgement: This work was supported by the National Science Programme ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015], funded by the Research Council of Lithuania.

Keywords: agrogenization, dissolved organic carbon, luvisol, mobile humic acids, soil organic carbon

Procedia PDF Downloads 221
1857 The Role of Innovative Marketing on Achieving Quality in Petroleum Company

Authors: Malki Fatima Zahra Nadia, Kellal Chaimaa, Brahimi Houria

Abstract:

The following research aims to measure the impact of innovative marketing in achieving product quality in the Algerian Petroleum Company. In order to achieve the aim of the study, a random sample of 60 individuals was selected and the answers were analyzed using structural equation modeling to test the study hypotheses. The research concluded that there is a strong relationship between innovative marketing and the quality of petroleum products.

Keywords: marketing, innovation, quality, petroleum products

Procedia PDF Downloads 67
1856 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube

Authors: Dan Kanmegne

Abstract:

Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have a great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a prerequisite to the quantification of carbon stock in different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information to support decision-making out of this large amount of data, satellite data need to be organized to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis.
A data cube for Burkina Faso has been set up from the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture of multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach in its initial stage is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on the vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems and qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used for both training the algorithm and accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of different systems (main species, management, area, etc.); (iii) assessment report of Burkina Faso data cube.
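The initial stratification step can be sketched as unsupervised clustering of per-pixel NDVI time series. The pixel values, the two-cluster setup, and the use of k-means below are illustrative assumptions, not the study's exact algorithm or data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for per-pixel NDVI time series (the study uses 2010-2018
# imagery from the Burkina Faso data cube; values here are invented).
rng = np.random.default_rng(1)
months = 108                      # 9 years of monthly composites
t = np.arange(months)
season = np.sin(2 * np.pi * t / 12)

cropland = 0.35 + 0.25 * season + rng.normal(0, 0.03, (500, months))   # strong seasonality
woodland = 0.60 + 0.10 * season + rng.normal(0, 0.03, (500, months))   # high, stable NDVI
pixels = np.vstack([cropland, woodland])

# Unsupervised stratification of the NDVI time series
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
strata = km.labels_

# Each stratum can then be sampled to place field-visit locations
sample = rng.choice(np.where(strata == strata[0])[0], size=4, replace=False)
print(sorted(sample.tolist()))
```

In the study the resulting strata (fifteen, not two) drive the random placement of sampling units, and the field data later train a supervised random forest classifier.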

Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification

Procedia PDF Downloads 131
1855 Simulation of a Control System for an Adaptive Suspension System for Passenger Vehicles

Authors: S. Gokul Prassad, S. Aakash, K. Malar Mohan

Abstract:

To cope with the challenges faced by the automobile industry in providing ride comfort, electronics and control systems play a vital role. The control systems in an automobile monitor various parameters and control the performance of the subsystems, thereby providing better handling characteristics. The automobile suspension system is one of the main systems that ensure the safety, stability and comfort of the passengers; it is solely responsible for isolating the entire automobile from harmful road vibrations. Thus, integrating control systems into the automobile suspension system would enhance its performance. The diverse road conditions of India demand an efficient suspension system that can provide optimum ride comfort in all road conditions. For any passenger vehicle, the design of the suspension system plays a very important role in assuring ride comfort and handling characteristics. In recent years, the air suspension system has been preferred over conventional suspension systems to ensure ride comfort. In this article, the ride comfort of the adaptive suspension system is compared with that of the passive suspension system. The schema is created in the MATLAB/Simulink environment. The system is controlled by a proportional-integral-derivative (PID) controller. Tuning of the controller was done with the Particle Swarm Optimization (PSO) algorithm, since it suited the problem best; the Ziegler-Nichols and Modified Ziegler-Nichols tuning methods were also tried and compared. Both the static and dynamic responses of the systems were calculated. Various random road profiles as per the ISO 8608 standard are modelled in the MATLAB environment and their responses plotted. Open-loop and closed-loop responses to the random roads and to various bumps and potholes are also plotted. The simulation results of the proposed design are compared with those of the available passive suspension system. The obtained results show that the proposed adaptive suspension system is efficient in controlling the maximum overshoot, and the settling time of the system is reduced considerably.
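The overshoot/settling-time comparison can be illustrated with a discrete-time PID loop on a single-degree-of-freedom stand-in for the sprung mass. The plant parameters and hand-picked gains below are assumptions for a Python sketch; the paper's model is built in MATLAB/Simulink with PSO-tuned gains:

```python
import numpy as np

def simulate(kp, ki, kd, dt=0.001, steps=5000):
    """Discrete PID loop on a 1-DOF mass-spring-damper stand-in for the sprung mass."""
    m, c, k = 250.0, 1000.0, 16000.0     # assumed mass (kg), damping, stiffness
    x, v = 0.05, 0.0                     # initial 5 cm bump displacement
    integral, prev_err = 0.0, -x
    trace = []
    for _ in range(steps):
        err = -x                                     # setpoint is 0 (level ride)
        integral += err * dt
        deriv = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * deriv
        a = (force - c * v - k * x) / m              # Newton's second law
        v += a * dt
        x += v * dt
        prev_err = err
        trace.append(x)
    return np.array(trace)

def settling_time(trace, dt=0.001, band=0.002):
    """Time after which |x| stays within the +/-2 mm band."""
    outside = np.where(np.abs(trace) > band)[0]
    return (outside[-1] + 1) * dt if len(outside) else 0.0

passive = simulate(0.0, 0.0, 0.0)                # no control force
active = simulate(40000.0, 200000.0, 3000.0)     # hand-picked gains, not PSO-tuned

print(settling_time(passive), settling_time(active))
```

Even with these crude, untuned gains the controlled system settles faster than the passive one, which is the qualitative result the abstract reports; PSO tuning searches the (kp, ki, kd) space for gains that minimize such a performance criterion.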

Keywords: automobile suspension, MATLAB, control system, PID, PSO

Procedia PDF Downloads 287
1854 Analysis of the Impact of Suez Canal on the Robustness of Global Shipping Networks

Authors: Zimu Li, Zheng Wan

Abstract:

The Suez Canal plays an important role in global shipping networks and is one of the most frequently used waterways in the world. The canal obstruction by the ship Ever Given in March 2021, however, completely blocked the Suez Canal for a week and caused significant disruption to world trade. Therefore, it is very important to quantitatively analyze the impact of the accident on the robustness of the global shipping network. However, current research on maritime transportation networks is usually limited to local or small-scale networks in a certain region. Based on complex network theory, this study establishes a global shipping complex network covering 2713 nodes and 137830 edges, using real ship trajectory data from the global automatic identification system (AIS) in 2018. Two attack modes, deliberate (Suez Canal blocking) and random, are defined to calculate the changes in network node degree, eccentricity, clustering coefficient, network density, isolated nodes, betweenness centrality, and closeness centrality under the two attack modes, and to quantitatively analyze the actual impact of the Suez Canal blocking on the robustness of the global shipping network. The results of the network robustness analysis show that the Suez Canal blocking was more destructive to the shipping network than random attacks of the same scale. Network connectivity and accessibility decreased significantly, and the decline diminished with the distance between a port and the canal, showing a distance-attenuation phenomenon. This study further analyzes the impact of the blocking of the Suez Canal on Chinese ports and finds that it significantly interferes with China's shipping network and seriously affects China's normal trade activities. Finally, the impact on the global supply chain is analyzed, and it is found that blocking the canal seriously damages the normal operation of the global supply chain.
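A toy sketch of the deliberate-versus-random attack comparison with networkx (the small scale-free graph, the five-node attack size, and global efficiency as the single robustness metric are all illustrative assumptions; the study's network has 2713 ports and seven metrics):

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(200, 2, seed=0)   # synthetic scale-free stand-in

def efficiency_after_removal(graph, nodes):
    """Global efficiency of the graph after removing the given nodes."""
    g = graph.copy()
    g.remove_nodes_from(nodes)
    return nx.global_efficiency(g)

baseline = nx.global_efficiency(G)

# Deliberate attack: remove the highest-betweenness, "canal-like" choke points
bc = nx.betweenness_centrality(G)
targeted = sorted(bc, key=bc.get, reverse=True)[:5]

# Random attack of the same scale
randomized = random.sample(list(G.nodes), 5)

print(baseline,
      efficiency_after_removal(G, targeted),
      efficiency_after_removal(G, randomized))
```

Removing high-betweenness nodes degrades connectivity far more than removing the same number of random nodes, which is the qualitative pattern the study reports for the canal blockage.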

Keywords: global shipping networks, ship AIS trajectory data, main channel, complex network, eigenvalue change

Procedia PDF Downloads 169
1853 Tobacco Taxation and the Heterogeneity of Smokers' Responses to Price Increases

Authors: Simone Tedeschi, Francesco Crespi, Paolo Liberati, Massimo Paradiso, Antonio Sciala

Abstract:

This paper aims to contribute to the understanding of smokers’ responses to cigarette price increases with a focus on heterogeneity, both across individuals and across price levels. To do this, a stated-preference quasi-experimental design grounded in a random utility framework is proposed to evaluate the effect on smokers’ utility of the price level and variation, along with social conditioning and health impact perception. The analysis is based on individual-level data drawn from a unique survey gathering very detailed information on Italian smokers’ habits. In particular, qualitative information on the individual reactions triggered by price changes of different magnitude and composition is exploited. The main findings stemming from the analysis are the following: the average price elasticity of cigarette consumption is comparable with previous estimates for advanced economies (-0.32). However, the decomposition of this result across five latent classes of smokers reveals extreme heterogeneity in terms of price responsiveness, implying a potential price elasticity that ranges from 0.05 to almost 1 in absolute value. Such heterogeneity is in part explained by observable characteristics such as age, income, gender and education, as well as (current and lagged) smoking intensity. Moreover, price responsiveness is far from independent of the size of the prospected price increase. Finally, by comparing even and uneven price variations, it is shown that uniform across-brand price increases are able to limit the scope for product substitution and downgrading. The estimated price-response heterogeneity has significant implications for tax policy. First, it provides evidence and a rationale for why the aggregate price elasticity is likely to follow a strictly increasing pattern as a function of the experienced price variation. This information is crucial for forecasting the effect of a given tax-driven price change on tax revenue.
Second, it provides some guidance on how to design excise tax reforms to balance public health and revenue goals.
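How class-level elasticities aggregate into a single average can be shown with a back-of-envelope calculation. The class shares and elasticities below are invented for illustration, not the paper's estimates:

```python
# Hypothetical latent classes: consumption share and own-price elasticity
classes = [
    {"share": 0.30, "elasticity": -0.05},   # highly addicted, barely responsive
    {"share": 0.25, "elasticity": -0.15},
    {"share": 0.20, "elasticity": -0.35},
    {"share": 0.15, "elasticity": -0.60},
    {"share": 0.10, "elasticity": -0.95},   # price-sensitive smokers
]

# Consumption-weighted aggregate elasticity
aggregate = sum(c["share"] * c["elasticity"] for c in classes)
print(round(aggregate, 3))

# Expected aggregate consumption change for a 10% price increase
price_change = 0.10
print(f"{aggregate * price_change:+.2%}")
```

The same aggregate figure is consistent with very different class compositions, which is why forecasting revenue from the average elasticity alone can mislead when large price changes shift behaviour within the responsive classes.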

Keywords: smoking behaviour, preference heterogeneity, price responsiveness, cigarette taxation, random utility models

Procedia PDF Downloads 146
1852 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order on the efficiency and precision of preference estimation for dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. A nine-point hedonic scale was used to record responses, and a mixed linear model analysis was assumed, with random tester and treatment effects and a fixed testing-order effect. The analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) was used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, evaluating a large number of samples can help to improve sample discrimination.

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 311
1851 Analysis of Spatial and Temporal Data Using Remote Sensing Technology

Authors: Kapil Pandey, Vishnu Goyal

Abstract:

Spatial and temporal data analysis is well known in the field of satellite image processing. When spatial data are combined with time-series analysis, significant results are obtained in change detection studies. In this paper, GIS and remote sensing techniques have been used to detect change using time-series satellite imagery of Uttarakhand state during the years 1990-2010. Natural vegetation, urban area, forest cover, etc. were chosen as the main landuse classes to study. Landuse/landcover classes for several years were prepared using satellite images. The maximum likelihood supervised classification technique was adopted in this work; finally, a landuse change index was generated, and graphical models were used to present the changes.
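Maximum likelihood classification assigns each pixel to the class whose training-derived Gaussian model makes it most probable. A compact sketch on two spectral bands (the class means, spreads, and reflectance values are synthetic assumptions, not the Uttarakhand imagery):

```python
import numpy as np

# Synthetic training pixels in [red, NIR] reflectance for three landuse classes
rng = np.random.default_rng(2)
train = {
    "forest":     rng.normal([0.05, 0.45], 0.03, (200, 2)),
    "urban":      rng.normal([0.30, 0.25], 0.03, (200, 2)),
    "vegetation": rng.normal([0.10, 0.35], 0.03, (200, 2)),
}

def fit(train):
    """Estimate a multivariate Gaussian (mean, inverse covariance) per class."""
    params = {}
    for name, X in train.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params[name] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def classify(pixel, params):
    """Assign the class maximizing the Gaussian log-likelihood (equal priors assumed)."""
    def loglik(mu, icov, logdet):
        d = pixel - mu
        return -0.5 * (logdet + d @ icov @ d)
    return max(params, key=lambda k: loglik(*params[k]))

params = fit(train)
print(classify(np.array([0.06, 0.44]), params))
```

Repeating this per acquisition date yields the per-year landuse/landcover maps from which a change index can then be computed.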

Keywords: GIS, landuse/landcover, spatial and temporal data, remote sensing

Procedia PDF Downloads 420
1850 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. 
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 75
1849 Application of Fuzzy Multiple Criteria Decision Making for Flooded Risk Region Selection in Thailand

Authors: Waraporn Wimuktalop

Abstract:

This research selects regions that are vulnerable to flooding at different levels. Mathematical principles are systematically and rationally utilized as a tool to solve the region-selection problem. Therefore, the method called Multiple Criteria Decision Making (MCDM) has been chosen, using two analysis techniques: TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytic Hierarchy Process). Three criteria have been considered in this research. The first criterion is climate, represented by rainfall. The second criterion is geography, represented by the height above mean sea level. The last criterion is land utilization, covering both forest and agricultural use. The study found that the South has the highest risk of flooding, followed by the East, the Centre, the North-East, the West and the North, respectively.
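The TOPSIS step can be sketched compactly. The decision matrix, weights, and criterion directions below are illustrative assumptions (in particular, rainfall is treated as increasing risk and elevation as decreasing it); the paper's actual data and AHP-derived weights are not reproduced here:

```python
import numpy as np

# Rows: regions; columns: rainfall (mm), elevation (m), forest+agri land use (%)
matrix = np.array([
    [2400.0,  20.0, 55.0],   # South
    [1800.0,  60.0, 60.0],   # East
    [1300.0,  40.0, 50.0],   # Centre
    [1400.0, 180.0, 65.0],   # North-East
    [1500.0, 120.0, 70.0],   # West
    [1200.0, 300.0, 75.0],   # North
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])    # True: larger value -> higher flood risk

# 1. Vector-normalize each criterion, then apply weights
norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))
v = norm * weights

# 2. Ideal and anti-ideal solutions, respecting each criterion's direction
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))

# 3. Relative closeness to the ideal (higher = riskier)
d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)

regions = ["South", "East", "Centre", "North-East", "West", "North"]
for r, c in sorted(zip(regions, closeness), key=lambda p: -p[1]):
    print(f"{r:10s} {c:.3f}")
```

With these invented inputs the South ranks highest and the North lowest, matching the extremes of the reported ordering; the middle ranks depend entirely on the assumed data.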

Keywords: multiple criteria decision making, TOPSIS, analytic hierarchy process, flooding

Procedia PDF Downloads 217
1848 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or applicants’ lack of trust in the authorities. This shaky ground, on which authorities must base their subjective assessments of asylum applicants’ credibility, raises the question of whether adjudicators make the correct decision in all cases. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping among adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases, initially rejected and re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in recognition rates with regard to a number of applicant features: country of origin/nationality, identified gender, identified religion, ethnicity, whether torture was mentioned in the case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. In order to extract these features, as well as the final decision of the RAB, from the text summaries, we applied natural language processing and regular expressions adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever such claims were made. The appearance (or not) of significant variations in recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant’s country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. These features therefore appear to have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is underway.
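The prediction setup can be sketched on synthetic case records. The feature values, the outcome mechanism, and the encoder choice below are assumptions for illustration; the actual study extracts features from the RAB case summaries:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OrdinalEncoder

# Synthetic case records: four categorical features per case
rng = np.random.default_rng(3)
n = 2000
origin = rng.choice(["A", "B", "C", "D"], n)
religion = rng.choice(["r1", "r2", "r3"], n)
ethnicity = rng.choice(["e1", "e2"], n)
year = rng.integers(2010, 2020, n).astype(str)

# Synthetic outcome correlated with origin, giving the classifier some signal
p_grant = np.where(origin == "A", 0.8, np.where(origin == "B", 0.5, 0.2))
outcome = (rng.random(n) < p_grant).astype(int)

# Encode the categorical features and cross-validate a random forest
X = OrdinalEncoder().fit_transform(np.column_stack([origin, religion, ethnicity, year]))
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, outcome, cv=5)
print(scores.mean().round(3))
```

If outcomes were independent of such features, cross-validated accuracy would hover near the majority-class rate; accuracy substantially above that baseline is what makes predictability a signal worth investigating for bias.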

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 83
1847 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest effect on predicting the technical impact of patents, and its effect is positive. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicted technical impact. In predicting the social impact of patents, the number of applicants is the most critical input variable, but its effect on social impact is negative. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is the most important factor and has a positive effect. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on artificial intelligence patent data from the United States Patent and Trademark Office. Future research could consider more comprehensive sources, including artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications could be considered, as they may have an impact on the influence of patents.

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 33
1846 Non-Governmental Organisations and Human Development in Bauchi State, Nigeria

Authors: Sadeeq Launi

Abstract:

NGOs, the world over, have been recognized as part of the institutions that complement government activities in providing services to the people, particularly in respect of human development. This study examined the role played by NGOs in human development in Bauchi State, Nigeria, between 2004 and 2013. The emphasis was on the reproductive health and access-to-education roles of the selected NGOs. All the research questions, objectives and hypotheses were stated in line with these variables. The theoretical framework that guided the study was the participatory development approach. Being a survey research, data were generated from both primary and secondary sources, with questionnaires and interviews as the instruments for generating the primary data. The population of the study was made up of the staff of the selected NGOs, beneficiaries, health staff and school teachers in Bauchi State. The samples drawn from these categories were 90, 107 and 148 units respectively. Stratified random and simple random sampling techniques were adopted for NGO staff, health staff and school teachers. Data were analyzed quantitatively and qualitatively, and hypotheses were tested using the Pearson chi-square test through the SPSS statistical package. The study revealed that, despite the challenges facing NGO operations in the study area, NGOs rendered services in the areas of health and education. This research recommends, among others, that both government and the people should cooperate more with NGOs to enable them to provide more efficient and effective services. Governments at all levels should be more dedicated to increasing the accessibility and affordability of basic education and reproductive health care facilities and services in Bauchi State by committing more resources to the health and education sectors; this would support and facilitate the complementary role of NGOs in providing teaching facilities, drugs, and other reproductive health services in the State.
More enlightenment campaigns should be carried out by governments to sensitize the public, particularly women, on the need to embrace immunization programmes for their children and the antenatal care services provided by both the government and NGOs.
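The hypothesis test mentioned above can be reproduced outside SPSS. The sketch below runs a Pearson chi-square test of independence with SciPy on an invented contingency table, purely to illustrate the procedure; the counts do not come from the study.

```python
# Sketch: Pearson chi-square test of independence, as run in SPSS in the
# study, reproduced with SciPy on an illustrative contingency table.
from scipy.stats import chi2_contingency

# Rows: respondent group (e.g. NGO staff, beneficiaries); columns: perceived
# improvement in access to education (yes / no). Counts are invented.
table = [[60, 30],
         [70, 55]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```

With a 2x2 table the test has one degree of freedom; a p-value below the chosen significance level would indicate an association between respondent group and perceived improvement.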

Keywords: access to education, human development, NGOs, reproductive health

Procedia PDF Downloads 164
1845 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labour is defined as the onset of regular uterine contractions, dilation and cervical effacement between 23 and 36 weeks of gestation. To the authors' best knowledge, the factors that determine the onset of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on the above potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries; the chi-square test was applied to qualitative variables and the t-test to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package.
Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with Accuracy 0.91, Sensitivity 0.93, Specificity 0.89 and Precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, in the depth of sleep or in the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES THEN preterm" is one of the predictive rules obtained by C5.0. In this work, a model for predicting preterm births is developed. It is based on machine learning together with noise reduction techniques, and the method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits in preterm prediction.
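The rule-extraction step can be illustrated with a small decision tree. Since C5.0 is an R implementation, the sketch below substitutes scikit-learn's CART DecisionTreeClassifier; the variables and labels are synthetic stand-ins engineered to reproduce a rule of the same shape as the one reported.

```python
# Sketch: fitting a small decision tree and printing its rules, analogous to
# the C5.0 rule in the abstract. Data and labels are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 400
dilation = rng.uniform(0, 6, size=n)        # cervix dilation (cm)
devices = rng.integers(0, 2, size=n)        # electronic devices before sleep
habit_change = rng.integers(0, 2, size=n)   # change of sleeping habits
# Toy label mimicking the reported rule: low dilation together with both
# sleep-habit flags marks a preterm case.
preterm = ((dilation <= 2.95) & (devices == 1) & (habit_change == 1)).astype(int)

X = np.column_stack([dilation, devices, habit_change])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, preterm)
# Print the learned rules as nested IF/THEN conditions.
print(export_text(tree, feature_names=["dilation", "devices", "habit_change"]))
```

Because the toy label is a pure conjunction of three conditions, a depth-3 tree recovers it exactly; on real data the rules would be noisier and would need the cross-validation and noise-filtering steps the study describes.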

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 132