Search results for: random key
56 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District
Authors: Abha Gupta, Deepak K. Mishra
Abstract:
The food security issue in India has centred on the availability and accessibility of cereals, which are regarded as the only food group needed to check hunger and improve nutrition. The significance of fruits, vegetables, meat and other food products has largely been neglected, despite the fact that they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet, so that the aim of achieving food security may move from merely reducing hunger to overall health. This paper attempts to analyse how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the present paper sets out to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for all food items, and (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district of Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption and expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing consumption/expenditure by household size and by seven (the number of days in the recall period). The food variety score was estimated by assigning 0 to those food groups/items which had not been eaten and 1 to those which had been taken by households in the last seven days; the sum of all food group/item scores gave the food variety score. Diversity of diet was computed using the Herfindahl-Hirschman index. Findings of the paper show that the cereal, milk, and roots and tubers food groups contribute a major share of total consumption/expenditure. Consumption of these food groups varies across socio-economic groups, whereas fruit, vegetable, meat and other food consumption remains low and uniform. Estimation of dietary diversity shows a high concentration of diet due to the high consumption of cereals, milk, and root and tuber products, and dietary diversity varies only slightly across background groups. Muslims, Scheduled Castes, small farmers, lower income classes, food-insecure, below-poverty-line and labour families show a higher concentration of diet as compared to their counterpart groups. These groups also evince a lower mean weekly intake of the number of food items due to economic constraints and the resulting lower access to more expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity, which includes not only cereal and milk products but also nutrient-rich food items such as fruits, vegetables, meat and other products. Integrating a dietary diversity approach into the country's food security programmes would help to achieve nutrition security, as hidden hunger is widespread among the Indian population.
Keywords: dietary diversity, food security, India, socio-economic groups
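The two diet-diversity measures described in this abstract can be illustrated with a minimal sketch; the consumption figures and group names below are hypothetical, not the survey's data:

    def food_variety_score(consumed):
        # 1 point per food group/item eaten during the 7-day recall period
        return sum(1 for qty in consumed.values() if qty > 0)

    def herfindahl_hirschman(consumed):
        # sum of squared consumption shares; values near 1 indicate a diet
        # concentrated in few food groups
        total = sum(consumed.values())
        return sum((qty / total) ** 2 for qty in consumed.values())

    household = {"cereals": 5.6, "milk": 1.2, "roots_tubers": 0.9,
                 "vegetables": 0.3, "fruits": 0.0}  # per capita per day
    print(food_variety_score(household))    # 4 groups/items consumed
    print(herfindahl_hirschman(household))  # ~0.53, a cereal-dominated diet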
Procedia PDF Downloads 340
55 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study
Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha
Abstract:
Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is assumed to be pertinent for early screening and prevention of strabismus, the main objective of this study was to assess knowledge and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was done in Woreta town from April–May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate. Accordingly, the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction 'k' was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households, with a sampling fraction (k) of 21, i.e., every 21st household was approached for inclusion in the study. One individual was selected randomly from each household with more than one adult, using the lottery method, to obtain the final sample. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire which was translated from English to Amharic and back to English to maintain its consistency. Data were entered using Epi Data version 3.1, then processed and analyzed via SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, with a response rate of 94.5%. Of those who responded, 56.6% were males. Of all the participants, 36.9% were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%. It was shown that 53.9% of the respondents had a favorable attitude. Older age, higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to the eye care center for early diagnosis and treatment. The study advocates for prospective research endeavors employing qualitative study designs and additionally suggests studies that investigate cause-effect relationships.
Keywords: strabismus, knowledge, attitude, Woreta
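The sample size arithmetic described above can be reproduced with a short sketch; the ceiling-rounding convention is an assumption:

    import math

    z, p, d = 1.96, 0.5, 0.05                 # 95% confidence, 50% proportion, 5% margin
    n = math.ceil(z**2 * p * (1 - p) / d**2)  # single population proportion formula -> 385
    n_final = math.ceil(n * 1.10)             # add the 10% non-response rate -> 424
    k = 9229 // n_final                       # households / sample size -> interval of 21
    print(n, n_final, k)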
Procedia PDF Downloads 63
54 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport
Authors: Aditya Purohit, Neha Bansal
Abstract:
Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions such as traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, smart transport, etc. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. With this research gap, this paper tries to resolve the mobility issues for Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues centred on the Gujarat State Regional Transport Corporation (GSRTC) for the city of Ahmedabad by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis of a 2,700-person sample at 20 traffic junctions/nodes across the city, considering approximately a 5% sample (of passenger inflow) at each node using a stratified random sampling technique. The two scenarios are – Scenario 1: Resolving mobility issues by the use of a Special Purpose Vehicle (SPV), in a joint venture between GSRTC and private operators, to establish a feeder service that provides a transfer service for passengers moving from the inner city area to identified peripheral terminals; and Scenario 2: Augmenting existing mass transport services such as BRTS and AMTS and using them as feeder services to the identified peripheral terminals. Each of these has been analyzed for the best suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicated that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in a duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits the GSRTC buses to the city fringe area and prevents them from entering the city core. These end-terminals of GSRTC are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has resulted in complex transport access for commuters. As a further scope of research, comparing and understanding the value of access time within total travel time, its implication on the generalized cost of a trip, and how it varies city-wise may be taken up.
Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport
Procedia PDF Downloads 198
53 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates may summarize the proposed hypothesis: biological evolution as a natural life origin and development is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems of the current state of evolutionary theory, such as speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 166
52 Sustainable Recycling Practices to Reduce Health Hazards of Municipal Solid Waste in Patna, India
Authors: Anupama Singh, Papia Raj
Abstract:
Though Municipal Solid Waste (MSW) is a worldwide problem, its implications are enormous in developing countries, as they are unable to provide proper Municipal Solid Waste Management (MSWM) for the large volume of MSW. As a result, the collected waste is dumped in the open at landfill sites, while the uncollected waste remains strewn on the roadside, often clogging drainage. Such unsafe and inadequate management of MSW causes various public health hazards. For example, MSW, on direct contact or through leachate, contaminates the soil, surface water, and groundwater; open burning causes air pollution; and anaerobic digestion within the piles of MSW releases greenhouse gases, i.e., carbon dioxide and methane (CO2 and CH4), into the atmosphere. Moreover, open dumping can cause the spread of vector-borne diseases like cholera, typhoid, dysentery, and so on. Patna, the capital city of Bihar, one of the most underdeveloped provinces in India, is a unique representation of this situation. Patna has been identified as the 'garbage city'. Over the last decade, there has been an exponential increase in the quantity of MSW generated in Patna. Though a large proportion of this MSW is recyclable in nature, only a negligible portion is recycled. Plastic constitutes the major chunk of the recyclable waste. The chemical composition of plastic is versatile, consisting of toxic compounds such as plasticizers, like adipates and phthalates. Pigmented plastic is highly toxic, and it contains harmful metals such as copper, lead, chromium, cobalt, selenium, and cadmium. The human population becomes vulnerable to an array of health problems as it is exposed to these toxic chemicals multiple times a day through air, water, dust, and food. Based on an analysis of health data, it can be emphasized that in Patna there has been an increase in the incidence of specific diseases, such as diarrhoea, dysentery, acute respiratory infection (ARI), asthma, and other chronic respiratory diseases (CRD). This trend can be attributed to improper MSWM. The results were reiterated through a survey (N=127) conducted during 2014-15 in selected areas of Patna. A random sampling method of data collection was used to better understand the relationship between the different variables affecting public health due to exposure to MSW and a lack of MSWM. The results derived through bivariate and logistic regression analysis of the survey data indicate that segregation of waste at source, segregation behavior, collection bins in the area, distance of collection bins from the residential area, and transportation of MSW are the major determinants of public health issues. Sustainable recycling is a robust method for MSWM, with its primary concerns being the environment, society, and the economy. It thus ensures minimal threat to environment and ecology, consequently improving public health conditions. Hence, this paper concludes that sustainable recycling would be the most viable approach to managing MSW in Patna and would eventually reduce public health hazards.
Keywords: municipal solid waste, Patna, public health, sustainable recycling
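A minimal sketch of the kind of logistic regression analysis described above; the column names and synthetic data frame are illustrative assumptions, not the survey's data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical stand-in: one row per surveyed household (N=127)
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "segregation_at_source": rng.binomial(1, 0.4, 127),
        "distance_to_bin_m": rng.uniform(10, 500, 127),
        "reported_illness": rng.binomial(1, 0.5, 127),  # diarrhoea/ARI/CRD
    })
    X = sm.add_constant(df[["segregation_at_source", "distance_to_bin_m"]])
    result = sm.Logit(df["reported_illness"], X).fit(disp=False)
    print(np.exp(result.params))  # odds ratios for each determinant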
Procedia PDF Downloads 326
51 MEIOSIS: Museum Specimens Shed Light in Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variation in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationship between the body size of species, species' traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to have a strong correlation with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. A second dataset was generated when a custom computer vision tool was implemented for the automated morphological measurement of samples in the digitized Zürich collection. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smoother variable, species identity as a random factor, and the length of the right forewing (a proxy for body size) as the response variable, we ran a global model for a maximum period of 110 years (1900–2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis and showed a decreasing trend in body size over the years. We expect that this first output can serve as basic data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
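A minimal sketch of such a model with a year smoother and a species term, on simulated stand-in data; note that pyGAM's factor term treats species as a fixed factor, only approximating the random effect a mixed-model package would fit:

    import numpy as np
    from pygam import LinearGAM, s, f

    # Simulated stand-in: column 0 = collection year, column 1 = species code,
    # y = right forewing length in mm (all values hypothetical)
    rng = np.random.default_rng(0)
    year = rng.integers(1900, 2011, 2000)
    species = rng.integers(0, 50, 2000)
    y = 20 - 0.01 * (year - 1900) + species * 0.05 + rng.normal(0, 1, 2000)
    X = np.column_stack([year, species])

    gam = LinearGAM(s(0) + f(1)).fit(X, y)
    XX = gam.generate_X_grid(term=0)
    trend = gam.partial_dependence(term=0, X=XX)  # size trend over 1900-2010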
Procedia PDF Downloads 13
50 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors
Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal
Abstract:
Introduction: Blood components are an unexplored area prone to numerous discoveries which influence patient care. Experiments at different levels will further change the present concept of blood banking. Hyperlipidemia is a condition of elevated plasma levels of low-density lipoprotein (LDL) together with decreased plasma levels of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide. They are activated by many triggers, like elevated LDL in the blood, resulting in aggregation and the formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia and correlating the condition with other donor criteria, such as a lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, a sedentary lifestyle, and a family history of heart disease, will help in further deciding the exclusion criteria for donor selection. This will help in making patients safe as well as making the donor deferral criteria more stringent to improve the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve the guidelines for blood safety. Thus, we tried to study the correlation of hyperlipidemic platelets with platelet parameters, weight, and the specific history of the donors. Methodology: This case-control study included 100 blood samples from blood donors; of these 100, only 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. The lipid profile was measured by a fully automated analyzer (TRIGL: triglycerides; LDL-C: LDL-cholesterol plus 2nd generation; CHOL2: cholesterol Gen. 2; HDLC3: HDL-cholesterol plus 3rd generation) (Cobas c311, Roche Diagnostics), and platelet parameters were analyzed by the Sysmex KX-21 automated hematology analyzer. Results: A significant correlation was found with hyperlipidemic levels in single-time donors, among whom 80% had a history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% were alcoholic, and 63.33% had taken a lipid-rich diet. Active physical activity was found among 40% of donors. We divided the donor samples into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were normal in 75% and abnormal in 25% of donors weighing over 70 kg, while among donors weighing 50-70 kg, 90% were normal and 10% abnormal. In group 2 (non-hyperlipidemic samples), platelet parameters were normal in 95% and abnormal in 5% of donors weighing over 70 kg, while among donors weighing 50-70 kg, 66.66% were normal and 33.33% abnormal. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be distinguished by history: weight, smoking, alcohol intake, sedentary lifestyle, active physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. However, further studies on a larger sample size are needed to affirm this finding.
Keywords: blood donors, hyperlipidemia, platelet, weight
Procedia PDF Downloads 315
49 Extremism among College and High School Students in Moscow: Diagnostics Features
Authors: Puzanova Zhanna Vasilyevna, Larina Tatiana Igorevna, Tertyshnikova Anastasia Gennadyevna
Abstract:
In this day and age, extremism, in various forms of its manifestation, is a real threat to the world community, to the national security of a state and its territorial integrity, and to the constitutional rights and freedoms of citizens. Extremism, as is known, is described in general terms as a commitment to extreme views and actions, radically denying the existing social norms and rules. Supporters of extremism in ideological and political struggles often adopt the methods and means of psychological warfare, appealing not to reason and logical arguments but to the emotions and instincts of people, to prejudices, biases, and a variety of mythological constructs. They are dissatisfied with the established order and aim at increasing this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015, we conducted a survey among Moscow college and high school students. The aim of this study was to determine how great or small the difference is in the understanding of and attitudes towards manifestations of extremism, in the inclination and readiness to take part in extremist activities, and what causes this predisposition, if it exists. We performed a multivariate analysis and established Russian college and high school students' opinions about the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included college and high school students in Moscow (642 and 382, respectively), chosen by the method of random selection. The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the technique of incomplete sentences) and the standard test by S. Dayhoff to determine the level of internal aggressiveness. As an experiment, the techniques of FACS and SPAFF were also used to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than a few years ago. Differences were found between the two age groups of young people in the understanding of, and respect for, such social phenomena as extremism and terrorism, and in their perceived danger and appeal. The theory of psychotypes, SPAFF (specific affect coding system) and FACS (facial action coding system) are considered as additional techniques for diagnosing a tendency towards extreme views. Thus, it is established that the diagnostics of the acceptance of extreme views among young people is possible thanks to the simultaneous use of knowledge from the different fields of the socio-humanistic sciences. The results of the research can be used in a comparative context with other countries and as a starting point for further research in the field, taking into account its extreme relevance.
Keywords: extremism, youth extremism, diagnostics of extremist manifestations, forecast of behavior, sociological polls, theory of psychotypes, FACS, SPAFF
Procedia PDF Downloads 339
48 Low- and High-Temperature Methods of CNTs Synthesis for Medicine
Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza
Abstract:
One of the most promising areas for carbon nanotube (CNT) application is medicine, and one of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug, and it is possible to use electromagnetic waves to destroy cancer cells by means of carbon nanotubes. In our research, we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio-frequency electromagnetic field. Properly functionalized Fe-CNTs attach to the cancer cells. Heat generated in the nanoparticles connected to the nanotubes warms up the nanotubes and then the target tissue. When the temperature in the tumor tissue exceeds 316 K, necrosis of cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods where an arc discharge is applied. Low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc discharge system, the plasma reactor works with a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods. Carbon vapor moves from the anode through a short arc column and forms CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. A disadvantage of high-temperature methods is low purity and the short length, random size, and multi-directional distribution of the product. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux, containing carbon and ferromagnetic elements, then goes into the quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls. It is also possible to use substrates to improve carbon nanotube growth. The HPCVD system involves both the chemical decomposition of carbon-containing gases and the vaporization of a solid or liquid source of catalyst. In this system, a tube furnace is applied. A mixture of working and carbon-containing gases goes through a quartz tube placed inside the furnace. Ferrocene vapors can be used as a catalyst. Fe-CNTs may then be collected either from the quartz tube walls or on the substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, the carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated to cancer cell thermal ablation additionally need to be treated with acids to amplify defects on the CNT surface, which facilitates biofunctionalization. The application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer for the next decade. Acknowledgment: The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.
Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation
Procedia PDF Downloads 450
47 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences
Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter
Abstract:
For many decades, scholars have aimed to understand how work can become more meaningful, both by maximizing potential and by enhancing feelings of satisfaction. One of the largest contributions towards such positive psychology was made with the introduction of the concept of 'flow,' which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is obtained, given the notion that flow is considered a short-term, complex, and dynamic experience. Most research neglects the fact that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers' cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information processing tasks) requires workers to concentrate deeply, as a result leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of at the office) or in a closed office (rather than in an open-plan office) impacts employees' overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction by combining cognitive task characteristics and work environments among part-time teleworkers. Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted that uses a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers' flow. More precisely, about 150 knowledge workers will fill in multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that, at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity for workers performing information processing tasks both at home and in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by looking at task and environmental predictors that enable workers to obtain such a peak state. While doing so, our findings suggest that practitioners should strive for an ideal alignment between tasks and work locations so that people work with both deep involvement and gratification.
Keywords: cognitive work, office lay-out, work location, work-related flow
Procedia PDF Downloads 102
46 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into mechanisms for the reduction of carbon emissions. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies 'what is where?' as a precursor to the quantification of carbon stock in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) that are to be used in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels that allows efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on the vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season, in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems and qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
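A minimal sketch of the two classification stages described above, on synthetic stand-in data; the array shapes and parameters are illustrative assumptions, not the project's actual pipeline:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    # ndvi: (n_pixels, n_timesteps) NDVI time series read from the data cube
    rng = np.random.default_rng(0)
    ndvi = rng.uniform(0.1, 0.8, size=(10000, 108))  # e.g. monthly, 2010-2018

    # Stage 1 - unsupervised stratification into 15 vegetation strata
    strata = KMeans(n_clusters=15, random_state=0).fit_predict(ndvi)

    # Stage 2 - supervised mapping once field labels exist for training pixels
    train_idx = rng.choice(len(ndvi), 500, replace=False)
    labels = rng.integers(0, 4, 500)                 # field-observed system types
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(ndvi[train_idx], labels)
    agroforestry_map = rf.predict(ndvi)              # per-pixel system class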
Procedia PDF Downloads 146
45 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City
Authors: Lia Mania, Ketevan Nanobashvili
Abstract:
Background: The coronavirus disease COVID-19 has become the cause of a global health crisis during the current pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi (the capital of Georgia) among 7- to 12-year-old PCR- or rapid-test-confirmed post-COVID populations in all districts of Tbilisi (10 districts in total). 332 beneficiaries who had been infected with COVID within one year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection, with simple random selection within the selected clusters. According to this principle, an equal number of beneficiaries was selected in all districts of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of data was done with SPSS-23. Risk factors were estimated by odds ratios and logistic regression with 95% confidence intervals. Results: Statistically reliable differences between the means of oral health indicators in the asymptomatic and symptomatic COVID-infected groups were: for caries intensity (DMF+def), t=4.468 and p=0.000; for the modified gingival index (MGI), t=3.048, p=0.002; for the simplified oral hygiene index (S-OHI), t=4.853, p=0.000. Symptomatic COVID infection has a reliable effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermalis) (n=332; 77.3% vs n=332; 58.0%; OR=2.46, 95% CI: 1.318-4.617). According to the logistic regression, it was found that the severity of the COVID infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity, B=0.903, AOR=2.467 (95% CI: 1.318-4.617). Symptomatic COVID infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth-brushing behaviors, carbohydrate meals, and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth-brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, such a risk factor as symptomatic ongoing COVID infection affected the oral microbiome in terms of abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermalis) and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID infection and microbiome changes in the post-COVID period, and also between the variables of oral health indicators and the symptomatic course of COVID infection.
Keywords: oral microbiome, COVID-19, population based research, oral health indicators
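As a consistency check on the reported coefficients (standard logistic regression arithmetic, not an additional result from the paper), the adjusted odds ratio is the exponential of the regression coefficient:

    \mathrm{AOR} = e^{B} = e^{0.903} \approx 2.467

which matches the reported AOR of 2.467.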
Procedia PDF Downloads 70
44 EEG and DC-Potential Level Changes in the Elderly
Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova
Abstract:
In the modern world, the number of elderly people is increasing, and the preservation of the functional capacity of the organism in the elderly is becoming very important. During aging, higher cortical functions such as feeling, perception, attention, memory, and ideation gradually decline. This is expressed in a reduced rate of information processing, a loss of working ('random access') memory volume, and a decreased ability to learn and retain new information. Perspective directions in studying the neurophysiological parameters of aging are brain imaging: computer electroencephalography, neuroenergy mapping of the brain, and also methods of studying neurodynamic brain processes. Research aim – to study the features of brain aging in elderly people by means of the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who did not have psychiatric disorders or chronic conditions in a stage of decompensation. EEG was recorded with a 128-channel GES-300 system (USA). EEG recordings were collected while the participants sat at rest with their eyes closed for 3 minutes. For a quantitative assessment of the EEG we used spectral analysis. The spectrum was analyzed in the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz) and beta-2 (16.5–20.0 Hz) ranges. In each frequency range, spectral power was estimated. The 12-channel hardware-software diagnostic 'Neuroenergometr-KM' complex was applied for the registration, processing and analysis of the brain constant (DC) potential level. The DC-potential level was registered in monopolar leads. It was revealed that the EEG of elderly people shows higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges, especially in frontal areas, with aging. The comparative analysis showed that elderly people aged 60-64 have higher values of spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05) and also higher values in the beta-1 range in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain constant potential level revealed an increase in total energy consumption in the main areas of the brain. In frontal leads we registered the lowest values of the constant potential level; perhaps this indicates a decrease in energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential difference across the main leads testifies to an uneven lateralization of brain functions in elderly people, and the potential difference between the right and left hemispheres testifies to a prevalence of left-hemisphere activity. Thus, higher levels of functional activity of the cerebral cortex are characteristic of people of early advanced age (60-64 years), which points to higher reserve capacities of the central nervous system. By age 70, age-related changes in cerebral energy exchange and in the level of brain electrogenesis appear, which reflect a deterioration in the condition of the homeostatic mechanisms of self-regulation and of the ongoing processing of perceptual data.
Keywords: brain, DC-potential level, EEG, elderly people
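A minimal sketch of the band-wise spectral power estimation described above; the sampling rate and the synthetic signal are assumptions standing in for one EEG channel:

    import numpy as np
    from scipy.signal import welch

    fs = 250                             # Hz, assumed sampling rate
    eeg = np.random.randn(3 * 60 * fs)   # stand-in for a 3-minute recording
    bands = {"delta": (0.5, 3.5), "theta": (3.5, 7.0), "alpha1": (7.0, 11.0),
             "alpha2": (11.0, 13.0), "beta1": (13.0, 16.5), "beta2": (16.5, 20.0)}

    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # power spectral density
    power = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over band
    print(power)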
Procedia PDF Downloads 486
43 Recognizing Human Actions by Multi-Layer Growing Grid Architecture
Authors: Z. Gharaee
Abstract:
Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to recognize the action correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments in which subjects observe actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies some heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by concatenating the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization. This grid creates the clusters representing the actions. Finally, a one-layer neural network trained by the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained on several random selections of the data, with on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparative analysis between the growing grid architecture and a self-organizing map (SOM) architecture, in terms of accuracy and learning speed, shows that the growing grid architecture is superior to the SOM architecture on the action recognition task. The SOM architecture completes learning the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1,200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on the generalization test data. In summary, using growing grid networks preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, the ability to learn without supervision, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is a high representational demand.
Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance
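The final labelling stage is the simplest piece of the architecture to sketch; here is a minimal delta-rule layer, with dimensions and example vectors that are hypothetical, not the paper's configuration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_actions, lr = 64, 11, 0.1      # 11 MSR-Action3D classes
    W = rng.normal(scale=0.01, size=(n_actions, n_inputs))

    def train_step(x, target):
        # x: activation vector from the second-layer grid; target: one-hot label
        global W
        y = W @ x
        W += lr * np.outer(target - y, x)      # delta rule: w += lr*(t - y)*x
        return y

    x = rng.random(n_inputs)
    train_step(x, np.eye(n_actions)[3])        # one training update
    predicted = int(np.argmax(W @ x))          # winning action category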
Procedia PDF Downloads 158
42 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland
Authors: Raptis Sotirios
Abstract:
Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and they also suffer from unplanned spending, negatively impacted by the global financial crisis. Data-driven approaches can help to improve policies and to plan and design service provision schedules, using algorithms to assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses service packing, using statistical significance tests and machine learning (ML) to evaluate the similarity and coupling of demands. This is achieved by predicting the range of a demand (its class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t significance tests are used on data over a 39-year span for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume, as a null hypothesis, that the target service's demands are statistically dependent on other demands. This linkage can be confirmed or not by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the prediction statistically meaningful as well, and prove that a target service can be matched reliably to other services, while ML shows that these indicated relationships can also be linear ones. Zero padding was used for missing years' records; it better illustrated such relationships, both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics, as well as the Chi-squared and Student statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and the behavior under different learning ratios. The impact of k-NN and of cross-correlation and C-Means first groupings is also studied, over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC=0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be well packed as plans over limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
Keywords: class, cohorts, data frames, grouping, prediction, probability, services
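A minimal sketch of the test-then-predict pipeline described above, on synthetic stand-in data; the variable names, binning, and thresholds are assumptions, not the study's Scottish dataset:

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    candidate = rng.poisson(50, 39).astype(float)   # 39 years of one demand
    target = (candidate + rng.normal(0, 5, 39) > 50).astype(int)  # demand class

    # Statistical association between the binned candidate demand and the target
    table = np.histogram2d(candidate, target, bins=(3, 2))[0]
    chi2_stat, p_val, _, _ = chi2_contingency(table)
    print(f"chi-squared p = {p_val:.3f}")

    # ML prediction of the target class from the associated demand
    X = candidate.reshape(-1, 1)
    Xtr, Xte, ytr, yte = train_test_split(X, target, test_size=0.3, random_state=0)
    for model in (RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000)):
        proba = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
        print(type(model).__name__, roc_auc_score(yte, proba))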
Procedia PDF Downloads 236
41 Cyber-Victimization among Higher Education Students as Related to Academic and Personal Factors
Authors: T. Heiman, D. Olenik-Shemesh
Abstract:
Over the past decade, with the rapid growth of electronic communication, the internet and, in particular, social networking have become an inseparable part of people's daily lives. Along with their benefits, a new type of online aggression has emerged, defined as cyber-bullying, a form of interpersonal aggressive behavior that takes place through electronic means. Cyber-bullying is characterized by repetitive behavior over time of maladaptive authority and power usage using computers and cell phones via the sending of insulting messages and hurtful pictures. Preliminary findings suggest that the prevalence of involvement in cyber-bullying among higher education students varies between 10 and 35%. To date, universities are facing an uphill effort in trying to restrain online misbehavior. As no studies have examined the relationships of cyber-bullying involvement with personal aspects and its impact on academic achievement and work functioning, the present study examined the nature of cyber-bullying involvement among 1,052 undergraduate students (mean age = 27.25, S.D. = 4.81; 66.2% female), their coping with it, and the effects of social support, perceived self-efficacy, well-being, and body perception in relation to cyber-victimization. We assume that students in higher education are a vulnerable population at high risk of being cyber-victims. We hypothesize that social support might serve as a protective factor and will moderate the relationships between the socio-emotional variables and the occurrence of cyber-victimization. The findings of this study will present the relationships between cyber-victimization and the social-emotional aspects, which constitute risk and protective factors. After receiving approval from the Ethics Committee of the University, a Google Drive questionnaire was sent to a random sample of students studying in the various university study centers. Students' participation was voluntary, and they completed the five questionnaires anonymously: cyber-bullying, perceived self-efficacy, subjective well-being, social support, and body perception. Results revealed that 11.6% of the students reported being cyber-victims during the last year. Examining the emotional and behavioral reactions to cyber-victimization revealed that female emotional and behavioral reactions were significantly greater than the male reactions (p < .001). Moreover, females reported significantly higher social support compared to men; males reported significantly lower social capability than females; and men's body perception was significantly more positive than women's. No gender differences were observed for the subjective well-being scale. Significant positive correlations were found between cyber-victimization and fewer friends, lower grades, and work ineffectiveness (r = 0.37–0.40, p < 0.001). The results of the hierarchical regression indicated that cyber-victimization can be significantly predicted by lower social support, lower body perception, and gender (female), which together explained 5.6% of the variance (R² = 0.056, F(5,1047) = 12.47, p < 0.001). The findings deepen our understanding of students' involvement in cyber-bullying and present the relationships of the social-emotional and academic aspects with cyber-victimization. In view of our findings, higher education policy could help facilitate coping with cyber-bullying incidents, and student support units could develop intervention programs aimed at reducing cyber-bullying and its impacts.
Keywords: academic and personal factors, cyber-victimization, social support, higher education
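A minimal sketch of the hierarchical regression reported above, on synthetic data; the column names, step ordering, and effect sizes are assumptions for illustration only:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1052
    df = pd.DataFrame({
        "gender_female": rng.binomial(1, 0.66, n).astype(float),
        "social_support": rng.normal(0, 1, n),
        "body_perception": rng.normal(0, 1, n),
        "self_efficacy": rng.normal(0, 1, n),
        "well_being": rng.normal(0, 1, n),
    })
    df["victimization"] = (0.1 * df["gender_female"] - 0.15 * df["social_support"]
                           - 0.1 * df["body_perception"] + rng.normal(0, 1, n))

    step1 = sm.OLS(df["victimization"],
                   sm.add_constant(df[["gender_female"]])).fit()
    step2 = sm.OLS(df["victimization"],
                   sm.add_constant(df.drop(columns="victimization"))).fit()
    print(step1.rsquared, step2.rsquared)  # R-squared change across the steps
    print(step2.fvalue, step2.f_pvalue)    # overall F-test of the full model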
Procedia PDF Downloads 289
40 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps
Authors: Daniela Stephanie Kuenstle
Abstract:
Complex industries such as banking or insurance experience slow growth in mobile sales. While today’s mobile applications are sophisticated and enable location based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the provided service within transactional mobile applications (apps) does not adequately correspond to users’ needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which includes Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this thesis discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts the overall m-service success, represented by purchase intention, loyalty intention and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service types and urgency level. They were exposed to two different urgency scenarios (high/ low) and two different app versions conveying either technology mediated or technology generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables. The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of main effects, but demonstrated no significant moderation of urgency on m-service types. The analysis of the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings propose an additive effects model with the highest purchase and loyalty intention for Technology Generated Service and high urgency, while Technology Mediated Service and low urgency demonstrate the strongest effect for service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although a confirmation of the significance of this finding is required, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency. This suggests a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. 
Finally, the present study calls for continued research into the impact of situational variables on m-service design in order to develop more sophisticated m-services. Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity
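To make the 2x2 between-subjects design concrete, the following sketch reproduces the reported analysis pattern (significant main effects, no interaction) on synthetic data. It is not the study's code: the variable names, cell means, and per-cell sample size are assumptions chosen for illustration.

```python
# Illustrative 2x2 between-subjects design: m-service type x urgency,
# analyzed with a two-way ANOVA on a service-success measure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_cell = 80  # illustrative; the study used N=1311 across four groups

rows = []
for service_type in ("mediated", "generated"):
    for urgency in ("low", "high"):
        # Additive data-generating process, no interaction term,
        # mirroring the additive-effects pattern reported above.
        mu = 4.0 + (0.5 if service_type == "generated" else 0.0) \
                 + (0.3 if urgency == "high" else 0.0)
        for y in rng.normal(mu, 1.0, n_per_cell):
            rows.append({"service_type": service_type, "urgency": urgency,
                         "purchase_intention": y})
df = pd.DataFrame(rows)

model = smf.ols("purchase_intention ~ C(service_type) * C(urgency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects significant, interaction not
```

With an additive data-generating process like this, the interaction row of the ANOVA table stays non-significant, matching the result reported in the abstract.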
Procedia PDF Downloads 26839 An Anthropological Insight into Farming Practices and Cultural Life of Farmers in Sarawan Village, District Faridkot, Punjab
Authors: Amandeep Kaur
Abstract:
Farming is one of the most influential traditions; it started around 10,000 BC and has revolutionized human civilization. It is believed that farming originated independently at separate locations, so it has had a great impact on local cultures, which in turn gave rise to diversified farming practices. Farming activities are influenced by the culture of a particular region or community, as local people have their own knowledge and belief systems about soil and crops. With the inception of the Green Revolution, a 'high-tech machinery model', in Punjab, various traditional farming methods and techniques changed. The present research concentrates on the local knowledge of farmers and local farming systems from an anthropological perspective. In view of the prevailing agrarian crisis in Punjab, this research focuses on farmers’ experiences and their perceptions regarding farming practices. Thus, an attempt has been made to focus on the local knowledge, perceptions, and experiences of farmers for eco-friendly and sustainable agricultural development. Farmers’ voices are used to understand the relationship between farming practices and the socio-cultural life of farmers in Faridkot district, Punjab. The research aims to comprehend the nature of the changes taking place in the socio-cultural life of people with the development of capitalism and agricultural modernization. The study is based on qualitative methods of ethnography in Sarawan village of Faridkot district. Inferences drawn from in-depth case studies of 60 agricultural households shed light on the processes of diffusion, innovation, and adoption of farming technology and crop varieties, and on the dissemination of agricultural skills regarding various cultural farming practices. The data are based on random sampling; the respondents were both males and females above the age of 18 years, to attain a holistic understanding across generations. Quasi-participant observation related to lifestyle, standard of living, and the various farming practices performed was carried out. Narratives derived from the fieldwork depict that farmers usually oppose the restrictions imposed by the government on certain farming practices, especially the ban on stubble burning. This paper presents the narratives of farmers regarding the dissemination of awareness about the use of new varieties of seeds, technology, fertilizers, pesticides, etc. The study reveals that farming systems have developed in ways reflecting the activities and choices of farmers influenced by environmental, socio-cultural, economic, and political situations. Modern farming practices have forced small farmers into debt, as farmers feel pride in buying new machinery. They have also led to the loss of work culture and excessive use of drugs among youngsters. Laborers, too, are reluctant to work the land with cultivating farmers, primarily for social and political reasons. Due to the lack of proper crop marketing, the wheat-rice cycle persists in Punjab instead of crop diversification. Change in the farming system also affects the social structure of society. Agricultural modernization has commercialized socio-cultural relations in Punjab and is slowly urbanizing the rural landscape, transforming traditional social relations into capitalistic ones. Keywords: agricultural modernization, capitalism, farming practices, narratives
Procedia PDF Downloads 15038 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression (PLSR), extra tree regression (ETR), random forest regression (RFR), extreme gradient boosting (XGBoost), and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features.
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood, for glucose monitoring. Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
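As an illustration of the repeated random train/test protocol described above, here is a minimal Python sketch. The spectra and glucose targets are synthetic stand-ins, and the model hyperparameters are assumptions; only the overall procedure (ten random splits, several of the named regressors, R²-style scoring) mirrors the abstract.

```python
# Illustrative comparison of regressors for glucose prediction from NIR
# spectra over ten random train/test splits; data are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 200))  # 300 spectra, 200 wavelength channels
# Toy target: a linear combination of two channels plus noise (mg/dl-like)
glucose = X[:, 50] * 40 + X[:, 120] * 25 + rng.normal(0, 2, 300)

models = {
    "SVMR": SVR(kernel="rbf", C=10.0),
    "ETR": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
    "PLSR": PLSRegression(n_components=10),
}
for name, model in models.items():
    scores = []
    for split in range(10):  # repeated splits to estimate generalization
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, glucose, test_size=0.2, random_state=split)
        model.fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te)))
    print(f"{name}: mean R^2 = {np.mean(scores):.3f}")
```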
Procedia PDF Downloads 9537 The Role of Temples Redevelopment for Informal Sector Business Development in India
Authors: Prashant Gupta
Abstract:
Throughout India, temples have served as cultural centers, commerce hubs, art galleries, educational institutions, and social centers, in addition to being places of worship, for centuries. Across the country, there are over two million temples, which are crucial economic hubs attracting devotees and tourists worldwide. In India, there are 53 temples per 100,000 Indians. As per an NSSO survey of major temples, the temple economy is worth about $40 billion, or 2.32 per cent of GDP, and this covers only the formal sector; it could be much larger, as an actual estimation has not been done yet. In India, the informal sector accounts for 43.1% of the total economy. Over 10 billion domestic tourist visits to new destinations take place every year within India. Some 20 per cent of the 90 million foreign tourists visited the Madurai and Mahabalipuram temples, which became the most visited tourist spots in 2022. Recently, the central government has started revitalizing ancient Indian civilization by reconstructing and beautifying the major temples of India, i.e., the Kashi Vishwanath Corridor, Mahakaleshwara Temple, Kedarnath, Ayodhya, etc. The researcher chose Kashi as a case study because it is known as the Spiritual Capital of India and is also the abode for the spread of Hinduism, Buddhism, Jainism, and Sikhism, which are core Sanatan Dharmic practices. INR 17,800 million has been spent to redevelop the Kashi Vishwanath Corridor since 2019. RESEARCH OBJECTIVES: 1. To assess the historical contribution of temples to socio-economic development and the revival of Indic civilization. 2. To examine the role of temple redevelopment in informal sector business development. 3. To identify the sub-sectors of informal sector businesses. 4. To identify the products and services of informal businesses for the investigation of marketing strategies and business development. PROPOSED METHODS AND PROCEDURES: This study will follow a mixed approach, employing both qualitative and quantitative methods of research. To conduct the study, data will be collected from 500 informal business owners through structured questionnaire and interview instruments. The informal business owners will be selected using a systematic random sampling technique. In addition, government documents on the last 10 years of tax collection will be reviewed to substantiate the study. To analyze the data, descriptive and econometric analysis techniques will be employed. EXPECTED CONTRIBUTION OF THE PROPOSED STUDY: By studying the contribution of temple redevelopment to informal business creation and growth, the study will benefit both informal business owners and the government. For the government, it will provide scientific and empirical evidence on the contribution of temple redevelopment to informal business creation and growth, supporting evidence-based infrastructural development and improved tax collection. For informal businesses, the study will give detailed insight into the nature of their business, its future growth potential, and the alternative products and services they could supply to their customers. Studying informal businesses will also help to identify the key products and services which are most profitable and have the potential to multiply and grow through the right marketing strategies and business development. Keywords: business development, informal sector businesses, services and products marketing, temple economics
Procedia PDF Downloads 8336 Microsimulation of Potential Crashes as a Road Safety Indicator
Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale
Abstract:
Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performances, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a base for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes take place with roadside objects or obstacles. Only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered "simulation-based". The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be mapped for the test network after applying a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results do not depend on the different energy thresholds and parameters of the procedure. This paper introduces this new procedure and its implementation in the form of a software package able to assess road safety, also considering potential conflicts with roadside objects. Some of the principles at the base of this specific model are discussed. The procedure can be applied with common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known.
The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameters with the best potential for giving an accurate evaluation of the risk of any traffic scenario. Keywords: road safety, traffic, traffic safety, traffic simulation
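A minimal sketch of the perturbation idea may help: jitter a simulated trajectory many times, detect potential impacts with a roadside object, and accumulate impact energy. All geometry, masses, and noise parameters below are invented for illustration and are not the paper's calibrated values.

```python
# Monte Carlo perturbation of a nominal trajectory to count potential
# crashes with a roadside obstacle and score them by kinetic energy.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                                   # time step, s
t = np.arange(0, 10, dt)
x = 15.0 * t                               # nominal trajectory: 15 m/s along x
obstacle = np.array([80.0, 1.5])           # roadside object 1.5 m off the lane
mass, radius = 1200.0, 1.0                 # vehicle mass (kg), collision radius (m)

total_energy, n_crashes = 0.0, 0
for _ in range(1000):                      # random perturbations of the trajectory
    lateral = np.cumsum(rng.normal(0, 0.05, t.size))   # cumulative lateral drift
    traj = np.column_stack([x, lateral])
    d = np.linalg.norm(traj - obstacle, axis=1)
    hit = np.argmax(d < radius) if (d < radius).any() else -1
    if hit > 0:
        v = np.linalg.norm(traj[hit] - traj[hit - 1]) / dt  # speed at impact, m/s
        total_energy += 0.5 * mass * v**2                   # kinetic energy, J
        n_crashes += 1

print(f"potential crashes: {n_crashes}, mean impact energy: "
      f"{total_energy / max(n_crashes, 1):.0f} J")
```

Summing such energies per square meter of road, as the abstract describes, then yields a map on which a threshold highlights the most dangerous points.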
Procedia PDF Downloads 13535 On the Bias and Predictability of Asylum Cases
Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats
Abstract:
An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or applicants' lack of trust in the authorities. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether adjudicators always make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether it was supported or not, and the year the applicant entered Denmark. In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever those were made in the case. The appearance (or not) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant’s country of origin, religion, ethnicity, and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively, suggesting that these features carry potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is under way. Keywords: asylum adjudications, automated decision-making, machine learning, text mining
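The feature-extraction-plus-classification pipeline can be sketched as follows. The regular expressions, field names, and two-line toy corpus are illustrative assumptions, not the authors' Danish-language patterns or data.

```python
# Regex-based feature extraction from case summaries, followed by a
# random-forest prediction of the outcome (toy, perfectly separable data).
import re
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

summaries = [
    ("Applicant from Eritrea ... torture claim supported ... entered 2015 ... granted", 1),
    ("Applicant from Iran ... entered 2018 ... rejection upheld", 0),
] * 50  # toy corpus; the study used ~8,000 RAB summaries

def extract(text):
    # Each feature mimics one of the abstract's variables (torture mention,
    # year of entry, country of origin); patterns are invented placeholders.
    return {
        "torture_mentioned": int(bool(re.search(r"torture", text, re.I))),
        "entry_year": int(m.group()) if (m := re.search(r"\b(19|20)\d{2}\b", text)) else 0,
        "from_eritrea": int("Eritrea" in text),
        "from_iran": int("Iran" in text),
    }

X = pd.DataFrame([extract(s) for s, _ in summaries])
y = [label for _, label in summaries]
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # the paper reports up to 82-85%
```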
Procedia PDF Downloads 9634 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis
Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino
Abstract:
A growing body of empirical research from different areas of inquiry suggests that brief contact with natural environments restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people recover cognitive resources. It assumes that contact with nature allows people to free (and then recover) voluntary attention resources and thus allows them to recover from cognitive fatigue. However, it has been suggested that some people could derive more cognitive benefit from exposure to urban environments. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. This meta-analysis intended to estimate how much more restorative natural environments (forests, parks, boulevards) are perceived to be than urban ones (i.e., the magnitude of the difference in perceived restorativeness). Moreover, given the methodological differences between studies, it examined the potential role of moderator variables such as participants (students or others), instrument used (Perceived Restorativeness Scale or other), and procedure (in the laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). Reference sections of obtained papers were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison of one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis. To estimate the average effect size, a random effects model (restricted maximum-likelihood estimator) was used, because the studies included in the meta-analysis were conducted independently, using different methods in different populations, so no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed effects models) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming from an empirical point of view what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared to those commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and between-study variability was very high (I² [CI] = 96.97% [94.61-98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the variability between studies. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain this variability. Alternatively, between-study variability could be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude).
Further moderator analyses are in progress. Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments
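For readers unfamiliar with the pooling step, the following sketch computes a random-effects summary effect on invented effect sizes. For brevity it uses the DerSimonian-Laird estimator of the between-study variance rather than the restricted maximum-likelihood estimator reported above.

```python
# Random-effects pooling of standardized mean differences (d) with the
# DerSimonian-Laird tau^2 estimator; effect sizes here are invented.
import numpy as np

d = np.array([2.4, 1.1, 2.9, 0.8, 2.2])       # per-study effect sizes (toy)
v = np.array([0.10, 0.08, 0.15, 0.05, 0.12])  # per-study sampling variances

w_fixed = 1.0 / v                              # fixed-effect weights
d_fixed = np.sum(w_fixed * d) / w_fixed.sum()
Q = np.sum(w_fixed * (d - d_fixed) ** 2)       # Cochran's Q heterogeneity statistic
df = len(d) - 1
c = w_fixed.sum() - np.sum(w_fixed**2) / w_fixed.sum()
tau2 = max(0.0, (Q - df) / c)                  # between-study variance (DL)

w = 1.0 / (v + tau2)                           # random-effects weights
d_pooled = np.sum(w * d) / w.sum()             # pooled effect size
i2 = max(0.0, (Q - df) / Q) * 100              # I^2 heterogeneity, %

print(f"pooled d = {d_pooled:.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
```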
Procedia PDF Downloads 17033 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, Tension Control of a Winding Process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, Friction Force Control of a Micro-Pullwinding Process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a previous step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). The modeling of the error behavior using explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This has relevance for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820. Keywords: data model, machine learning, industrial winding, calibration
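The calibration loop lends itself to a short sketch: fit a surrogate model, draw Gaussian candidate inputs from each input's mean and standard deviation, and keep the candidate whose predicted output is closest to the target. The surrogate, process function, and target value below are invented stand-ins for the trained winding models.

```python
# Gaussian random-number calibration against a trained surrogate model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
X_train = rng.uniform(0, 1, size=(500, 3))   # three hypothetical process inputs
# Invented process response standing in for measured tension/friction
y_train = 3 * X_train[:, 0] - X_train[:, 1] ** 2 + 0.5 * X_train[:, 2]
model = KernelRidge(kernel="rbf", alpha=1e-3).fit(X_train, y_train)

target = 1.8                                  # desired output value
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
candidates = rng.normal(mu, sigma, size=(20000, 3))  # Gaussian calibration draws
pred = model.predict(candidates)
best = np.argmin(np.abs(pred - target))       # candidate closest to the target
print("best inputs:", candidates[best], "predicted output:", pred[best])
```

As the abstract notes, heuristics could guide these Gaussian draws instead of sampling blindly, further speeding up the calibration.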
Procedia PDF Downloads 24232 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti, and others. Heterophonic syntax has its own history of growth, that is, a succession of different concepts and writing techniques. The trajectory of the settling of this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations, and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, following the ‘classic’ pathway of the writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial, or synthetic vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of a (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer would develop practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the complexity level achieved by this type of vertical syntax in twentieth-century music. Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 34531 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive sources of artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents. Keywords: patent influence, interpretable machine learning, predictive models, SHAP
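A hedged sketch of the modeling-plus-explanation step is given below: a LightGBM regressor on early indicators, ranked by mean absolute SHAP values. The feature names follow the abstract, but the data and hyperparameters are synthetic placeholders rather than the paper's setup.

```python
# LightGBM regression on early patent indicators, explained with SHAP.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

rng = np.random.default_rng(3)
n = 2000
X = pd.DataFrame({
    "novelty": rng.uniform(0, 1, n),
    "n_owners": rng.integers(1, 6, n),
    "n_backward_citations": rng.integers(0, 50, n),
    "n_independent_claims": rng.integers(1, 10, n),
    "n_applicants": rng.integers(1, 5, n),
})
# Toy target loosely mimicking the reported direction of effects
y = (2.0 * X["novelty"] + 0.1 * X["n_backward_citations"]
     + 0.3 * X["n_independent_claims"] + rng.normal(0, 0.5, n))

model = lgb.LGBMRegressor(n_estimators=400, learning_rate=0.05).fit(X, y)
explainer = shap.TreeExplainer(model)          # tree-model SHAP explainer
shap_values = explainer.shap_values(X)         # per-feature contributions
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))    # mean |SHAP| feature ranking
```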
Procedia PDF Downloads 5030 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach
Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz
Abstract:
Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm birth is defined as the onset of regular uterine contractions, dilation, and cervical effacement between 23 and 36 weeks of gestation. To the authors' best knowledge, the factors that determine the onset of birth are not completely defined yet. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict premature delivery based on potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained through a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal, and sleep habits). The study was approved by the Research Ethics Committee of the Principality of Asturias (Spain). An observational, retrospective, and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries. The chi-square test was applied for qualitative variables and the t-test for quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes, and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested in search of a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest, and tree bag models were analysed using the caret R package. Ten-fold cross-validation and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with Accuracy 0.91, Sensitivity 0.93, Specificity 0.89, and Precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes, and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or changes in sleeping habits reflected in the number of hours, the depth of sleep, or the lighting of the room. 'IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm' is one of the predictive rules obtained by C5.0. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction. Keywords: machine learning, noise reduction, preterm birth, sleep habit
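The study fitted C5.0 and related tree models with the caret R package; as a rough Python stand-in, the sketch below trains a CART decision tree on synthetic data built around a few of the named predictors and prints the learned rules for comparison with the quoted IF-THEN rule. Variable names and the data-generating rule are assumptions.

```python
# CART decision tree as a stand-in for C5.0 rule induction (sketch only).
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 481  # matches the study's sample size, data themselves are synthetic
X = pd.DataFrame({
    "cervix_dilation": rng.uniform(0, 6, n),
    "maternal_bmi": rng.normal(26, 4, n),
    "devices_before_sleep": rng.integers(0, 2, n),   # 1 = YES
    "changed_sleep_habits": rng.integers(0, 2, n),   # 1 = YES
})
# Toy label echoing the quoted rule (dilation plus sleep-habit variables)
preterm = ((X["cervix_dilation"] <= 2.95)
           & (X["devices_before_sleep"] == 1)
           & (X["changed_sleep_habits"] == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, preterm)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```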
Procedia PDF Downloads 14929 Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task, since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency, as they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation on the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs. Keywords: transformers, generative AI, gene expression design, classification
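To illustrate the architecture in miniature, the following self-contained sketch tokenizes a DNA string into overlapping 6-mers (as DNABERT does) and passes it through a small bidirectional transformer encoder with a classification head. It is untrained and omits positional encodings and pretraining, so it only shows the shape of the approach, not the study's model.

```python
# Toy DNA sequence classifier: 6-mer tokens into a transformer encoder.
import torch
import torch.nn as nn

def kmers(seq, k=6):
    # Overlapping k-mer tokenization of a DNA string
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

vocab = {}
def encode(seq):
    # Map each k-mer to an integer id, building the vocabulary on the fly
    return torch.tensor([[vocab.setdefault(km, len(vocab)) for km in kmers(seq)]])

class MiniDnaEncoder(nn.Module):
    def __init__(self, vocab_size=5000, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # self-attention
        self.head = nn.Linear(d_model, n_classes)
        # Note: positional encodings are omitted here for brevity

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # contextual token embeddings
        return self.head(h.mean(dim=1))        # mean-pooled sequence classification

model = MiniDnaEncoder()
logits = model(encode("ATGCGTACGTTAGCCGATAGGCTTACGA"))
print(logits.softmax(dim=-1))                  # class probabilities (untrained)
```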
Procedia PDF Downloads 6028 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning
Authors: Pinzhe Zhao
Abstract:
This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat than those motivated by intrinsic learning goals. In this study, data on students’ psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students’ cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment. Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity
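A sketch of the proposed pipeline might look as follows: trait scores as features, two of the named classifiers, and the precision/recall/F1 report mentioned above. All feature names, effect directions, and data are illustrative assumptions, not the study's dataset.

```python
# Psychological trait scores as features for cheating-risk classification.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(11)
n = 600
X = pd.DataFrame({
    "test_anxiety": rng.normal(0, 1, n),
    "moral_reasoning": rng.normal(0, 1, n),
    "self_efficacy": rng.normal(0, 1, n),
    "extrinsic_motivation": rng.normal(0, 1, n),
    "attendance_rate": rng.uniform(0.5, 1.0, n),
})
# Toy risk label following the directions of influence described above
risk = (0.8 * X["test_anxiety"] - 0.6 * X["moral_reasoning"]
        - 0.5 * X["self_efficacy"] + 0.4 * X["extrinsic_motivation"]
        + rng.normal(0, 1, n)) > 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, risk, test_size=0.25, random_state=0)
for clf in (RandomForestClassifier(random_state=0), SVC()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__)
    print(classification_report(y_te, clf.predict(X_te)))  # precision/recall/F1
```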
Procedia PDF Downloads 2327 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingualism research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the Istrian Istrovenetian-Croatian Bilingual Corpus, a conversational corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria, and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, it allows public availability, which enables easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and their language exposure and usage within their social networks. Recordings are being transcribed, phonologically adapted within a standard-sized orthographic form, coded, and segmented (speech streams are segmented into communication units based on syntactic criteria), and marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020 by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community.
Research on communities with societal bilingualism will contribute to the growing body of work on bilingualism and multilingualism, especially regarding topics such as language dominance, language attrition and loss, interference, and code-switching. Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
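As a small illustration of what CHAT-tier data makes easy to retrieve, the sketch below counts utterances per speaker and pulls out candidate code-switches marked with the CHAT '@s' special-form suffix. The two-line sample and speaker codes are invented, and real work on the corpus would use the CLAN tools or a dedicated CHAT parser rather than ad hoc regular expressions.

```python
# Minimal retrieval from CHAT-style main tiers: utterance counts per
# speaker and word@s:lang code-switch candidates (invented sample data).
import re
from collections import Counter

chat_sample = """\
*MAR:\tandemo al mercà, poi vado a kupiti@s:hrv il pane .
*IVA:\tdobro , ma ti xe sempre de pressa .
"""

utterances = Counter()
switches = []
for line in chat_sample.splitlines():
    m = re.match(r"\*(\w+):\t(.*)", line)      # main speaker tiers only
    if m:
        speaker, text = m.groups()
        utterances[speaker] += 1
        switches += re.findall(r"(\S+)@s:(\w+)", text)  # word@s:lang markers

print(utterances)   # e.g. Counter({'MAR': 1, 'IVA': 1})
print(switches)     # e.g. [('kupiti', 'hrv')]
```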
Procedia PDF Downloads 146