Search results for: visual basic software
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9051


561 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Since then, many researchers have employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM). SFEMM improves on the available structural approaches for large deformations without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, a significant advantage over previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, SFEMM has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond stretch, bond angle, and bond torsion of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. In brief, SFEMM builds a framework that offers more flexible features than conventional molecular finite element models, supporting structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a significant step toward comprehensive molecular modeling with the finite element technique, and thereby toward concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
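The constitutive idea above (bond potentials carried by truss elements) can be illustrated with a minimal sketch. The harmonic bond-stretch form and the parameter values below are common molecular-mechanics conventions, not values taken from the paper:

```python
K_R = 469.0  # hypothetical bond-stretch stiffness, kcal/(mol*A^2)
R0 = 1.42    # equilibrium C-C bond length in graphene, Angstrom

def bond_energy(r):
    """Harmonic bond-stretch potential, E(r) = K_R * (r - R0)^2."""
    return K_R * (r - R0) ** 2

def bond_force(r):
    """Axial force a truss element's material model would return,
    F(r) = dE/dr = 2 * K_R * (r - R0)."""
    return 2.0 * K_R * (r - R0)

# At the equilibrium length the element carries no force, which is why
# the model is independent of initial strains or stresses.
print(bond_force(R0))    # 0.0
print(bond_energy(1.52))
```

In an actual ABAQUS implementation this force-stretch law would live inside a user material subroutine; the sketch only shows why equilibrium geometry enters as a material parameter rather than an initial stress.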

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 132
560 The Interactive Wearable Toy "+Me" for the Therapy of Children with Autism Spectrum Disorders: Preliminary Results

Authors: Beste Ozcan, Valerio Sperati, Laura Romano, Tania Moretta, Simone Scaffaro, Noemi Faedda, Federica Giovannone, Carla Sogos, Vincenzo Guidetti, Gianluca Baldassarre

Abstract:

+me is an experimental interactive toy with the appearance of a soft, pillow-like panda. Its shape and consistency are designed to elicit emotional attachment in young children: a child can wear it around his/her neck and treat it as a companion (i.e., a transitional object). When caressed on its paws or head, the panda emits appealing, interesting outputs such as colored lights or amusing sounds, thanks to embedded electronics. These sensory patterns can be modified through a wirelessly connected tablet: an adult caregiver can thereby adapt +me's responses to a child's reactions or requests, for example by changing the light hue or the type of sound. Control of the toy is therefore shared, as it depends on both the child (who handles the panda) and the adult (who manages the tablet and mediates the sensory input-output contingencies). These features make +me a potential tool for therapy with children with Neurodevelopmental Disorders (ND) characterized by impairments in the social area, such as Autism Spectrum Disorders (ASD) and Language Disorders (LD): as a proposal, the toy could be used together with a therapist in rehabilitative play activities aimed at encouraging simple social interactions and reinforcing basic relational and communication skills. +me was tested in two pilot experiments, the first involving 15 Typically Developed (TD) children aged 8-34 months, the second involving 7 children with ASD and 7 with LD, aged 30-48 months. In both studies a researcher/caregiver, during a one-to-one, ten-minute activity, plays with the panda and encourages the child to do the same. The purpose of both studies was to ascertain the general acceptability of the device as an interesting toy, that is, an object able to capture the child's attention and to maintain a high motivation to interact with it and with the adult. Behavioral indexes for estimating the interplay between the child, +me, and the caregiver were rated from video recordings of the experimental sessions.
Preliminary results show that, on average, participants from all three groups exhibit good engagement: they touch, caress, and explore the panda and show enjoyment when they manage to trigger its luminous and sound responses. During the experiments, children tend to imitate the caregiver's actions on +me, often looking (and smiling) at him/her. Interesting behavioral differences between the TD, ASD, and LD groups were scored: for example, ASD participants direct fewer smiles at both the panda and the caregiver than the TD group, while LD scores fall between those of the ASD and TD subjects. These preliminary observations suggest that the interactive toy +me is able to raise and maintain the interest of toddlers and can therefore reasonably be used as a supporting tool during therapy, to stimulate pivotal social skills such as imitation, turn-taking, eye contact, and social smiles. Interestingly, the young age of participants, along with the behavioral differences between groups, suggests a further potential use of the device: a tool for early differential diagnosis (the average age of a child

Keywords: autism spectrum disorders, interactive toy, social interaction, therapy, transitional wearable companion

Procedia PDF Downloads 98
559 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. This shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles, and it can be detected using NMR relaxometry or MRI scanners. In this work, two sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first separately immobilized with anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition as well as the aggregation of MN₁₀ conjugates on the surface of P. aeruginosa.
Due to the different magnetic behavior of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetizations, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀, can be quickly removed from the solution by magnetic separation, so that only unreacted MN₁₀ remains. The remaining MN₁₀, which is superparamagnetic and stable in a low magnetic field, serves as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated on real food samples and validated by plate counting. Requiring only two steps and less than two hours, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1×10² cfu/mL to 3.1×10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
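The quantitative readout described above amounts to a standard curve: T₂ is measured for known cell concentrations, fit against log concentration, and then inverted for unknown samples. A minimal sketch with synthetic (hypothetical) calibration values spanning the linear range reported in the abstract:

```python
import math

# Synthetic (log10 concentration, T2 in ms) calibration points; real
# values would come from measuring spiked samples of known cfu/mL.
calibration = [(math.log10(3.1e2), 180.0),
               (math.log10(3.1e4), 140.0),
               (math.log10(3.1e6), 100.0)]

def fit_line(points):
    """Least-squares slope and intercept for the standard curve."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def estimate_cfu_per_ml(t2_ms, slope, intercept):
    """Invert the curve: a T2 reading maps back to a concentration."""
    return 10.0 ** ((t2_ms - intercept) / slope)

slope, intercept = fit_line(calibration)
print(estimate_cfu_per_ml(140.0, slope, intercept))  # ~3.1e4 cfu/mL
```

The point of the sketch is only the shape of the workflow; the actual relation between remaining MN₁₀ and T₂ would be established experimentally.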

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 130
558 Comparative Study of Static and Dynamic Representations of the Family Structure and Its Clinical Utility

Authors: Marietta Kékes Szabó

Abstract:

Patterns of personality (mal)function and an individual's psychosocial environment collectively influence health status and may lie behind psychosomatic disorders. Although patients with their diversified symptoms usually do not have any organic problems, the experienced complaints, the fear of serious illness, and the lack of social support often lead to increased anxiety and further enigmatic symptoms. The role of the family system and its atmosphere seems very important in this process. Several studies have explored the characteristics of dysfunctional family organization: an inflexible family structure, hidden conflicts that the family members do not speak about during their daily interactions, undefined role boundaries, neglect or overprotection of the children by the parents, and coalitions between generations. However, questionnaires used to measure the properties of the family system can explore it only as a unit and cannot capture dyadic interactions, while representing the family structure with a figure-placing test gives a new perspective for better understanding the organization of its (sub)system(s). Furthermore, its dynamic form opens new perspectives for exploring the family members' joint representations, offering the opportunity to learn more about the flexibility of cohesion and hierarchy in a given family system. In this way, communication among the family members can also be examined. The aim of this study was to gather extensive information about the organization of psychosomatic families. In our research we used Gehring's Family System Test (FAST), in both static and dynamic forms, to mobilize the family members' mental representations of their family and to obtain data on their individual representations as well as their cooperation. Four families took part in the study, each including a young adult.
Two families with healthy participants and two families with asthmatic patient(s) were involved in our research. The family members' behavior observed during the dynamic situation was video-recorded for further analysis with the Noldus Observer XT 8.0 software. In accordance with previous studies, our results show that the family structure of families with at least one psychosomatic patient is more rigid than that found in the control group, and that the dynamic representations (typical, ideal, and conflict) mainly reflected the most dominant family member's individual concept. The behavior analysis also confirmed the intensified role of the dominant person(s) in family life, influencing family decisions, the place of the other family members, and the atmosphere of the interactions, all of which could be captured well by the applied methods. However, further research is needed to learn more about this phenomenon, which could open the door to new therapeutic approaches.

Keywords: psychosomatic families, family structure, family system test (FAST), static and dynamic representations, behavior analysis

Procedia PDF Downloads 370
557 Membrane Technologies for Obtaining Bioactive Fractions from Blood Main Protein: An Exploratory Study for Industrial Application

Authors: Fatima Arrutia, Francisco Amador Riera

Abstract:

The meat industry generates large volumes of blood as a result of meat processing. Several industrial procedures have been implemented to treat this by-product, but they focus on the production of low-value products, and in many cases blood is simply discarded as waste. Besides economic interests, there is an environmental concern due to blood-borne pathogens and other chemical contaminants found in blood. Consequently, there is a dire need to find extensive uses for blood that are applicable at industrial scale and able to yield high value-added products. Blood has been recognized as an important source of protein; the main blood serum protein in mammals is serum albumin. One of the top trends in the food market is functional foods. Among them, bioactive peptides can be obtained from protein sources by microbiological fermentation or by enzymatic and chemical hydrolysis. Bioactive peptides are short amino acid sequences that can have a positive impact on health when administered. The main drawback of bioactive peptide production is the high cost of the isolation, purification, and characterization techniques (such as chromatography and mass spectrometry), which makes scale-up unaffordable. Membrane technologies, on the other hand, are very suitable for industry because they scale up easily and are low-cost compared with other traditional separation methods. In this work, the possibility of obtaining bioactive peptide fractions from serum albumin by a simple two-step procedure (hydrolysis and membrane filtration) was evaluated, as an exploratory study for possible industrial application. The methodology was, first, a tryptic hydrolysis of serum albumin to release the peptides from the protein. The protein was previously subjected to a thermal treatment to enhance enzyme cleavage and thus the peptide yield.
Then, the obtained hydrolysate was filtered through a nanofiltration/ultrafiltration flat rig at three different pH values with two different membrane materials, so as to compare membrane performance. The corresponding permeates were analyzed by liquid chromatography-tandem mass spectrometry to obtain the peptide sequences present in each permeate. Finally, different concentrations of every permeate were evaluated for their in vitro antihypertensive and antioxidant activities through ACE-inhibition and DPPH radical scavenging tests. The hydrolysis process with the prior thermal treatment achieved a degree of hydrolysis of 49.66% of the maximum possible. Peptides were best transmitted to the permeate stream at pH values corresponding to their isoelectric points, and the best selectivity between peptide groups was achieved at basic pH values. Differences in peptide content were found between membranes and also between pH values for the same membrane. The antioxidant activity of all permeates was high compared with the control only at the highest dose, whereas antihypertensive activity was best at intermediate concentrations rather than at higher or lower doses. Therefore, despite differences between them, all permeates showed promising antihypertensive and antioxidant properties.
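The degree-of-hydrolysis figure can be read as a simple ratio: cleaved peptide bonds over the maximum the enzyme can cleave. A sketch under a hypothetical site count (trypsin cleaves after Lys/Arg residues; the 80 sites below are illustrative, only the 49.66% figure comes from the text):

```python
def degree_of_hydrolysis(cleaved_bonds, cleavable_bonds):
    """DH (%) relative to the enzyme's maximum: 100 * h / h_max."""
    return 100.0 * cleaved_bonds / cleavable_bonds

# With a hypothetical 80 trypsin-accessible sites on serum albumin,
# the reported DH of 49.66% would correspond to roughly 40 cleavages.
print(degree_of_hydrolysis(40, 80))  # 50.0
```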

Keywords: bioactive peptides, bovine serum albumin, hydrolysis, membrane filtration

Procedia PDF Downloads 176
556 Nursing Professionals’ Perception of the Work Environment, Safety Climate and Job Satisfaction in the Brazilian Hospitals during the COVID-19 Pandemic

Authors: Ana Claudia de Souza Costa, Beatriz de Cássia Pinheiro Goulart, Karine de Cássia Cavalari, Henrique Ceretta Oliveira, Edineis de Brito Guirardello

Abstract:

Background: During the COVID-19 pandemic, nursing represented the largest category of health professionals on the front line. Investigating the practice environment and the job satisfaction of nursing professionals during the pandemic is therefore fundamental, since both reflect on the quality of care and the safety climate. The aim of this study was to evaluate and compare nursing professionals' perceptions of the work environment, job satisfaction, and safety climate across different hospitals and work shifts during the COVID-19 pandemic. Method: This is a cross-sectional survey of 130 nursing professionals from public, private, and mixed hospitals in Brazil. Data were collected using an electronic form covering personal and occupational variables, work environment, job satisfaction, and safety climate. The data were analyzed using descriptive statistics and ANOVA or Kruskal-Wallis tests according to the data distribution, which was evaluated by means of the Shapiro-Wilk test. Analyses were performed in SPSS 23 at a significance level of 5%. Results: The mean age of the participants was 35 years (±9.8), with a mean of 6.4 years (±6.7) of working experience in the institution. Overall, the nursing professionals evaluated the work environment as favorable; they were dissatisfied with their job in terms of pay, promotion, benefits, contingent rewards, and operating procedures, satisfied with coworkers, the nature of the work, supervision, and communication, and had a negative perception of the safety climate. When comparing the hospitals, perceptions of the work environment and safety climate did not differ.
However, the hospitals differed with regard to job satisfaction: nursing professionals from public hospitals were more dissatisfied with promotion than professionals from private (p=0.02) and mixed hospitals (p<0.01), and nursing professionals from mixed hospitals were more satisfied with supervision than those from private hospitals (p=0.04). Participants working night shifts had the worst perception of the work environment regarding nurse participation in hospital affairs (p=0.02), nursing foundations for quality care (p=0.01), and nurse manager ability, leadership, and support (p=0.02), as well as of the safety climate (p<0.01) and of job satisfaction related to contingent rewards (p=0.04), nature of work (p=0.03), and supervision (p<0.01). Conclusion: The nursing professionals had a favorable perception of the environment and safety climate but differed among hospitals regarding job satisfaction in the promotion and supervision domains. There was also a difference between work shifts, with night shifts scoring lowest, except for satisfaction with operational conditions.

Keywords: health facility environment, job satisfaction, patient safety, nursing

Procedia PDF Downloads 130
555 Law of the River and Indigenous Water Rights: Reassessing the International Legal Frameworks for Indigenous Rights and Water Justice

Authors: Sultana Afrin Nipa

Abstract:

Life on Earth cannot thrive or survive without water. Water is intimately tied to community, culture, spirituality, identity, socio-economic progress, security, self-determination, and livelihood. Thus, access to water is a United Nations-recognized human right due to its significance in these realms. However, there is often conflict between those who regard water as a spiritual and cultural value and those who regard it as an economic value; the former is threatened by economic development, corporate exploitation, government regulation, and increased privatization, highlighting the complex relationship between water and culture. The Colorado River basin is home to over 29 federally recognized tribal nations. To these tribes it holds cultural, economic, and spiritual significance, often extending to deep human-to-non-human connections frequently precluded by Westphalian regulations and settler laws. Despite the recognition of access to rivers as a fundamental human right by the United Nations, tribal communities and their water rights have historically been disregarded through, inter alia, colonization and the dispossession of their resources. The Law of the River, including the Winters Doctrine, the Bureau of Reclamation (BOR), and the Colorado River Compact, has shaped water governance among the stakeholders, yet tribal communities have been systematically excluded from these key agreements. While the Winters Doctrine acknowledged that tribes have the right to withdraw water from the rivers that pass through their reservations for self-sufficiency, the establishment of the BOR led to the construction of dams without tribal consultation, denying the Winters regulation and violating these rights. The Colorado River Compact, which granted only 20% of the water to the tribes, diminishes the significance of international legal frameworks that prioritize indigenous self-determination and the free pursuit of socio-economic and cultural development.
Denial of this basic water right is a denial of the recognition of tribal sovereignty and self-determination, which calls the effectiveness of international law into question. This review assesses the international legal frameworks concerning indigenous rights and water justice and aims to pinpoint the gaps hindering the effective recognition and protection of Indigenous water rights in the Colorado River Basin. The study draws on a combination of historical and qualitative data sets. The historical data encompass the case settlements provided by the Bureau of Reclamation (BOR), specifically the notable Native American water rights settlements in the lower Colorado basin related to Arizona from 1979 to 2008. This material substantiates the context of promises made to the Indigenous people and establishes connections between existing entities. The qualitative data consist of observations of recorded meetings of the Central Arizona Project (CAP), used to evaluate how the promises made earlier are reflected today. The study finds a significant inconsistency in participation in the decision-making process and a lack of representation of Native American tribes in water resource management discussions. It highlights the ongoing challenges Indigenous people face in achieving their self-determination goals despite the legal arrangements.

Keywords: colorado river, indigenous rights, law of the river, water governance, water justice

Procedia PDF Downloads 17
554 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both: it takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with making the optimizer a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages, and as a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, either the optimizer must be extended with new optimizations to target them, or the abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed; the former demands too much effort from the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations.
The language is split into two fragments, or "levels": one for metaprogramming and the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot's key idea is that optimizations are simply implemented as metaprograms. The meta level supports several features that make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered on logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code that is both high-level and performant.
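The idea of "optimizations as metaprograms" can be illustrated outside Peridot (whose syntax the abstract does not give) with a toy term rewriter: each optimization is just data, a pattern with variables plus a replacement template, applied by a small matcher reminiscent of the unification described above:

```python
def match(pat, expr, env):
    """Bind pattern variables (strings starting with '?') to subterms."""
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in env:
            return env if env[pat] == expr else None
        return {**env, pat: expr}
    if isinstance(pat, tuple) and isinstance(expr, tuple) and len(pat) == len(expr):
        for p, e in zip(pat, expr):
            env = match(p, e, env)
            if env is None:
                return None
        return env
    return env if pat == expr else None

def substitute(template, env):
    """Instantiate a replacement template with the matched bindings."""
    if isinstance(template, str) and template.startswith('?'):
        return env[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, env) for t in template)
    return template

RULES = [  # each "optimization" is plain data: (pattern, replacement)
    (('add', '?x', 0), '?x'),
    (('mul', '?x', 1), '?x'),
    (('mul', '?x', 0), 0),
]

def optimize(expr):
    """Rewrite bottom-up until no rule applies."""
    if isinstance(expr, tuple):
        expr = tuple(optimize(e) for e in expr)
    for pat, repl in RULES:
        env = match(pat, expr, {})
        if env is not None:
            return optimize(substitute(repl, env))
    return expr

print(optimize(('mul', ('add', 'y', 0), 1)))  # 'y'
```

This sketch has neither higher-order abstract syntax nor non-determinism; it only shows why expressing rewrites declaratively, as Peridot's logic-programming level does, keeps individual optimizations short.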

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 66
553 Problem-Based Learning for Hospitality Students: The Case of Madrid Luxury Hotels and the Recovery after the COVID Pandemic

Authors: Caridad Maylin-Aguilar, Beatriz Duarte-Monedero

Abstract:

Problem-based learning (PBL) is a useful tool for adult and practice-oriented audiences such as university students. As a consequence of the huge disruption caused by the COVID pandemic in the hospitality industry, hotels of all categories in Spain closed down from March 2020. Until that moment, the luxury segment had been blooming, with optimistic prospects for new openings, so hospitality students were expecting a positive situation in terms of employment and career development. By the beginning of the 2020-21 academic year, these expectations had been seriously harmed: by October 2020, only 9 of the 32 hotels in the luxury segment were open, with an occupancy rate of 9%. Shortly after, evidence of a second wave, hitting Spain and the home countries of incoming visitors especially hard, bitterly smashed all forecasts. In response to this situation, a team of four professors and practitioners from four different subject areas developed a real case inspired by one of these hotels, the 5-star Emperatriz by Barceló. Second-year students were provided with real information such as marketing plans, profit and loss and operational accounts, employee profiles, and employment costs. Their challenge was to act as consultants, identifying potential courses of action for the best, base, and worst cases. To do so, they were organized in teams and supported by fourth-year students. Each professor deployed the problem in their subject; thus, research on customer behavior and feelings was necessary to review, as part of the marketing plan, whether the hotel's current offering was clear enough to guarantee and communicate a safe environment, as well as the ranking of other basic, supporting, and facilitating services. Continuous monitoring of competitors' activity was also necessary to understand the behavior of the open outlets.
The actions designed after the diagnosis were ranked according to their impact and their feasibility in terms of time and resources. They also had to be actionable by the hotel's current staff and managers, and a vision of internal marketing was appreciated. After a process of refinement, seven teams presented their conclusions to the Emperatriz general manager and the rest of the professors. Four main ideas were chosen, and all the teams, irrespective of authorship, were asked to develop them to the state of a minimum viable product, with estimates of impacts and costs. As the process continues, students are now accompanying the hotel and its staff in the prudent reopening of facilities, almost one year after the closure. From a professor's point of view, the key learnings were: (1) when facing a real problem, a holistic view is needed, so the vision of subjects as silos collapses; (2) when educating new professionals, providing them with the resilience and resistance necessary to deal with a problem is always mandatory, but now seems more relevant than ever; and (3) collaborative work and contact with real practitioners in such an uncertain and changing environment is a challenge, but it is worthwhile considering the learning results and their potential.

Keywords: problem-based learning, hospitality recovery, collaborative learning, resilience

Procedia PDF Downloads 171
552 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria

Authors: H. M. Liman, Y. M. Suleiman, A. A. David

Abstract:

Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused largely by the emission of carbon monoxide, the principal air pollutant, into the atmosphere. One prominent source of carbon monoxide emission is the transportation sector. Not much was known about the emission levels of carbon monoxide, the primary pollutant, from road transportation in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State. The collected carbon monoxide readings were compiled into a database. An MSA Altair gas alert detector was used to take the carbon monoxide readings in parts per million (ppm) for the peak and off-peak periods of vehicular movement at road intersections, and their Global Positioning System (GPS) coordinates were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted of the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 ppm of carbon monoxide in the atmosphere. Further statistical analysis was carried out on the field data using the Statistical Package for the Social Sciences (SPSS) and Microsoft Excel to show the variance of the emission levels of each parameter in the study area. The results established that the emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 ppm. In addition, comparing the average CO emission levels of the four periods showed that the morning peak had the highest average (24.5 ppm), followed by the evening peak (22.84 ppm) and the morning off-peak (15.33 ppm), with the evening off-peak lowest (12.94 ppm).
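The averaged readings reported above can be checked directly against the 8.7 ppm safe limit; a minimal sketch:

```python
SAFE_LIMIT_PPM = 8.7  # internationally accepted safe limit cited above

averages_ppm = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

# Express each period's average as a percentage excess over the limit.
for period, ppm in averages_ppm.items():
    percent_over = 100.0 * (ppm - SAFE_LIMIT_PPM) / SAFE_LIMIT_PPM
    print(f"{period}: {ppm} ppm ({percent_over:.0f}% above the safe limit)")
```

Even the lowest average (evening off-peak) sits well above the limit, which is the finding the recommendations below respond to.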
Based on these results, two recommendations are made for mitigating poor air quality by reducing carbon monoxide emissions from transportation. First, introducing urban mass transit would reduce the number of vehicles on the roads, and hence the emissions from the many vehicles it replaces, while also providing a cheaper means of transportation for the masses. Second, encouraging the use of vehicles powered by alternative energy sources such as solar, electricity, and biofuel would lower emission levels, since these alternatives, unlike fossil-fuel diesel and petrol vehicles, emit little or no carbon monoxide.
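The per-period averaging and the comparison against the 8.7 ppm limit described above can be sketched in a few lines. The readings below are invented placeholders for illustration, not the field data from Minna:

```python
# Hypothetical CO readings (ppm) per monitoring period -- illustrative values
# only, chosen so the period means resemble those reported in the abstract.
readings = {
    "morning_peak":    [26.0, 23.5, 24.0],
    "evening_peak":    [22.0, 23.7, 22.8],
    "morning_offpeak": [15.0, 15.7, 15.3],
    "evening_offpeak": [13.0, 12.9, 12.9],
}
SAFE_LIMIT_PPM = 8.7  # internationally accepted safe limit cited in the study

def mean(xs):
    return sum(xs) / len(xs)

# Average emission level per period, and whether each exceeds the safe limit.
summary = {period: round(mean(vals), 2) for period, vals in readings.items()}
exceedances = {period: m > SAFE_LIMIT_PPM for period, m in summary.items()}
```

With these placeholder values, every period's mean exceeds the 8.7 ppm threshold, mirroring the study's finding.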

Keywords: carbon monoxide, climate change emissions, road transportation, vehicular

Procedia PDF Downloads 358
551 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education

Authors: Eva Šmelová, Alena Berčíková

Abstract:

The issue of school maturity and children's readiness for a successful start of compulsory education is one of the long-term monitored areas, especially in the context of education and psychology, and it has recently gained importance in the context of the curricular reform in the Czech Republic. Analyses of research in this area suggest a lack of a broader overview of indicators of the current level of children’s school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic) focusing on children’s maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The present study builds directly on that research, and its objective is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: During the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool: a record sheet with 11 items, the social skills that a child should have acquired by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year, and the degree of achievement and intensity of each skill were rated for each child on an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, and participation in inclusive education), whose effects were examined on 11 dependent variables represented by the results achieved in the selected social skills.
Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc; statistical calculations were performed using SPSS v. 12.0 for Windows and StatSoft STATISTICA Cz (a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation, highlighting possible risks associated with weakened social skills.

Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills

Procedia PDF Downloads 231
550 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis

Authors: Tefera Kebede Leyu

Abstract:

The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by ILRI between February and April 2014 for the year 2013 (retrospective). Data on 1,905 sample households (754 intervention and 1,151 control) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology, using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not fully efficient and operate below their potential frontiers; i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of the selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model yielded a different technical efficiency score for the intervention farmers, while the score remained more or less the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, than their intervention counterparts. Among the explanatory variables, the study found that farmer’s age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer’s education, and row planting are important contributing factors for participation decisions and hence for the technical efficiency of farmers in the study areas.
We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers’ training centers and adoption of modern farm technologies. Finally, we recommend further research to deal with this kind of methodological framework using a panel data set to test whether technical efficiency starts to increase or decrease with the length of time that farmers participate in development programs.
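The Heckman correction hinges on appending the inverse Mills ratio from the first-stage probit to the second-stage (frontier) equation. A minimal sketch of that quantity, using only standard-normal formulas (this is not the authors' STATA code):

```python
import math

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z): the correction term appended as a
    regressor in the second-stage equation for selected observations."""
    return norm_pdf(z) / norm_cdf(z)
```

A significant coefficient on this term in the second stage is what signals selection bias, as the abstract reports for both farmer groups.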

Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step

Procedia PDF Downloads 45
549 Achieving Product Robustness through Variation Simulation: An Industrial Case Study

Authors: Narendra Akhadkar, Philippe Delcambre

Abstract:

In power protection and control products, assembly process variations due to individual parts manufactured from single- or multi-cavity tooling are a major problem. The dimensional and geometrical variations of the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect the quality, functionality, cost, and time-to-market of the product. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effects of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product are studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must respond quickly to break the circuit and clear the fault. The response time is usually in milliseconds, and any failure to clear the fault may result in severe damage to the equipment or network, with human safety at stake. In this article, we investigate two important functional characteristics associated with the robust performance of the product. It is demonstrated that experimental data obtained at the Schneider Electric laboratory confirm the very good prediction capability of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts of the product that need to be manufactured with good, capable tolerances.
Conversely, some parts are not critical for the functional characteristics (conditions) of the product; relaxing their tolerances may reduce manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of a miniature circuit breaker (MCB), the product's quality and robustness are mainly affected by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
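Variation simulation of the kind CETOL performs can be illustrated with a toy Monte Carlo on a one-dimensional linear stack-up. The gap equation, nominal dimensions, and tolerances below are invented for the sketch and are not Schneider Electric data:

```python
import random

random.seed(42)  # deterministic sketch

# Illustrative 1-D tolerance stack: functional gap = A - (B + C).
nominals   = {"A": 20.00, "B": 12.00, "C": 7.90}   # mm
tol_3sigma = {"A": 0.06,  "B": 0.04,  "C": 0.03}   # +/- 3-sigma tolerances (mm)

def sample_gap():
    """Draw each dimension from a normal distribution whose 3-sigma spread
    equals its tolerance, then evaluate the functional gap."""
    dims = {k: random.gauss(nominals[k], tol_3sigma[k] / 3.0) for k in nominals}
    return dims["A"] - (dims["B"] + dims["C"])

gaps = [sample_gap() for _ in range(100_000)]
mean_gap = sum(gaps) / len(gaps)
fail_rate = sum(g <= 0.0 for g in gaps) / len(gaps)  # interference = failure
```

The simulated failure rate is what tells the designer which tolerances are critical and which can be relaxed at lower manufacturing cost.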

Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation

Procedia PDF Downloads 144
548 Inbreeding Study Using Runs of Homozygosity in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

The best linear unbiased predictor (BLUP) is a method commonly used in genetic evaluations of breeding programs. However, this approach can lead to higher inbreeding coefficients in the population due to the intensive use of a few bulls with higher genetic potential, which usually present some degree of relatedness. High levels of inbreeding are associated with low genetic viability, fertility, and performance for some economically important traits and should therefore be constantly monitored. Unreliable pedigree data can also lead to misleading results. Genomic information (i.e., single nucleotide polymorphisms – SNPs) is a useful tool for estimating the inbreeding coefficient. Runs of homozygosity have been used to evaluate homozygous segments inherited due to direct or collateral inbreeding and allow inferences about the selection history of a population. This study aimed to evaluate runs of homozygosity (ROH) and inbreeding in a population of Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip, and quality control was carried out by excluding SNPs located in non-autosomal regions or with unknown position, SNPs with a Hardy-Weinberg equilibrium p-value lower than 10⁻⁵ or a call rate lower than 0.98, and samples with a call rate lower than 0.90. After quality control, 809 animals and 509,107 SNPs remained for analysis. For the ROH analysis, the PLINK software was used, considering segments with at least 50 SNPs and a minimum length of 1 Mb in each animal. The inbreeding coefficient was calculated as the ratio between the sum of all ROH lengths and the length of the whole genome (2,548,724 kb). A total of 25,711 ROH were observed, with mean, median, minimum, and maximum lengths of 3.34 Mb, 2 Mb, 1 Mb, and 80.8 Mb, respectively. The number of SNPs present in ROH segments varied from 50 to 14,954. The greatest total ROH extension was observed in one animal, whose segments summed to 634 Mb (24.88% of the genome).
Four bulls were among the 10 animals with the longest total extension of ROH, presenting 11% of ROH with lengths greater than 10 Mb. Segments longer than 10 Mb indicate recent inbreeding; therefore, the results indicate an intensive use of few sires in the studied data. The distribution of ROH along the chromosomes showed that chromosomes 5 and 6 presented a larger number of segments than the other chromosomes. The mean, median, minimum, and maximum inbreeding coefficients were 5.84%, 5.40%, 0.00%, and 24.88%, respectively. Although the mean inbreeding was considered low, the ROH indicate a recent and intensive use of few sires, which should be avoided for the genetic progress of the breed.
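The genomic inbreeding coefficient used in the study (F_ROH) is simply the summed ROH length divided by the autosomal genome length. A minimal sketch, with an invented segment list chosen to total the 634 Mb extreme reported above:

```python
GENOME_LENGTH_KB = 2_548_724  # whole-genome length used in the study (kb)

def f_roh(roh_lengths_kb):
    """Inbreeding coefficient: summed ROH length over genome length."""
    return sum(roh_lengths_kb) / GENOME_LENGTH_KB

# Hypothetical animal whose ROH segments sum to ~634 Mb (values invented):
example_kb = [80_800, 300_000, 253_200]  # kb
```

For this hypothetical animal, `f_roh` returns about 0.2488, i.e. the 24.88% maximum reported in the abstract.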

Keywords: autozygosity, Bos taurus indicus, genomic information, single nucleotide polymorphism

Procedia PDF Downloads 130
547 Antioxidant Potency of Ethanolic Extracts from Selected Aromatic Plants by in vitro Spectrophotometric Analysis

Authors: Tatjana Kadifkova Panovska, Svetlana Kulevanova, Blagica Jovanova

Abstract:

Biological systems possess the ability to neutralize an excess of reactive oxygen species (ROS) and to protect cells from destructive alterations. However, many pathological conditions (cardiovascular diseases, autoimmune disorders, cancer) are associated with inflammatory processes that generate an excessive amount of ROS, shifting the balance between endogenous antioxidant systems and free oxygen radicals in favor of the latter and leading to oxidative stress. An additional source of natural compounds with antioxidant properties that can reduce the amount of ROS in cells is therefore much needed; despite their broad utilization, many plant species remain largely unexplored. The purpose of the present study is thus to investigate the antioxidant activity of twenty-five selected medicinal and aromatic plant species. The antioxidant activity of the ethanol extracts was evaluated with in vitro assays: the 2,2’-diphenyl-1-picryl-hydrazyl (DPPH) assay, the ferric reducing antioxidant power (FRAP) assay, and the non-site-specific (NSSOH) and site-specific (SSOH) hydroxyl radical 2-deoxy-D-ribose degradation assays. The Folin-Ciocalteu method and the AlCl3 method were performed to determine the total phenolic content (TPC) and total flavonoid content (TFC), respectively. All examined plant extracts manifested antioxidant activity to a different extent. Cinnamomum verum J. Presl bark and Ocimum basilicum L. herba demonstrated strong radical scavenging activity and reducing power in the DPPH and FRAP assays, respectively. Additionally, significant hydroxyl radical scavenging potential and metal chelating properties were observed using the NSSOH and SSOH assays. Furthermore, significant variations were determined in TPC and TFC, with Cinnamomum verum and Ocimum basilicum showing the highest amounts of total polyphenols.
The considerably strong radical scavenging activity, hydroxyl radical scavenging potential, and reducing power of the species mentioned above suggest the presence of highly bioactive phytochemical compounds, predominantly polyphenols. Since flavonoids are the most abundant group of polyphenols and possess a large number of reactive OH groups in their structure, they are considered the main contributors to the radical scavenging properties of the examined plant extracts. This observation is supported by the positive correlation between the radical scavenging activity and the total polyphenolic and flavonoid contents obtained in the current research. These findings nominate Cinnamomum verum bark and Ocimum basilicum herba as potential sources of bioactive compounds that could be utilized as antioxidant additives in the food and pharmaceutical industries. Moreover, the present study provides baseline data for future research exploiting the hidden potential of these important but so far little-explored plants.
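DPPH results are conventionally reported as percent inhibition of absorbance (measured around 517 nm) relative to a control; a minimal sketch of that calculation, with illustrative absorbance values rather than the study's measurements:

```python
def dpph_inhibition(abs_control, abs_sample):
    """Radical scavenging activity (%) = (Ac - As) / Ac * 100, where Ac and
    As are the DPPH absorbances of the control and the sample extract."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# Illustrative absorbances only (not measured values from the study):
activity = dpph_inhibition(abs_control=0.80, abs_sample=0.20)
```

A lower sample absorbance means more DPPH radical has been quenched, hence a higher scavenging percentage.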

Keywords: ethanol extracts, radical scavenging activity, reducing power, total polyphenols

Procedia PDF Downloads 181
546 Parents’ Perspectives on After-School Educational Service from a Cross-Cultural Background: A Comparative Semi-Structured Interview Approach Based in China and Ireland

Authors: Xining Wang

Abstract:

After-school educational services have been shown to benefit children’s academic performance, socio-emotional skills, and physical health. However, little research has examined parents’ perspectives on the choice of after-school educational services across cultural backgrounds. China and Ireland are typical representatives of collectivist countries (estimated individualism score: 20) and individualist countries (estimated individualism score: 70) according to Hofstede's cultural dimensions theory. Living in countries with such distinct cultural backgrounds, parents differ markedly in their attitudes towards domestic after-school education and in their motivations for choosing after-school educational services. Semi-structured interviews were conducted with 15 parents from China and 15 parents from Ireland; thematic analysis software (ATLAS) was used to extract the key information, and a comparative approach was applied in the data analysis. The results show a polarization of Chinese and Irish parents' perspectives and motivations regarding after-school educational services. For example, Chinese parents tend to view after-school education as a complement to school education: a service they purchase so that their children acquire extra knowledge and skills and can adapt to a highly competitive educational setting. Given that children’s education is a priority for Chinese families, most parents believe that their children will succeed in the future through intensive learning. This attitude reflects that Chinese parents are more likely to apply authoritarian parenting methods and to hold strong expectations for their children. Conversely, Irish parents' choice of after-school educational services is based primarily on their own situation and secondarily on their family. For instance, with the expansion of the labor market, there has been a change in household structure.
Irish mothers are increasingly likely to seek work opportunities instead of staying at home to look after the family. Irish parents view after-school educational services as an essential support for themselves and a beneficial component for their family in the face of external pressures (e.g., growing work intensity and extended working hours, increasing numbers of separated families, and parents’ pursuit of higher education and promotion). These factors are the fundamental agents encouraging Irish parents to choose after-school educational services. In conclusion, the findings provide readers with a better understanding of parents’ contrasting perspectives on after-school educational services at a cross-cultural level.

Keywords: after-school, China, family studies, Ireland, parents

Procedia PDF Downloads 160
545 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study extends that line of thought, as well as a previous study in which in-class testing proved, with the use of a paired t-test, that gamification significantly improved students’ understanding of subject material. Gamification in the classroom can range from high-end computer-simulation software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The game was inexpensive, required little preparation time from the faculty member, and consumed approximately 20 minutes of class time. Data for the study were collected through in-class student feedback surveys and a narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. The qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given its sense of fun and competition. 3. The post-assessment that followed the game did not count toward the grade calculation, so it was a low-risk activity through which students could self-assess their understanding of the subject material. 4. In the view of the students, content knowledge increased after the gamification process.
These qualitative advantages contribute to the argument that gamification should be attempted in today’s post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study aims to determine whether the scheduling of the gamification correlated with the percentage of students not wanting to engage in the process. It also aims to determine the incremental level of time invested in classroom gamification beyond which no material incremental benefit accrues to the student, and whether any correlation exists between respondents preferring not to have the activity at the end of the semester and students not believing that the gamification process added to their curricular knowledge.
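The paired t-test from the authors' earlier quantitative study compares each student's scores before and after the game. A minimal pure-Python sketch with made-up scores (the test statistic logic is standard; the data are invented):

```python
import math

def paired_t(pre, post):
    """Paired t statistic on the differences (post - pre).
    Returns (t, degrees of freedom)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Invented pre/post quiz scores for eight students:
pre  = [52, 60, 48, 55, 63, 50, 58, 61]
post = [60, 66, 55, 62, 70, 57, 64, 69]
t, df = paired_t(pre, post)
```

A t value above the two-tailed 5% critical value for the given degrees of freedom (2.365 for df = 7) indicates a significant improvement.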

Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary

Procedia PDF Downloads 188
544 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model

Authors: Yang Chen, Lili Fu

Abstract:

Promoting urban regeneration in existing built-up areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainability effects of the intensive use of land, mixed-use development has become an important objective for high-quality urban regeneration in the inner city. However, over the long period since China's reform and opening up, the “unbalanced phenomenon” of mixed-use development in China's inner cities has been very serious. On the one hand, excessive focus on certain individual spaces has pushed the level of mixed-use development in some areas substantially ahead of others, resulting in a growing gap between different parts of the inner city. On the other hand, excessive focus on a single dimension of the spatial organization of mixed-use development, such as the enhancement of functional mix or spatial capacity, has caused other dimensions, such as pedestrian permeability, green environmental quality, and social inclusion, to lag or be neglected. This phenomenon is particularly evident in the central area of the inner city, and it clearly runs counter to the needs of sustainable development in China's new era. A rational qualitative and quantitative analysis of the “unbalanced phenomenon” will therefore help to identify the problem and provide a basis for formulating optimization plans in the future. This paper builds a dynamic evaluation method of mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis in ArcGIS software to reveal the “unbalanced phenomenon” over the past 40 years in the central area of Nanjing, a typical Chinese city facing regeneration.
The results show that, compared with the increases in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central city area lagged to different degrees, and the unbalanced development problems differ across parts of the city center; governance and planning schemes for future mixed-use development therefore need to address these problems fully. The research methodology of this paper provides a tool for the comprehensive dynamic identification of changes in the level of mixed-use development, and the results deepen the knowledge of the evolution of mixed-use development patterns in China’s inner cities and provide a reference basis for future regeneration practices.
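One dimension of the model, functional mix, is commonly quantified in urban studies with a normalized entropy index over land-use shares. The abstract does not give the authors' exact formula, so the sketch below is a generic version of that index, not their implementation:

```python
import math

def mix_entropy(shares):
    """Normalized land-use mix entropy over category shares (summing to 1):
    0 = a single use dominates entirely, 1 = uses perfectly balanced."""
    shares = [s for s in shares if s > 0]
    n = len(shares)
    if n < 2:
        return 0.0
    return -sum(s * math.log(s) for s in shares) / math.log(n)
```

Computed per spatial unit over time, such an index is one way to track whether functional mix is running ahead of the other dimensions.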

Keywords: mixed-use development, unbalanced phenomenon, meta-dimensional model, Nanjing, China

Procedia PDF Downloads 78
543 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic-Based Substrates for Circuit Printing in High-Performance Electronics

Authors: Kunal Bhardwaj, Christine Browne

Abstract:

With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce their use, and the use of plastic-based substrates in electronic circuits has recently become a matter of concern. Plastics provide a very smooth and cheap surface for printing high-performance electronics because of their impermeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they offer the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC films as a substitute for plastic is their higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films into the range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension sprayed onto a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. The spray-coating method was used for NC film production: two suspensions, namely 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees, with the silicon wafer kept on a conveyor moving at a velocity of 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50°C overnight. Images of the films were taken with an optical profilometer (Olympus OLS 5000), converted into the ‘.lext’ format, and analyzed using Gwyddion, a data and image analysis software.
The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at the lower channel widths (5 to 10 µm). This research opens the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The use of additives such as carboxymethyl cellulose (CMC) is also being explored, based on the hypothesis that CMC would reduce friction among fibers, which in turn would lead to better conformations among the NC fibers; CMC addition could thus help tune the surface roughness of the NC film to an even greater extent in the future.
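RMS (Rq) roughness, the quantity reported above, is the root-mean-square deviation of surface heights from their mean. A minimal sketch of how it is computed from profilometer height samples (the values below are illustrative, not Gwyddion output):

```python
import math

def rms_roughness(heights):
    """RMS (Rq) roughness: root-mean-square deviation from the mean level."""
    n = len(heights)
    mean_h = sum(heights) / n
    return math.sqrt(sum((h - mean_h) ** 2 for h in heights) / n)

# Illustrative height samples along a profile (nm):
profile_nm = [300, 280, 350, 310, 260]
```

In practice the same formula is applied over the full 2-D height map of the film rather than a single line profile.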

Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness

Procedia PDF Downloads 104
542 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using three data-driven bivariate statistical approaches, namely the entropy weight method (EWM), the evidence belief function (EBF), and the information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from the interpretation of high-resolution satellite images and earlier reports, as well as from field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models; the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, slope orientation, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the three statistical models (EWM, EBF, and ICM). The model results were validated against the landslide incidences not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed AUC success rates of 0.7055, 0.7221, and 0.7368 and prediction rates of 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the proportion of construction and verification landslide incidences falling in the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models all produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and for environmental protection planning purposes.
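Of the three models, the entropy weight method is the simplest to state: criteria whose values vary more across alternatives carry more information and therefore receive more weight. A generic sketch of the computation (not the authors' GIS implementation; the matrix values are assumed illustrative and positive):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for a decision matrix with rows = alternatives
    (e.g., map units) and columns = criteria (e.g., predisposing factors).
    Values are assumed positive. Returns one weight per criterion."""
    m = len(matrix)          # number of alternatives
    n = len(matrix[0])       # number of criteria
    k = 1.0 / math.log(m)
    degrees = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]                      # column shares
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0,1]
        degrees.append(1.0 - e)                           # divergence degree
    s = sum(degrees)
    return [d / s for d in degrees]
```

A criterion that is constant across all alternatives has maximal entropy and so contributes zero weight, which matches the intuition that it cannot discriminate landslide-prone units from stable ones.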

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 115
541 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically the formation of social media, has provided the ground for new types of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published there is suspect, produced with the intention of deceiving others; such content is often called "fake news". By mingling with correct information and misleading public opinion, fake news can endanger the security of countries and deprive audiences of the basic right of free access to accurate information. Competing governments, opposition elements, profit-seeking individuals, and even rival organizations, aware of this capacity, act to distort and overturn the facts in the virtual space of target countries and communities on a large scale and to steer public opinion towards their own goals. This extensive de-truthing of societies' information space has created a wave of harm and worry all over the world, and these concerns have opened a new path of research into the timely containment and reduction of the destructive effects of fake news on public opinion. The expansion of this phenomenon can create serious problems for societies; its impact on events such as the 2016 American elections, Brexit, the 2017 French elections, and the 2019 Indian elections has caused concern and prompted countermeasures. A simple look at the growth trend of research in Scopus shows a sharp increase in research with the keyword "false information", peaking at 524 publications in 2020, whereas in 2015 only 30 scientific research papers were published in this field.
Considering that one of the capabilities of social media is to create a context for the dissemination of news and information, both true and false, in this article, the classification of strategies for spreading fake news in social networks was investigated in social events. To achieve this goal, thematic analysis research method was chosen. In this way, an extensive library study was first conducted in global sources. Then, an in-depth interview was conducted with 18 well-known specialists and experts in the field of news and media in Iran. These experts were selected by purposeful sampling. Then by analyzing the data using the theme analysis method, strategies were obtained; The strategies achieved so far (research is in progress) include unrealistically strengthening/weakening the speed and content of the event, stimulating psycho-media movements, targeting emotional audiences such as women, teenagers and young people, strengthening public hatred, calling the reaction legitimate/illegitimate. events, incitement to physical conflict, simplification of violent protests and targeted publication of images and interviews were introduced.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 44
540 Investigation on Single Nucleotide Polymorphism in Candidate Genes and Their Association with Occurrence of Mycobacterium avium Subspecies Paratuberculosis Infection in Cattle

Authors: Ran Vir Singh, Anuj Chauhan, Subhodh Kumar, Rajesh Rathore, Satish Kumar, B Gopi, Sushil Kumar, Tarun Kumar, Ramji Yadav, Donna Phangchopi, Shoor Vir Singh

Abstract:

Paratuberculosis, caused by Mycobacterium avium subspecies paratuberculosis (MAP), is a chronic granulomatous enteritis affecting ruminants and is responsible for significant economic losses in the livestock industry worldwide. The organism is also of public health concern owing to an unconfirmed link to Crohn’s disease. Susceptibility to paratuberculosis has been suggested to have a genetic component with low to moderate heritability, and a number of SNPs in various candidate genes have been observed to affect susceptibility to the disease. The objective of this study was to explore the association of various SNPs in candidate genes and a QTL region with MAP. A total of 117 SNPs from SLC11A1, IFNG, CARD15, TLR2, TLR4, CLEC7A, CD209, SP110, ANKRA2, and PGLYRP1, plus one QTL, were selected for study. A total of 1222 cattle from various organized herds, gaushalas, and farmer herds were screened for MAP infection by Johnin intradermal skin test, AGID, serum ELISA, fecal microscopy, fecal culture, and IS900 blood PCR. Based on the results of these tests, a case population of 200 and a control population of 183 were established. The 117 SNPs were validated/tested in this case and control population by the PCR-RFLP technique, and the data were analyzed using SAS 9.3 software. Statistical analysis revealed that 107 of the 117 SNPs were not significantly associated with the occurrence of MAP. Only SNP rs55617172 of TLR2, rs8193046 and rs8193060 of TLR4, rs110353594 and rs41654445 of CLEC7A, rs208814257 of CD209, rs41933863 of ANKRA2, two loci {SLC11A1 (53C/G)} and {IFNG (185 G/r)}, and SNP rs41945014 in the QTL region were significantly associated with MAP. Six of the 10 significant SNPs, viz. rs110353594 and rs41654445 from CLEC7A, rs8193046 and rs8193060 from TLR4, rs109453173 from SLC11A1, and rs208814257 from CD209, were validated in a new case and control population.
Of these, only one SNP, rs8193046 of the TLR4 gene, was found to be significantly associated with the occurrence of MAP in cattle. The odds ratio indicates that animals with the AG genotype were more susceptible to MAP, a finding in accordance with an earlier report. Hence it reaffirms that the AG genotype can serve as a reliable genetic marker for identifying more susceptible cattle in future selection against MAP infection in cattle.
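The case-control comparison above rests on a 2x2 odds ratio. A minimal sketch of how such an odds ratio and its 95% confidence interval (Woolf logit method) could be computed is shown below; the genotype counts are hypothetical illustrations, not the study's actual data:

```python
import math

def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """Odds ratio for a 2x2 case-control table, with a 95% CI (Woolf method)."""
    or_ = (case_exposed * control_unexposed) / (case_unexposed * control_exposed)
    # Standard error of log(OR): sqrt of the summed reciprocal cell counts
    se = math.sqrt(1 / case_exposed + 1 / case_unexposed
                   + 1 / control_exposed + 1 / control_unexposed)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical AG-genotype counts among 200 cases and 183 controls
or_, ci = odds_ratio(120, 80, 70, 113)
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```

An odds ratio above 1 with a confidence interval excluding 1 would support the reported association of the AG genotype with susceptibility.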

Keywords: SNP, candidate genes, paratuberculosis, cattle

Procedia PDF Downloads 332
539 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia

Authors: Sridhar A. Malkaram, Tamer E. Fandy

Abstract:

Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by the two drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with 500 nM of either DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer’s instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified using diffReps v1.55.4, with input samples used for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions.
Ten genes showed H3K9 acetylation changes in common between the two drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and for the 99 genes showing an H3K9 acetylation decrease after 5AC and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different, with more changes observed after 5AC treatment than after DC. The impact of these changes on gene expression and on the clinical efficacy of these drugs requires further investigation.
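The read-filtering criteria described above (average Phred quality >= 20, length >= 35 bp) can be sketched as a simple pass/fail check; in practice this step is delegated to Trim Galore and cutadapt as stated, and the reads and helper names here are purely illustrative:

```python
def mean_phred(quality_string, offset=33):
    """Average Phred score of a read, decoded from its ASCII quality string
    (Sanger/Illumina 1.8+ encoding: score = ord(char) - 33)."""
    return sum(ord(c) - offset for c in quality_string) / len(quality_string)

def passes_filters(seq, qual, min_len=35, min_mean_q=20):
    """Keep a read only if it is long enough and of sufficient average quality."""
    return len(seq) >= min_len and mean_phred(qual) >= min_mean_q

# Hypothetical 40-bp reads: 'I' encodes Phred 40, '#' encodes Phred 2
good = passes_filters("A" * 40, "I" * 40)   # high-quality read, kept
bad = passes_filters("A" * 40, "#" * 40)    # low-quality read, discarded
print(good, bad)
```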

Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics

Procedia PDF Downloads 124
538 Glycosaminoglycan, a Cartilage Erosion Marker in Synovial Fluid of Osteoarthritis Patients Strongly Correlates with WOMAC Function Subscale

Authors: Priya Kulkarni, Soumya Koppikar, Narendrakumar Wagh, Dhanshri Ingle, Onkar Lande, Abhay Harsulkar

Abstract:

Cartilage has an extracellular matrix composed of aggrecan, which imparts great tensile strength, stiffness, and resilience. Disruption of cartilage metabolism leading to progressive degeneration is a characteristic feature of osteoarthritis (OA). The process involves enzymatic depolymerisation of cartilage-specific proteoglycan, releasing free glycosaminoglycan (GAG). GAG released into the synovial fluid (SF) of the knee joint serves as a direct measure of cartilage loss, although its use is limited by the invasive nature of SF collection. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) is widely used for assessing pain, stiffness, and physical function in OA patients. The scale comprises three subscales, namely pain, stiffness, and physical function, and is intended to measure the patient’s perspective of disease severity as well as the efficacy of prescribed treatment. Twenty SF samples obtained from OA patients were analysed for their GAG values using a DMMB-based assay. The LK 1.0 vernacular version was used to administer the WOMAC scale. The results were evaluated for statistical significance using SAS University software (Edition 1.0). All OA patients revealed higher GAG values than the control value of 78.4±30.1 µg/ml (obtained from our non-OA patients). The average WOMAC score was 51.3, while the pain, stiffness, and function subscale scores were 9.7, 3.9, and 37.7, respectively. Interestingly, a strong statistical correlation was established between the WOMAC function subscale and GAG (p = 0.0102). This subscale is based on day-to-day activities such as stair use, bending, walking, getting in or out of a car, and rising from bed. The pain and stiffness subscales, however, did not correlate with any of the studied markers, reflecting the atypical inflammation of OA pathology. On one side, knee pain showed poor correlation with GAG; on the other, radiography is often noted to be insensitive to cartilage degenerative changes, so OA remains undiagnosed for long.
Moreover, the phase of active cartilage degradation remains elusive to both patient and clinician. Through analysis of a large number of OA patients, we have established a close association between Kellgren-Lawrence grades and increased cartilage loss. A direct attempt to correlate WOMAC and the radiographic progression of OA with various biomarkers has not been made so far. We found a good correlation between GAG levels in SF and the function subscale.
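The reported association between SF GAG and the WOMAC function subscale is a standard bivariate correlation. A minimal sketch of the calculation is given below; the paired values are hypothetical illustrations, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations: SF GAG (ug/ml) vs. WOMAC function subscale
gag = [95, 120, 150, 180, 210, 260]
function = [22, 28, 33, 40, 47, 55]
r = pearson_r(gag, function)
print(round(r, 3))
```

A coefficient near +1 with a small p-value, as in the study, would indicate that higher cartilage-erosion marker levels track worsening physical function.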

Keywords: cartilage, glycosaminoglycan, synovial fluid, Western Ontario and McMaster Universities Arthritis Index

Procedia PDF Downloads 425
537 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges; aircraft lifting surfaces and turbine blades can fail due to oscillations caused by it. Hence, the present research focuses on fluid-structure interaction. First, the free-vibration behaviour of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm, and a thickness of 2 mm. The effects of stiffener cross-sectional area and stiffener location are then studied for the same panel, with the stiffener spacing varied along both the chordwise and spanwise directions. For the optimal location, the ideal stiffener length is identified, and the effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity is examined. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which in turn changes the flow structure over it. With increasing velocity the deformation grows, but the stiffness of the system tends to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and the panel loses its equilibrium. This critical velocity is estimated in NASTRAN using the PK method. The first 10 modal frequencies of the simple and stiffened panels are estimated numerically and validated against the open literature. A grid-independence study is also carried out; the modal frequencies remain unchanged for element lengths below 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement.
The maximum flutter velocity achieved with chordwise placement is 204 m/s, while with a spanwise arrangement it is augmented to 963 m/s for stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of chord from either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed as the stiffener length varies from 50% to 60% of the span; a maximum flutter velocity above Mach 3 is thus achieved. It is also observed that, for a stiffened panel, the full effect of the stiffener is obtained only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, 2.3 times that of the simple panel.

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional shape

Procedia PDF Downloads 272
536 Strategic Public Procurement: A Lever for Social Entrepreneurship and Innovation

Authors: B. Orser, A. Riding, Y. Li

Abstract:

To inform government about how gender gaps in SME (small and medium-sized enterprise) contracting might be redressed, the research question was: what are the key obstacles to, and response strategies for, increasing the engagement of women business owners among SME suppliers to the Government of Canada? Thirty-five interviews were conducted with senior policymakers, supplier diversity organization executives, and expert witnesses to the Canadian House of Commons Standing Committee on Government Operations and Estimates, and the qualitative data were analysed using NVivo 11 software. High-order response categories included: (a) SME risk mitigation strategies, (b) SME procurement program design, and (c) performance measures. The primary obstacles cited were government red tape and long, complicated requests for proposals (RFPs); the majority of 'common' complaints occur when SMEs have questions about the federal procurement process. Witness responses included the use of outcome-based rather than prescriptive procurement practices, more agile procurement, simplified RFPs, and making payment within 30 days a procurement priority. Risk mitigation strategies included the provision of procurement officers to assess risks and opportunities for businesses and the development of more agile procurement procedures and processes. Recommendations to enhance program design included: improved definitional consistency of qualifiers and selection criteria; better coordination across agencies; clarification of how SME suppliers benefit from federal contracting; goal setting; specification of the categories most suitable for women-owned businesses; and increasing primary contractors' awareness of the importance of subcontract relationships. Recommendations also included third-party certification of eligible firms and the need to enhance SMEs’ financial literacy to reduce financial errors.
Finally, there remains a need for clear and consistent pre-program statistics to establish baselines (by sector and issuing department), performance measures, targets based on the percentage of contracts granted, contract value, percentage of target employees (women, Indigenous), and community benefits, including the hiring of local employees. The study advances strategies to enhance federal procurement programs so as to facilitate socio-economic policy objectives.

Keywords: procurement, small business, policy, women

Procedia PDF Downloads 96
535 Neurodiversity in Post Graduate Medical Education: A Rapid Solution to Faculty Development

Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel

Abstract:

Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability/diversity in medical education and of the associated impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned about understanding neurodiversity and about identifying and supporting neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams for insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. The group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: The survey highlighted marked heterogeneity in the identification, assessment, and approaches to support and management of neurodivergent trainees, revealed a 'deficit' approach to neurodiversity, and demonstrated a paucity of educational and protocol resources for educators and supervisors supporting neurodivergent trainees. Discussion and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees was formalised using a thematic approach, guided by our neurodiversity-specific survey findings and structured around a triple-'A' approach: awareness, assessment, and action.
This is further supported by video material incorporating stories from training, as well as mobile workshops for trainers for more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods toward a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodivergent conditions; this is complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodivergent condition is suspected or diagnosed. 3. A carefully constituted and focused Task and Finish group can rapidly identify national inconsistencies in neurodiversity provision and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift that accepts 'diversity' in all its manifestations, visible and hidden.

Keywords: neurodiversity, professional support, human considerations, workplace wellbeing

Procedia PDF Downloads 76
534 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts are produced at the micro- and nanoscale, a trend that seems likely to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It thus becomes possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem without a unique solution in industrial environments, and researchers in the field of dimensional metrology all around the world are working on this issue. A solution suitable for industrial environments, even if incomplete, would enable working with some level of traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared with measurements performed using coordinate measuring machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in the metrology and quality laboratories of industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light, so that when light reaches the detector it is possible to take pictures of the part of the surface that is in focus. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
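The Ra parameter mentioned above is the arithmetic mean deviation of the measured profile z(x) from its mean line. A minimal sketch of that definition is shown below; the sampled profile heights are hypothetical:

```python
def roughness_ra(profile):
    """Arithmetic mean roughness Ra: mean absolute deviation of the
    sampled profile heights z(x) from their mean line."""
    mean_line = sum(profile) / len(profile)
    return sum(abs(z - mean_line) for z in profile) / len(profile)

# Hypothetical sampled profile heights in micrometers
z = [0.2, -0.1, 0.4, -0.3, 0.1, -0.3]
print(round(roughness_ra(z), 4))
```

Tracing Ra to a reference then amounts to comparing the value computed from calibrated-axis coordinates against a certified roughness standard.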

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 127
533 Blended Cloud Based Learning Approach in Information Technology Skills Training and Paperless Assessment: Case Study of University of Cape Coast

Authors: David Ofosu-Hamilton, John K. E. Edumadze

Abstract:

Universities have come to recognize the role that Information and Communication Technology (ICT) skills play in the daily activities of tertiary students. The ability to use ICT – essentially, computers and their diverse applications – is an important resource that influences an individual’s economic and social participation and human capital development. Our society now increasingly relies on the Internet and the Cloud as means to communicate and disseminate information, and the educated individual should therefore be able to use ICT to create and share knowledge that will improve society. It is thus important that universities require incoming students to demonstrate a level of computer proficiency, or train them to do so at minimal cost by deploying advanced educational technologies. The training and standardized assessment of all incoming first-year students of the University of Cape Coast in Information Technology Skills (ITS) have become a necessity, as students more often than not highly overestimate their digital skills, and digital ignorance is costly to any economy. The one-semester course is targeted at fresh students and aimed at enhancing their productivity and software skills. Emphasis is placed on skills that will enable students to use Microsoft Office and Google Apps for Education proficiently in their academic and future professional work, while using emerging digital multimedia technologies in a safe, ethical, responsible, and legal manner. The course is delivered in blended mode – online and self-paced (student-centered) – using Alison’s free cloud-based tutorial (Moodle) of Microsoft Office videos. Online support is provided via discussion forums on the University’s Moodle platform, and tutor-directed and assisted sessions at the ICT Centre and the Google E-learning laboratory.
All students are required to register for the ITS course during either the first or second semester of the first year and must participate in and complete it within a semester. Assessment comprises the Alison online assessment on Microsoft Office, the Alison online assessment on ALISON ABC IT, peer assessment of an e-portfolio created using Google Apps/Office 365, and an end-of-semester online assessment at the ICT Centre, taken whenever the student is ready in the course of the semester. This paper, therefore, focuses on this digital-culture approach of hybrid teaching, learning, and paperless examinations, and its possible adoption by other courses or programs at the University of Cape Coast.

Keywords: assessment, blended, cloud, paperless

Procedia PDF Downloads 232
532 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste

Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton

Abstract:

Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This led to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the ongoing pharmaceutical stock-outs? The study examined factors that influence the pharmaceutical supply chain system. Methodology: The Privett and Goncalvez dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three management levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted to collect information on inventory management, the logistics management information system (LMIS), and the provision of pharmaceuticals. Andersen’s behavioural model of healthcare utilization also informed the interview schedule, specifically factors linked to the environment (the healthcare system and external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency, and a local non-governmental organization were interviewed on their perceptions of the factors (healthcare system/supply chain and the wider environment) impacting stock-outs. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon’s critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties in inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and a lack of capacity in inventory management, monitoring, and supervision are additional organizational factors that contributed to the issue.
Various methods of quantification of pharmaceuticals were applied in the government sector and by non-governmental organizations. Lack of reliable data is one of the major problems in pharmaceutical provision; the Global Fund has the best quantification methods, fed by consumption data and malaria case counts. Other issues worsen stock-outs: political intervention, work ethic, and basic infrastructure, such as unreliable internet connectivity. Major issues impacting pharmaceutical quantification have thus been identified. However, the data collected so far identify limitations within the Andersen model, specifically a failure to take account of predictors in the healthcare system and the environment (culture/politics/social factors). The next steps are to (a) compare the models used by the three non-governmental agencies with the government model; (b) run the Andersen explanatory model for pharmaceutical expenditure on 2 to 5 drug items used by these three development partners, to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw conclusions about the strengths of each model.
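Consumption-based quantification of the kind attributed above to the Global Fund typically scales dispensing data to an average monthly consumption, adjusted for the days an item was stocked out, and then projects needs forward. The sketch below illustrates that general approach only; the function names, the 30-day month scaling, and all figures are assumptions for illustration, not methods or data from the study:

```python
def adjusted_monthly_consumption(issued_units, days_in_period, stockout_days):
    """Average monthly consumption, adjusted for days the item was out of
    stock (consumption is scaled up to what a fully stocked month would use)."""
    days_available = days_in_period - stockout_days
    return issued_units / days_available * 30  # scale to a 30-day month

def forecast_need(amc, months_to_cover, stock_on_hand):
    """Quantity to procure so that stock covers the next `months_to_cover`
    months, net of what is already on hand."""
    return max(0, amc * months_to_cover - stock_on_hand)

# Hypothetical item: 900 units issued over 90 days, 15 of them stocked out
amc = adjusted_monthly_consumption(900, 90, 15)
order_qty = forecast_need(amc, 6, 500)
print(amc, order_qty)
```

Without the stock-out adjustment, consumption during stock-out periods would be undercounted, systematically reproducing the shortage, which is one reason unreliable data feeds the stock-out cycle described above.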

Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management

Procedia PDF Downloads 206