Search results for: role of technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16525

1015 A Preliminary Randomized Controlled Trial of Pure L-Ascorbic Acid Using Needle-Free and Micro-Needle Mesotherapy as an Anti-Aging Procedure

Authors: M. Zasada, A. Markiewicz, A. Erkiert-Polguj, E. Budzisz

Abstract:

The epidermis is a keratinized stratified squamous epithelium covered by a hydro-lipid barrier; active substances must therefore be able to penetrate this hydro-lipid coating. L-ascorbic acid is a vitamin that plays an important role in stimulating fibroblasts to produce collagen type I and in lightening hyperpigmentation. Vitamin C is a water-soluble antioxidant that protects skin from oxidative damage and rejuvenates photoaged skin. No-needle mesotherapy is a non-invasive rejuvenation technique relying on electric pulses, electroporation, and ultrasound; these physical factors result in deeper penetration of cosmetics. Increasing the penetration of L-ascorbic acid is important because it broadens the spectrum of its activity. The aim of the work was to assess the effectiveness of pure L-ascorbic acid in anti-aging therapy using needle-free and micro-needling mesotherapy. The study was performed on a group of 35 healthy volunteers in accordance with the Declaration of Helsinki of 1964 and with the approval of the Ethics Commission (no. RNN/281/16/KE, 2017). Women were randomized to a mesotherapy or a control group. The control group topically applied 2.5 ml of serum containing 20% L-ascorbic acid with strawberry hydrate every 10 days for a period of 9 weeks. In the mesotherapy group, no-needle mesotherapy was performed on the left half of the face and micro-needling on the right, using the same serum. The pH of the serum was 3.5-4, and the serum was prepared directly prior to the facial treatment. The skin parameters were measured at the beginning and before each treatment. The forehead skin was measured using a Cutometer® (skin elasticity and firmness), a Corneometer® (skin hydration), and a Mexameter® (skin tone); photographs were also taken with the Fotomedicus system. Additionally, the volunteers completed a questionnaire. The serum was tested for microbiological purity and for stability after opening. During the study, all volunteers were under the care of a dermatologist. Regular application of the serum improved the skin parameters: improvements in hydration and elasticity were seen after 4 and 8 weeks, respectively (Corneometer® and Cutometer® results), and the number of hyperpigmented spots decreased (Mexameter®). After 8 weeks, the volunteers reported that the tested product had smoothing and moisturizing properties, and subjective opinions indicated significant improvement of skin color and elasticity. The product containing L-ascorbic acid used with intercellular penetration promoters demonstrates higher anti-aging efficiency than the control. In vivo studies confirmed the effectiveness of the serum and the impact of the active substance on skin firmness and elasticity, degree of hydration, and skin tone. Mesotherapy with pure L-ascorbic acid provides better diffusion of active substances through the skin.
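
The repeated before/after instrument readings described above lend themselves to a paired pre/post comparison for each skin parameter. Below is a minimal sketch of such an analysis; the variable names and the example values are hypothetical illustrations, not data from the study.

```python
# Hypothetical sketch of a paired pre/post comparison for one skin parameter.
# The readings below are illustrative placeholders, not data from the study.
from scipy import stats

# Corneometer-style hydration readings for the same volunteers, before and after 8 weeks
before = [42.1, 38.5, 45.0, 40.2, 43.7, 39.9]
after = [48.3, 44.1, 49.8, 45.0, 47.6, 46.2]

t_stat, p_value = stats.ttest_rel(before, after)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")     # p < 0.05 -> significant change
```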

Keywords: anti-aging, l-ascorbic acid, mesotherapy, promoters

Procedia PDF Downloads 257
1014 Production of Functional Crackers Enriched with Olive (Olea europaea L.) Leaf Extract

Authors: Rosa Palmeri, Julieta I. Monteleone, Antonio C. Barbera, Carmelo Maucieri, Aldo Todaro, Virgilio Giannone, Giovanni Spagna

Abstract:

In recent years, considerable interest has been shown in the functional properties of foods, in which phenolic compounds, able to scavenge free radicals, play a relevant role. A more sustainable agriculture has to emerge to guarantee food supply over the next years. Wheat, corn, and rice are the most commonly cultivated cereals, but other cereal species, such as barley, can be appreciated for their peculiarities. Barley (Hordeum vulgare L.) is a C3 winter cereal that shows high resistance to drought and salt stress. There is growing interest in barley as an ingredient for the production of functional foods due to its high content of phenolic compounds and beta-glucans. In this respect, the possibility of separating specific functional fractions from food industry by-products looks very promising. Olive leaves represent a quantitatively significant by-product of olive grove farming and are an interesting source of phenolic compounds. In particular, oleuropein, which provides important nutritional benefits, is the main phenolic compound in olive leaves and ranges from 17% to 23% depending upon the cultivar and growing season. Together with oleuropein and its derivatives (e.g., dimethyloleuropein, oleuropein diglucoside), olive leaves further contain tyrosol, hydroxytyrosol, and a series of secondary metabolites structurally related to them: verbascoside, ligstroside, hydroxytyrosol glucoside, tyrosol glucoside, oleuroside, oleoside-11-methyl ester, and nuzhenide. Several flavonoids, flavonoid glycosides, and phenolic acids have also been described in olive leaves. The aim of this work was the production of functional foods with a higher content of polyphenols and the evaluation of their shelf life. Organic durum wheat and barley grains, which contain higher levels of phenolic compounds, were used for the production of crackers. Olive leaf extract (OLE) was obtained from cv. ‘Biancolilla’ by an aqueous extraction method. Two baking trials were performed, with both organic durum wheat and barley flours, adding olive leaf extract. Control crackers, made for comparison, were produced with the same formulation, replacing OLE with water. Total phenolic content, moisture content, water activity, and textural properties at different storage times were determined to evaluate the shelf life of the products. Our preliminary results showed that the enriched crackers had higher phenolic content and antioxidant activity than the control. Olive leaf extract could be a good candidate as a functional ingredient in cracker production, because bakery items are consumed daily and have a long shelf life.

Keywords: barley, functional foods, olive leaf, polyphenols, shelf life

Procedia PDF Downloads 292
1013 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solutions in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been observed that these IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. The problem is thus the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs cause in WSNs. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs stem from researchers and developers not following structured software development processes when building them, resulting in inadequate requirement management and in poor validation and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs, examines a set of existing IDSs, and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.
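
The gap the authors describe, requirements that are never implemented or traced, can be made concrete with a simple traceability check. The sketch below is illustrative only; the requirement IDs, module names, and mapping structure are hypothetical and not taken from the paper.

```python
# Hypothetical traceability check: flag IDS requirements that have no
# implementing module or verifying test. All IDs and mappings are illustrative.
requirements = {
    "R1": "detect flooding attacks",
    "R2": "detect selective forwarding",
    "R3": "limit energy overhead of detection",
}

implemented_by = {"R1": "flood_monitor.c"}   # R2 and R3 were never implemented
verified_by = {"R1": "test_flood_rates"}     # no tests exist for R2 and R3

for req_id, text in requirements.items():
    missing = []
    if req_id not in implemented_by:
        missing.append("implementation")
    if req_id not in verified_by:
        missing.append("verification")
    if missing:
        print(f"{req_id} ({text}): missing {', '.join(missing)}")
```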

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 312
1012 Problems and Prospects of Protection of Historical Buildings as a Cornerstone of Cultural Policy for International Collaboration in the New Era: A Study of Fars Province, Iran

Authors: Kiyanoush Ghalavand, Ali Ferydooni

Abstract:

Fars province (Fārs or Pārs) is a vast land located in the southwest of Iran. All over the province, the glory of ancient Iranian culture and civilization can be seen and felt, and there are many monuments from the pre-historical to the Islamic era. The ancient cultural and historical monuments in Fars province, including the historical complex of Persepolis, the tombs of the Persian poets Hafez and Saadi, and dozens of other valuable cultural and historical works, make this province a symbol of Iranian national identity and a manifestation of its transcendent cultural values. Fars province is quintessentially Persian. Its name is the modern version of ancient Parsa, the homeland, if not the place of origin, of the Persians, one of the great powers of antiquity. From here, the Persian Empire ruled much of Western and Central Asia, receiving ambassadors and messengers at Persepolis. It was here that the Persian kings were buried, both in the mountain behind Persepolis and in the rock face of nearby Naqsh-e Rustam. In the age of technology, Iran faces a complex paradox between Persian and Islamic ideology. The main purpose of the present article is to identify and describe the problems and prospects of the origin and development of the modern approach to the conservation and restoration of ancient monuments and historic buildings, the influence that this development has had on international collaboration in the protection and conservation of cultural heritage, and its present consequences worldwide. The definition of objects and structures of the past as heritage, and the policies related to their protection, restoration, and conservation, have evolved together with modernity and are currently recognized as an essential part of the responsibilities of modern society. Since the eighteenth century, the goal of this protection has been defined as the cultural heritage of humanity; gradually this has come to include not only ancient monuments and past works of art but even entire territories, for a variety of new values generated in recent decades. In its medium-term program of 1989, UNESCO defined the full scope of such heritage: the cultural heritage may be defined as the entire corpus of material signs, either artistic or symbolic, handed on by the past to each culture and, therefore, to the whole of humankind. As a constituent part of the affirmation and enrichment of cultural identities, and as a legacy belonging to all humankind, the cultural heritage gives each particular place its recognizable features and is the storehouse of human experience. The preservation and presentation of the cultural heritage are therefore a cornerstone of any cultural policy. The process from which these concepts and policies have emerged has been identified as the ‘modern conservation movement’.

Keywords: tradition, modern, heritage, historical building, protection, cultural policy, fars province

Procedia PDF Downloads 150
1011 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The ways e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms for sharing information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts in an efficient and quick manner. Consumers can become informed by the impressions that other, experienced consumers share and spread; in other words, consumers may familiarize themselves with the contents of contracts through the experiences that other consumers have had. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts, on the one hand, and how firms actually treat consumers, on the other. Interestingly, the Gap frequently manifests as deviation from the written contract in favor of consumers; firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of firms’ suspicious behaviors. At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts; naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that should not be second-guessed. However, at times online tools, firms’ behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers' understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 187
1010 The Rise and Effects of a Social Movement on Ethnic Relations in Malaysia: The Bersih Movement as a Case Study

Authors: Nur Rafeeda Daut

Abstract:

The significance of this paper is to provide insight into the role of social movements in building stronger ethnic relations in Malaysia. In particular, it focuses on how the BERSIH movement has been able to bring together the different ethnic groups in Malaysia to resist the present political administration, which is seen to manipulate the electoral process and suppress Malaysians' basic freedom of expression. Attention is given to how and why this group emerged and to its mobilisation strategies. Malaysia, a multi-ethnic and multi-religious society, gained its independence from the British in 1957. Like many other new nations, it faces the challenges of nation building and governance. From economic issues to racial and religious tension, Malaysia is experiencing high levels of corruption and income disparity among the different ethnic groups, and its political parties are divided along ethnic lines. BERSIH, which translates as ‘clean’, is a movement which seeks to reform the current electoral system in Malaysia to ensure equality, justice, and free and fair elections. It was originally formed in 2007 as a joint committee that comprised leaders from political parties, civil society groups, and NGOs. In April 2010, the coalition developed into an entirely civil society movement unaffiliated with any political party. BERSIH claimed that the electoral roll in Malaysia had been marred by fraud and other irregularities. In 2015, the BERSIH movement organised its biggest rally in Malaysia, alongside 38 other rallies held internationally. BERSIH supporters who participated in the demonstration came from all the different ethnic groups in Malaysia. In this paper, two social movement theories are used, resource mobilization theory and political opportunity structure, to explain the emergence and mobilization of the BERSIH movement in Malaysia. Based on these two theories, corruption, which is believed to have contributed to the income disparity among Malaysians, generated the development of this movement. The rise of re-islamisation values propagated by certain groups in Malaysia and the shift in political leadership also created political opportunities for this movement to emerge. In line with political opportunity structure theory, the BERSIH movement will continue to create more opportunities for the empowerment of civil society and the unity of ethnic relations in Malaysia. A comparison is made of the degree of ethnic unity in the country before and after BERSIH was formed. This includes analysing the level of re-islamisation values and the level of corruption in relation to economic income under the premiership of the former Prime Minister Mahathir and the present Prime Minister Najib Razak. The country has never seen an uprising like BERSIH, where ethnic groups which over the years have been divided by ethnically based political parties and economic disparity joined together with a common goal of equality and fair elections. As such, the BERSIH movement is a unique case that illustrates the changing political landscape, ethnic relations, and civil society in Malaysia.

Keywords: ethnic relations, Malaysia, political opportunity structure, resource mobilization theory and social movement

Procedia PDF Downloads 333
1009 An Approach to Determine In-Transit Vibration of Fresh Produce Using Long Range Radio (LoRa) Wireless Transducers

Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei

Abstract:

Ever-increasing consumer demand for quality fresh produce has placed a multi-fold burden on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially with the expansion of supply chains physically extending to thousands of miles. Vibration damage in transit was identified as a specific area of focus, as it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, but it has been a challenge to collect vibration impact data continuously due to limitations in the battery life or memory capacity of the devices; study samples were therefore limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach which can extend the capacity and ability to determine vibration signals over the whole transit passage would contribute to accurately analyzing vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, capable of measuring in-transit vibration continuously throughout the transit passage, subject to a minimum acceleration threshold (0.1 g). A system consisting of six tri-axial vibration transducers, installed in different locations inside the cargo (produce) pallets in the truck, transmits vibration signals through LoRa (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location. This method conserves the power of the portable transducers, maximizing their capability to measure vibration impacts over transit passages extending to days in the distribution process. Trial tests conducted using the approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability makes it possible to identify the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility, and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. More broadly, the approach could be used to determine vibration impacts not only for fresh produce but for products in any supply chain, whether in transit for a few hours or several days.
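
The mechanism described records tri-axial samples only when acceleration exceeds the 0.1 g threshold, which is what preserves battery and memory over multi-day journeys. A minimal sketch of such threshold gating follows; the sensor-reading and radio-send functions are hypothetical stand-ins, not the actual device firmware.

```python
# Minimal sketch of threshold-gated vibration logging (0.1 g), as described
# in the abstract. read_accel() and lora_send() are hypothetical stubs.
import math
import time

THRESHOLD_G = 0.1  # minimum acceleration magnitude worth recording


def read_accel():
    """Stub for a tri-axial accelerometer read; returns (x, y, z) in g."""
    return (0.02, -0.01, 0.15)


def lora_send(payload):
    """Stub for transmitting a record to the central device over LoRa."""
    print("sent:", payload)


for _ in range(1000):                    # sampling loop (bounded for the sketch)
    x, y, z = read_accel()
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude >= THRESHOLD_G:         # gate: skip quiet periods entirely
        lora_send({"t": time.time(), "g": round(magnitude, 3)})
    time.sleep(0.01)                     # ~100 Hz sampling
```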

Keywords: post-harvest, supply chain, wireless transducers, LoRa, fresh produce

Procedia PDF Downloads 253
1008 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumer acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses, and different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat content of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, thresholding of the corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure resulted in good fat segmentation, making this simple visual approach to quantifying the different fat fractions in dry-cured ham slices sufficiently accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will then be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
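
The pipeline described, grey-scale conversion, noise reduction, Canny edge detection, and area percentages, can be sketched with standard image libraries. The snippet below is a simplified illustration using OpenCV, not the authors' implementation; the file name, threshold parameters, and the assumption that fat pixels are brighter than lean tissue are ours.

```python
# Simplified sketch of the described pipeline using OpenCV; the file name,
# Canny thresholds, and fat/lean intensity assumption are illustrative.
import cv2
import numpy as np

img = cv2.imread("ham_slice.png")                   # hypothetical scanned slice
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)         # noise reduction step

edges = cv2.Canny(blurred, 50, 150)                 # edge enhancement step

# Assume fat appears brighter than lean tissue; Otsu picks the split point.
_, fat_mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# For simplicity this uses the whole image as the slice area; a real pipeline
# would first segment the slice from the scanner background.
total_fat_pct = 100.0 * np.count_nonzero(fat_mask) / gray.size
print(f"total fat fraction: {total_fat_pct:.1f}% of slice area")
```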

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 166
1007 Servant Leadership and Organisational Climate in South African Private Schools: A Qualitative Study

Authors: Christo Swart, Lidia Pottas, David Maree

Abstract:

Background: It is undeniable that the South African educational system finds itself in a profound crisis and that traditional school leadership styles are outdated and hinder quality education. New thinking is mandatory to improve the status quo, and school leadership has an immense role to play in improving the current situation. It is believed that the servant leadership paradigm, when practiced by school leadership, may have a significant influence on the school environment in its totality. This study investigates the private school segment in search of constructive answers to assist with the educational crisis in South Africa. It is assumed that where school leadership can foster a supportive and empowering environment for teachers to engage constructively in their teaching and learning activities, many challenges facing the school system may be overcome in a productive manner. Aim: The aim of this study is fourfold: to outline the constructs of servant leadership (SL) which are perceived by teachers of private schools as priorities for enhancing a successful school environment; to describe the constructs of organizational climate (OC) which are observed by teachers of private schools as priorities for enhancing a successful school environment; to investigate whether the participants perceived a link between the constructs of servant leadership and organizational climate; and to consider the process to be followed to introduce the constructs of SL and OC to the school system in general, as perceived by participants. Method: This study utilized a qualitative approach to explore the mediation between school leadership and the organizational climate in private schools in the search for amicable answers. The participants were purposefully selected for the study. Focus group interviews were held with participants from primary and secondary schools, and a focus group discussion was conducted with principals of both primary and secondary schools. The interview data were transcribed and analyzed, and identical patterns of coded data were grouped together under emerging themes. Findings: It was found that the practice of servant leadership by school leadership indeed mediates a constructive and positive school climate. The constructs of empowerment, accountability, humility, and courage, interlinking with one another, are the servant leadership concepts that teachers of private schools perceive as priorities for school leadership in enhancing a successful school environment. It was confirmed that the groupings of training and development, communication, trust, and work environment are perceived by teachers of private schools as prominent features of organizational climate, as practiced by school leadership, for augmenting a successful school environment. It can be concluded that the participants perceived several links between the constructs of servant leadership and organizational climate that encourage a constructive school environment, and that there is definite positive consideration and motivation for the two concepts to be introduced to the school system in general. It is recommended that school leadership mentor and guide teachers to take ownership of the constructs of servant leadership as well as organizational climate, and that public schools be researched with a view to implementing the two paradigms. The study also suggests that aspirant teachers be exposed to leadership and organizational paradigms during their studies at university.

Keywords: empowering environment for teachers and learners, new thinking required, organizational climate, school leadership, servant leadership

Procedia PDF Downloads 205
1006 The Intensity of Root and Soil Respiration Is Significantly Determined by the Organic Matter and Moisture Content of the Soil

Authors: Zsolt Kotroczó, Katalin Juhos, Áron Béni, Gábor Várbíró, Tamás Kocsis, István Fekete

Abstract:

Soil organic matter plays an extremely important role in the functioning and regulation processes of ecosystems. It follows that the C content of organic matter in soil is one of the most important indicators of soil fertility. Part of the carbon stored in soils is returned to the atmosphere during soil respiration, and climate change and inappropriate land use can accelerate these processes. Our work aimed to determine how soil CO₂ emissions change over ten years as a result of organic matter manipulation treatments. These treatments allowed us to examine not only the effects of different organic matter inputs but also the effects of the different microclimates that arise as a result of the treatments. We carried out our investigations in the area of the Síkfőkút DIRT (Detritus Input and Removal Treatment) Project. The research area is located in the southern, hilly landscape of the Bükk Mountains, northeast of Eger (Hungary). GPS coordinates of the project: 47°55′34′′ N and 20°26′29′′ E, altitude 320-340 m. The soil of the area is Luvisols. The 27-hectare protected forest area is now under the supervision of the Bükki National Park. The experimental plots in Síkfőkút were established in 2000. We established six litter manipulation treatments, each with three 7×7 m replicate plots established under complete canopy cover: two detritus addition treatments (Double Wood, DW, and Double Litter, DL), three treatments in which detritus inputs were removed (No Litter, NL; No Roots, NR; and No Inputs, NI), and the Controls (Co). After the establishment of the plots, during the drier periods, the NR and NI treatments showed the highest CO₂ emissions. In the first few years, the effect of this process was evident because, due to the lack of living vegetation, evapotranspiration on the NR and NI plots was much lower, and transpiration practically ceased on these plots. In the wetter periods, the NL and NI treatments showed the lowest soil respiration values, which were significantly lower than those of the Co, DW, and DL treatments. Due to the lower organic matter content and the lack of surface litter cover, the water storage capacity of these soils was significantly limited; therefore, we measured the lowest average moisture content among the treatments after ten years. Soil respiration is significantly influenced by temperature, and the supply of nutrients to soil microorganisms, determined here by the litter inputs dictated by the treatments, is also a determining factor. In dry soils with a moisture content of less than 20% in the initial period, soil respiration in the litter removal treatments showed a strong correlation with soil moisture (r=0.74). In very dry soils, a small increase in moisture does not cause a significant increase in soil respiration, while it does in a slightly higher moisture range. In wet soils, temperature is the main regulating factor; above a certain moisture limit, water displaces soil air from the soil pores, which inhibits aerobic decomposition processes, so heterotrophic soil respiration also declines.

Keywords: soil biology, organic matter, nutrition, DIRT, soil respiration

Procedia PDF Downloads 60
1005 Food Insecurity and Other Correlates of Individual Components of Metabolic Syndrome in Women Living with HIV (WLWH) in the United States

Authors: E. Wairimu Mwangi, Daniel Sarpong

Abstract:

Background: Access to effective antiretroviral therapy in the United States has resulted in rising longevity among people living with HIV (PLHIV). Despite this progress, women living with HIV (WLWH) experience increasing rates of cardiometabolic disorders compared with their HIV-negative counterparts. Studies of the predictors of metabolic disorders in this population have largely focused on the composite measure of metabolic syndrome (METs). This study seeks to identify the predictors of the composite and individual METs factors in a nationally representative sample of WLWH; in particular, it examines the role of food security in predicting METs. Methods: The study comprised 1800 women, a subset of participants from the Women’s Interagency HIV Study (WIHS). The primary exposure variable, food security, was measured using the U.S. 10-item Household Food Security Survey Module. The outcome measures are the five metabolic syndrome indicators (elevated blood pressure [systolic BP > 130 mmHg and diastolic BP ≥ 85 mmHg], elevated fasting glucose [≥ 110 mg/dL], elevated fasting triglyceride [≥ 150 mg/dL], reduced HDL cholesterol [< 50 mg/dL], and waist circumference > 88 cm) and the composite measure, metabolic syndrome (METs) status. Each metabolic syndrome indicator was coded 1 if present and 0 otherwise. The values of the five indicators were summed, and participants with a total score of 3 or greater were classified as having metabolic syndrome; for analysis, participants classified as having metabolic syndrome were assigned a code of 1 and 0 otherwise. The covariates accounted for in this study fell into sociodemographic factors and behavioral and health characteristics. Results: The participants' mean (SD) age was 47.1 (9.1) years, with 71.4% Black and 10.9% White. About a third (33.1%) had less than a high school (HS) diploma, 60.4% were married, 32.8% were employed, and 53.7% were low-income. The prevalence of worst dietary diversity and of low, moderate, and high food security was 24.1%, 26.6%, 17.0%, and 56.4%, respectively. The correlate profiles of the five individual METs factors plus the composite measure of METs differ significantly, with METs based on HDL having the most correlates (age, education, drinking status, low income, body mass index, and health perception). Metabolic syndrome based on waist circumference was the only metabolic factor with which food security was significantly correlated (food security, age, and body mass index). Age was a significant predictor of all five individual METs factors plus the composite METs measure. Except for METs based on fasting triglycerides, body mass index (BMI) was a significant correlate of the various measures of metabolic syndrome. Conclusion: High-density lipoprotein (HDL) cholesterol correlated significantly with most predictors. BMI was a significant predictor of all METs factors except fasting triglycerides. Food insecurity, the primary predictor, was significantly associated only with waist circumference.
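
The coding rule described above, five binary indicators summed with a cut-off of 3, translates directly into code. A minimal sketch follows; the threshold values are taken from the abstract, while the function and parameter names are our own.

```python
# METs classification as coded in the abstract: each indicator is 1 if
# present and 0 otherwise; a sum of 3 or more marks metabolic syndrome.
def mets_status(sbp, dbp, glucose, triglycerides, hdl, waist_cm):
    indicators = [
        int(sbp > 130 and dbp >= 85),   # elevated blood pressure (mmHg)
        int(glucose >= 110),            # elevated fasting glucose (mg/dL)
        int(triglycerides >= 150),      # elevated fasting triglyceride (mg/dL)
        int(hdl < 50),                  # reduced HDL cholesterol (mg/dL)
        int(waist_cm > 88),             # elevated waist circumference (cm)
    ]
    return 1 if sum(indicators) >= 3 else 0

# Example participant (hypothetical values)
print(mets_status(sbp=135, dbp=90, glucose=115, triglycerides=160,
                  hdl=45, waist_cm=92))   # -> 1 (metabolic syndrome)
```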

Keywords: blood pressure, food insecurity, fasting glucose, fasting triglyceride, high-density lipoprotein, metabolic syndrome, waist circumference, women living with HIV

Procedia PDF Downloads 46
1004 A Standard-Based Competency Evaluation Scale for Preparing Qualified Adapted Physical Education Teachers

Authors: Jiabei Zhang

Abstract:

Although adapted physical education (APE) teacher preparation programs are available in the nation, a consistent standard-based competency evaluation scale for preparing qualified personnel for teaching children with disabilities in APE cannot be identified in the literature. The purpose of this study was to develop a standard-based competency evaluation scale for assessing qualifications for teaching children with disabilities in APE. Standard-based competencies were reviewed and identified based on research evidence documented as effective in teaching children with disabilities in APE, and a scale was developed comprising 20 standard-based competencies, each rated on a 4-point Likert-type scale. The first standard-based competency is knowledge of the causes of disabilities and their effects. The second is the ability to assess the physical education skills of children with disabilities. The third is the ability to collaborate with other personnel. The fourth is knowledge of measurement and evaluation. The fifth is understanding of federal and state laws. The sixth is knowledge of the unique characteristics of all learners. The seventh is the ability to write objectives in behavioral terms. The eighth is knowledge of developmental characteristics. The ninth is knowledge of normal and abnormal motor behaviors. The tenth is the ability to analyze and adapt physical education curriculums. The eleventh is understanding of the history and philosophy of physical education. The twelfth is understanding of curriculum theory and development. The thirteenth is the ability to utilize instructional designs and plans. The fourteenth is the ability to create and implement physical activities. The fifteenth is the ability to utilize technology applications. The sixteenth is understanding of the value of program evaluation. The seventeenth is understanding of professional standards. The eighteenth is knowledge of focused instruction and individualized interventions. The nineteenth is the ability to complete a research project independently. The twentieth is the ability to teach children with disabilities in APE independently. The 4-point Likert-type scale ranges from 1 (incompetent) to 4 (highly competent). The scale is used to assess whether a candidate completing all coursework is eligible to receive an endorsement for teaching children with disabilities in APE, based on the grades earned in the three courses targeted at each standard-based competency. The mean grade received in the three courses primarily addressing a standard-based competency is mapped to a competency level on the above scale: level 4 corresponds to a mean grade of A over the three courses, level 3 to a mean grade of B, and so on. A candidate should receive a mean score of 3 (competent) or higher (highly competent) across the 19 standard-based competencies after completing all specified courses in order to receive an endorsement for teaching children with disabilities in APE. The validity, reliability, and objectivity of this standard-based competency evaluation scale are to be documented.
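
The grade-to-level mapping described above can be sketched as follows. The letter-grade point values (A=4, B=3, C=2, D=1) and the simple rounding rule are our assumptions, consistent with the abstract's examples but not spelled out in it.

```python
# Sketch of the endorsement check described in the abstract. The letter-grade
# point values and the rounding rule are assumptions, not from the source.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}


def competency_level(course_grades):
    """Mean grade over the three courses for one competency -> level 1-4."""
    points = [GRADE_POINTS[g] for g in course_grades]
    return round(sum(points) / len(points))


def eligible_for_endorsement(all_competency_grades):
    """Eligible if every competency reaches level 3 (competent) or higher."""
    return all(competency_level(grades) >= 3
               for grades in all_competency_grades)


# Example: two competencies, three course grades each (hypothetical)
print(eligible_for_endorsement([["A", "B", "A"], ["B", "B", "A"]]))  # True
```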

Keywords: evaluation scale, teacher preparation, adapted physical education teachers, and children with disabilities

Procedia PDF Downloads 105
1003 Biocompatibility Tests for Chronic Application of Sieve-Type Neural Electrodes in Rats

Authors: Jeong-Hyun Hong, Wonsuk Choi, Hyungdal Park, Jinseok Kim, Junesun Kim

Abstract:

Identifying the chronic performance of an implanted neural electrode is an important factor in acquiring neural signals through the electrode or restoring nerve functions after peripheral nerve injury. The purpose of this study was to investigate the biocompatibility of a neural electrode chronically implanted into the sciatic nerve. To do this, a sieve-type neural electrode was implanted between the proximal and distal ends of a transected sciatic nerve in an experimental group (sieve group, n=6), and end-to-end epineural repair was performed on the cut sciatic nerve in a control group (reconstruction group, n=6). All surgeries were performed on the sciatic nerve of the right leg in Sprague Dawley rats. Behavioral tests were performed before surgery and at 1, 4, 7, 10, and 14 days, and then weekly until 5 months following surgery. Changes in sensory function were assessed by measuring paw withdrawal responses to mechanical and cold stimuli. Motor function was assessed by motion analysis using the Qualisys program, which measured the range of motion (ROM) of the joints. Neurofilament-heavy chain and fibronectin expression were examined 5 months after surgery. In both groups, the paw withdrawal response to mechanical stimuli slightly decreased from 3 weeks after surgery and then significantly decreased at 6 weeks after surgery. The paw withdrawal response to cold stimuli increased from 4 days following surgery in both groups and began to decrease from 6 weeks after surgery. The ROM of the ankle joint showed a similar pattern in both groups: it significantly increased from 1 day after surgery and then decreased from 4 days after surgery. Neurofilament-heavy chain expression was observed throughout the sciatic nerve tissues in both groups; notably, the sieve group showed several neurofilaments passing through the channels of the sieve-type neural electrode. In the reconstruction group, however, a suture line was visible in the neurofilament-heavy chain staining up to 5 months following surgery. In the reconstruction group, fibronectin was detected throughout the sciatic nerve, whereas in the sieve group it was observed only in the nervous tissues surrounding the implanted neural electrode. The present results demonstrate that the implanted sieve-type neural electrode induced a focal inflammatory response but did not cause any further inflammatory response following peripheral nerve injury, suggesting the possibility of chronic application of sieve-type neural electrodes. This work was supported by the Basic Science Research Program funded by the Ministry of Science (2016R1D1A1B03933986) and by the convergence technology development program for bionic arm (2017M3C1B2085303).

Keywords: biocompatibility, motor functions, neural electrodes, peripheral nerve injury, sensory functions

Procedia PDF Downloads 135
1002 The Combined Use of L-Arginine and Progesterone During the Post-breeding Period in Female Rabbits Increases the Weight of Their Fetuses

Authors: Diego F. Carrillo-González, Milena Osorio, Natalia M. Cerro, Yasser Y. Lenis

Abstract:

Introduction: Mortality during the implantation and early embryonic development periods reaches around 30% in different mammalian species. Progesterone (P4) and arginine (Arg) have been described as playing a beneficial role in establishing and maintaining early pregnancy in mammals, but the combined effect of Arg and P4 on reproductive parameters in the rabbit has, to the best of our knowledge, not yet been elucidated. Objective: To assess the effect of L-arginine and progesterone administered during the post-breeding period in female rabbits on the composition of the amniotic fluid, the placental structure, and bone growth in their fetuses. Methods: Crossbred female rabbits (n=16) were randomly distributed into four experimental groups (Ctrl, Arg, P4, and Arg+P4). The control group was administered 0.9% saline solution as a placebo; the Arg group was administered arginine (50 mg/kg BW) from day 4.5 to day 19 post-breeding; the P4 group was administered progesterone (Gestavec®, 1.5 mg/kg BW) from 24 hours to day 4 post-breeding; and the Arg+P4 group received both compounds under the same time and dose guidelines as the Arg and P4 treatments. Four females were sacrificed, the amniotic fluid was collected and analyzed with rapid urine test strips, and the placentas and fetuses were processed in the laboratory to obtain histological plates. The percentages of the decidual, labyrinthine, and junctional zones were determined, and the length of the femur of each fetus was measured as an indicator of growth. Descriptive statistics were applied to identify the success rates for each of the tests; afterwards, a one-way analysis of variance (ANOVA) was performed, and means were compared by Tukey's test. Results: A higher density (p<0.05) was observed in the amniotic fluid of fetuses in the control group (1022±2.5 g/mL) compared to the P4 (1015±5.3 g/mL) and Arg+P4 (1016±4.9 g/mL) groups. Additionally, the density of amniotic fluid in the Arg group (1021±2.5 g/mL) was higher (p<0.05) than in the P4 group. The concentrations of protein, glucose, and ascorbic acid showed no statistical difference between treatments (p>0.05). In the histological analysis of the uteroplacental regions, a statistical difference (p<0.05) in the proportion of the decidual zone was found between the P4 group (9.6±2.6%) and the Ctrl (28.15±12.3%) and Arg+P4 (26.3±4.9%) groups. In the analysis of the fetuses, weight was higher in the Arg group (2.69±0.18) than in the other groups (p<0.05), while a shorter length was observed (p<0.05) in the fetuses of the Arg+P4 group (25.97±1.17). However, no difference (p>0.05) was found when comparing the lengths of the developing femurs between the experimental groups. Conclusion: The combination of L-arginine and progesterone reduces the density of the amniotic fluid without affecting its protein, energy, and antioxidant components. The use of L-arginine stimulates weight gain in fetuses without affecting size, which could be used to improve production parameters in rabbit production systems. In addition, the modification of the decidual zone could indicate a placental adaptation to the fetal growth process; however, more specific studies of the placentation process are required.
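
The statistical workflow described, a one-way ANOVA followed by Tukey's comparison of means across the four groups, can be reproduced with standard libraries. The sketch below uses illustrative placeholder density values, not the study's raw data.

```python
# One-way ANOVA followed by Tukey's HSD, mirroring the analysis described.
# The density values below are illustrative placeholders, not study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ctrl = [1022, 1024, 1020, 1023]
arg = [1021, 1023, 1019, 1021]
p4 = [1015, 1010, 1020, 1014]
arg_p4 = [1016, 1012, 1021, 1015]

f_stat, p_value = f_oneway(ctrl, arg, p4, arg_p4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparison of group means at alpha = 0.05
values = np.concatenate([ctrl, arg, p4, arg_p4])
groups = ["Ctrl"] * 4 + ["Arg"] * 4 + ["P4"] * 4 + ["Arg+P4"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```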

Keywords: arginine, progesterone, rabbits, reproduction

Procedia PDF Downloads 74
1001 The Effect of Emotional Intelligence on Physiological Stress of Managers

Authors: Mikko Salminen, Simo Järvelä, Niklas Ravaja

Abstract:

One of the central models of emotional intelligence (EI) is that of Mayer and Salovey, which includes the ability to monitor one's own feelings and emotions and those of others, the ability to discriminate between different emotions, and the ability to use this information to guide thinking and actions. There is a vast amount of previous research reporting positive links between EI and, for example, leadership success, work outcomes, work wellbeing, and organizational climate. EI also plays a role in the effectiveness of work teams, and the effects of EI are especially prominent in jobs requiring emotional labor; thus, the organizational context must also be taken into account when considering the effects of EI on work outcomes. Based on previous research, it is suggested that EI can also protect managers from the negative consequences of stress. Stress may have many detrimental effects on a manager's performance of essential work tasks. Previous studies have highlighted the effects of stress not only on health but also on cognitive tasks such as decision-making, which is important in managerial work. The motivation for the current study came from the notion that, unfortunately, many stressed individuals may not be aware of their condition; periods of stress-induced physiological arousal may be prolonged if there is not enough time for recovery. To tackle this problem, the physiological stress levels of managers were measured by recording heart rate variability (HRV). The goal was to use these data to provide the managers with feedback on their stress levels, which they could access through a web-based learning environment. In the learning environment, in addition to the feedback on stress level and other collected data, developmental tasks were provided; for example, those with high stress levels were sent instructions for mindfulness exercises. The current study focuses on the relation between the measured physiological stress levels and the EI of the managers. In a pilot study, 33 managers from various fields wore Firstbeat Bodyguard HRV measurement devices for three consecutive days and nights. From the collected HRV data, periods (minutes) of stress and recovery were detected using dedicated software. The effects of EI on the HRV-calculated stress indexes were studied using the Linear Mixed Models procedure in SPSS. There was a statistically significant effect of total EI, defined as the average score on Schutte's emotional intelligence test, on the percentage of stress minutes during the whole measurement period (p=.025): more stress minutes were detected in managers with lower emotional intelligence. It is suggested that high EI provides managers with better tools to cope with stress. Managing one's own emotions helps the manager control possible negative emotions evoked by, e.g., critical feedback or increasing workload. High-EI managers may also be more competent in detecting the emotions of others, leading to smoother interactions and fewer conflicts. Given the recent trend toward quantified-self applications, it is suggested that monitoring of bio-signals would prove to be a fruitful direction for further developing new tools for managerial and leadership coaching.
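
The analysis described, a linear mixed model of stress minutes on EI with repeated observations per manager, was run in SPSS by the authors; an equivalent sketch in Python is shown below. The column names, data layout, and values are assumptions for illustration only.

```python
# Sketch of the described Linear Mixed Models analysis (run in SPSS by the
# authors) using statsmodels; column names, layout, and values are assumed.
import pandas as pd
import statsmodels.formula.api as smf

# One row per manager per measurement day: percent stress minutes and EI score
data = pd.DataFrame({
    "manager_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "stress_pct": [42.0, 45.5, 40.1, 30.2, 28.7, 33.0, 55.3, 52.8, 57.1],
    "ei_total":   [3.1, 3.1, 3.1, 4.2, 4.2, 4.2, 2.6, 2.6, 2.6],
})

# A random intercept per manager accounts for the repeated daily measurements
model = smf.mixedlm("stress_pct ~ ei_total", data, groups=data["manager_id"])
result = model.fit()
print(result.summary())  # a negative ei_total coefficient: more EI, less stress
```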

Keywords: emotional intelligence, leadership, heart rate variability, personality, stress

Procedia PDF Downloads 214
1000 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality relies heavily on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and to link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status of the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analyzed results, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan, based on spatial partition principles that consider homogeneity in the number of population and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1757 per 100,000 persons at the township level, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures, and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways, helping to avoid wrong decision making.
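
The all-cause age-standardized death rate (ASDR) reported above follows the usual direct standardization formula: age-specific death rates weighted by a standard population's age shares, expressed per 100,000. A minimal sketch under assumed age bands is given below; the bands, counts, and weights are illustrative, not the Taitung figures.

```python
# Direct standardization sketch for ASDR per 100,000; the age bands, counts,
# and standard population weights are illustrative assumptions.
deaths = {"0-39": 12, "40-64": 85, "65+": 430}          # deaths per age band
population = {"0-39": 52000, "40-64": 30000, "65+": 9000}
std_weight = {"0-39": 0.55, "40-64": 0.33, "65+": 0.12}  # standard population shares

asdr = sum((deaths[band] / population[band]) * std_weight[band]
           for band in deaths) * 100_000
print(f"ASDR = {asdr:.0f} per 100,000")
```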

Keywords: mortality map, spatial patterns, statistical area, variation

Procedia PDF Downloads 245
999 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are among the highest in diagnostic radiology practice, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare the CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination across different centres and scanner models. Output CT dose index measurements were carried out on single- and multi-slice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals: 5000 head, 5000 chest, and 5000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. In this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively. This investigation thus revealed an observable change in CT practices, with a much wider range of studies currently being performed in South India, reflecting the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions, as permitted by helical and multislice technology. Some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality; this leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices would be optimal, lowering CT doses. In the South Indian region, all CT machines were routinely tested for QA once a year, as per AERB requirements.
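
The divergence between displayed and measured CTDIvol reported above is a straightforward per-protocol comparison. The sketch below shows how such a check could be computed and averaged; all dose values are illustrative placeholders, not measurements from the study.

```python
# Percent divergence of scanner-displayed CTDIvol from phantom-measured
# CTDIvol, averaged per protocol; all values are illustrative placeholders.
records = [  # (protocol, displayed mGy, measured mGy)
    ("head", 58.0, 55.1), ("head", 60.2, 57.9),
    ("chest", 12.4, 11.5), ("chest", 13.0, 11.9),
    ("abdomen", 14.8, 15.6), ("abdomen", 15.1, 16.2),
]

divergence = {}
for protocol, displayed, measured in records:
    pct = 100.0 * (displayed - measured) / measured
    divergence.setdefault(protocol, []).append(pct)

for protocol, values in divergence.items():
    print(f"{protocol}: mean divergence {sum(values) / len(values):+.1f}%")
```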

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 246
998 Harnessing Sunlight for Clean Water: Scalable Approach for Silver-Loaded Titanium Dioxide Nanoparticles

Authors: Satam Alotibi, Muhammad J. Al-Zahrani, Fahd K. Al-Naqidan, Turki S. Hussein, Moteb Alotaibi, Mohammed Alyami, Mahdy M. Elmahdy, Abdellah Kaiba, Fatehia S. Alhakami, Talal F. Qahtan

Abstract:

Water pollution is a critical global challenge that demands scalable and effective solutions for water decontamination. In this research, we present a strategy for harnessing solar energy to synthesize silver (Ag) clusters on stable titanium dioxide (TiO₂) nanoparticles dispersed in water, without the need for traditional stabilization agents. These Ag-loaded TiO₂ nanoparticles exhibit exceptional photocatalytic activity, surpassing that of pristine TiO₂ nanoparticles, offering a promising solution for highly efficient water decontamination under sunlight irradiation. To the best of our knowledge, we have developed a unique method to stabilize TiO₂ P25 nanoparticles in water without the use of stabilization agents. This breakthrough allows us to create an ideal platform for the solar-driven synthesis of Ag clusters. Under sunlight irradiation, the stable dispersion of TiO₂ P25 nanoparticles acts as a highly efficient photocatalyst, generating electron-hole pairs. The photogenerated electrons effectively reduce silver ions derived from a silver precursor, resulting in the formation of Ag clusters. The Ag clusters loaded on TiO₂ P25 nanoparticles exhibit remarkable photocatalytic activity for water decontamination under sunlight irradiation. Acting as active sites, these Ag clusters facilitate the generation of reactive oxygen species (ROS) upon exposure to sunlight. These ROS play a pivotal role in rapidly degrading organic pollutants, enabling efficient water decontamination. To confirm the success of our approach, we characterized the synthesized Ag-loaded TiO₂ P25 nanoparticles using analytical techniques such as transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and spectroscopic methods. These characterizations confirm the successful synthesis of Ag clusters on stable TiO₂ P25 nanoparticles without traditional stabilization agents. Comparative studies were conducted to evaluate the photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles against that of pristine TiO₂ P25 nanoparticles. The Ag-loaded TiO₂ P25 nanoparticles exhibit significantly enhanced photocatalytic activity, benefiting from the synergistic effect between the Ag clusters and the TiO₂ nanoparticles, which promotes ROS generation for efficient water decontamination. Our scalable strategy for synthesizing Ag clusters on stable TiO₂ P25 nanoparticles without stabilization agents presents a powerful solution for highly efficient water decontamination under sunlight irradiation. The use of commercially available TiO₂ P25 nanoparticles streamlines the synthesis process and enables practical scalability. The outstanding photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles opens up new avenues for their application in large-scale water treatment and remediation processes, addressing the urgent need for sustainable water decontamination solutions.
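
The abstract reports enhanced degradation without giving kinetic data; a common way such photocatalytic comparisons are quantified (an assumption here, not a method stated by the authors) is a pseudo-first-order fit, sketched below on synthetic data.

```python
# Pseudo-first-order photocatalysis: ln(C0/Ct) = k * t, so the apparent rate
# constant k is the slope of ln(C0/C) against irradiation time.
# The concentration series below is synthetic, for illustration only.
import math

times = [0, 10, 20, 30, 40, 50, 60]                     # min under sunlight
conc  = [1.00, 0.72, 0.50, 0.37, 0.26, 0.19, 0.13]      # C/C0, synthetic

# least-squares slope through the origin: k = sum(t*y) / sum(t^2)
k = sum(t * math.log(conc[0] / c) for t, c in zip(times, conc)) \
    / sum(t * t for t in times)
print(f"apparent k = {k:.4f} 1/min, half-life = {math.log(2) / k:.1f} min")
```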

Keywords: water pollution, solar energy, silver clusters, TiO₂ nanoparticles, photocatalytic activity

Procedia PDF Downloads 54
997 Aspects Concerning the Use of Recycled Concrete Aggregates

Authors: Ion Robu, Claudiu Mazilu, Radu Deju

Abstract:

Natural aggregates (gravel and crushed stone) are essential non-renewable resources used for infrastructure works and civil engineering. In the European Union member states of Southeast Europe, it is estimated that the construction industry will grow by 4.2%, further complicating aggregate supply management. In addition, a significant problem associated with the aggregates industry is the wasting of potential resources through the dumping of inert waste, especially waste from construction and demolition activities. In 2012, in Romania, less than 10% of construction and demolition waste (including concrete) was valorized, while the European Union requires that by 2020 this proportion should be at least 70% (Directive 2008/98/EC on waste, transposed into Romanian legislation by Law 211/2011). Depending on the efficiency of waste processing and the quality of the recycled concrete aggregate (RCA) obtained, poor-quality aggregate can be used as foundation material for roads, while high-quality aggregate can be used in new concrete for construction. To obtain good-quality concrete using recycled aggregate, it is necessary to meet the minimum requirements defined by the rules for the manufacture of concrete with natural aggregate. The properties of the recycled aggregate (density, grading, granule shape, water absorption, mass loss in the Los Angeles test, attached mortar content, etc.) are the basis of concrete quality; establishing appropriate proportions between components and the concrete production methods is also extremely important for its quality. This paper presents a study on the use of recycled aggregates, obtained from a concrete of specified class, to produce new cement concrete with different percentages of recycled aggregates. To produce the recycled aggregates, several batches of concrete of classes C16/20, C25/30 and C35/45 were made, with the composition calculations performed according to NE012/2007 and CP012/2007. Tests for producing recycled aggregate were carried out using concrete samples of the three established classes after 28 days of storage under the above conditions. Cubes with 150 mm sides were crushed in a first stage with a Liebherr-type jaw crusher set to a nominal 50 mm. The resulting material was separated by sieving into granulometric sorts, and the 10-50 mm sort was used for preliminary second-stage crushing tests with a Retsch BB 200 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher, respectively. The influence of the type of crusher used to obtain the recycled aggregates on their granulometry and granule shape was highlighted, as was the influence of the attached mortar on density, water absorption, behavior in the Los Angeles test, etc. The proportion of attached mortar was determined and correlated with the concrete class of provenance of the recycled aggregates and their granulometric sort. The aim of characterizing the recycled aggregates is their valorization in new concrete used in construction. In this regard, a series of concretes was made in which the recycled aggregate content was varied from 0 to 100%. The new concretes were characterized in terms of the change in density and compressive strength with the proportion of recycled aggregates. It has been shown that an increase in recycled aggregate content does not necessarily mean a reduction in compressive strength, the quality of the aggregate having a decisive role.
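
Two of the aggregate properties listed above reduce to simple mass ratios; the sketch below shows the arithmetic with hypothetical masses (these are not the study's measurements).

```python
# Routine aggregate property checks, as percentages of mass.
# All masses are hypothetical.

def los_angeles_loss(mass_initial_g, mass_retained_g):
    """Los Angeles test: percent of mass lost to abrasion in the drum."""
    return (mass_initial_g - mass_retained_g) / mass_initial_g * 100

def water_absorption(mass_ssd_g, mass_oven_dry_g):
    """Water absorption: water held at saturated-surface-dry state,
    as a percentage of the oven-dry mass (high when mortar is attached)."""
    return (mass_ssd_g - mass_oven_dry_g) / mass_oven_dry_g * 100

print(f"LA loss:    {los_angeles_loss(5000.0, 3650.0):.1f} %")
print(f"absorption: {water_absorption(5240.0, 5000.0):.1f} %")
```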

Keywords: recycled concrete aggregate, characteristics, recycled aggregate concrete, properties

Procedia PDF Downloads 197
996 Unifying RSV Evolutionary Dynamics and Epidemiology Through Phylodynamic Analyses

Authors: Lydia Tan, Philippe Lemey, Lieselot Houspie, Marco Viveen, Darren Martin, Frank Coenjaerts

Abstract:

Introduction: Human respiratory syncytial virus (hRSV) is the leading cause of severe respiratory tract infections in infants under the age of two. Genomic substitutions and the related evolutionary dynamics of hRSV have a great influence on virus transmission behavior. The evolutionary patterns formed are due to a precarious interplay between the host immune response and RSV, thereby selecting the most viable and less immunogenic strains. Studying genomic profiles can teach us which genes, and consequently which proteins, play an important role in RSV survival and transmission dynamics. Study design: In this study, genetic diversity and evolutionary rate analyses were conducted on 36 RSV subgroup B and 37 subgroup A whole genome sequences. Clinical RSV isolates were obtained from nasopharyngeal aspirates and swabs of children between 2 weeks and 5 years of age. These strains were collected during epidemic seasons from 2001 to 2011 in the Netherlands and Belgium and sequenced by either conventional or 454 sequencing. Sequences were analyzed for genetic diversity, recombination events, synonymous/non-synonymous substitution ratios and epistasis, and the translational consequences of mutations were mapped onto known 3D protein structures. We used Bayesian statistical inference to estimate the rate of RSV genome evolution and the rate of variability across the genome. Results: The A and B profiles were described in detail and compared to each other. Overall, the majority of the RSV genome is highly conserved among all strains. The attachment protein G was the most variable protein, and its gene had, similar to the non-coding regions in RSV, elevated (two-fold) substitution rates compared with other genes. In addition, the G gene was identified as the major target for diversifying selection. Overall, less gene and protein variability was found within RSV-B than within RSV-A, and most protein variation between the subgroups was found in the F, G, SH and M2-2 proteins. For the F protein, mutations and correlated amino acid changes are largely located in the F2 ligand-binding domain. The small hydrophobic protein, phosphoprotein and nucleoprotein are the most conserved proteins. The evolutionary rates were similar in both subgroups (A: 6.47E-04, B: 7.76E-04 substitutions/site/year), but estimates of the time to the most recent common ancestor were much lower for RSV-B (B: 19 yrs, A: 46.8 yrs), indicating that there is more turnover in this subgroup. Conclusion: This study provides a detailed description of whole-genome RSV mutations, their effect on translation products and the first estimate of the tempo of RSV genome evolution. The immunogenic G protein seems to require high substitution rates in order to select less immunogenic strains, while other conserved proteins are most likely essential to preserve RSV viability. The resulting G gene variability makes its protein a less interesting target for RSV intervention methods. The more conserved RSV F protein, with less antigenic epitope shedding, is therefore more suitable for developing therapeutic strategies or vaccines.
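
The rate and TMRCA estimates above came from Bayesian inference; as a plainly simpler stand-in, the sketch below uses root-to-tip regression, a common exploratory alternative in which the slope approximates the clock rate and the x-intercept the TMRCA. The distances and dates are synthetic.

```python
# Root-to-tip regression (not the study's Bayesian method): regress each
# tip's root-to-tip genetic distance on its sampling date. Synthetic data.

dates = [2001.2, 2003.5, 2005.1, 2007.8, 2009.3, 2011.0]   # sampling years
dists = [0.0310, 0.0325, 0.0338, 0.0355, 0.0367, 0.0379]   # subs/site to root

n = len(dates)
mx, my = sum(dates) / n, sum(dists) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(dates, dists))
         / sum((x - mx) ** 2 for x in dates))               # subs/site/year
tmrca = mx - my / slope                                     # x where distance = 0
print(f"clock rate ~ {slope:.2e} subs/site/yr, TMRCA ~ {tmrca:.0f}")
```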

Keywords: drug target selection, epidemiology, respiratory syncytial virus, RSV

Procedia PDF Downloads 400
995 Sustainable Living Where the Immaterial Matters

Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou

Abstract:

This paper aims to explore and provoke a debate, through the work of the design studio 'Living Where the Immaterial Matters' of the architecture department of the University of Nicosia, on the role that 'immaterial matter' can play in enhancing innovative sustainable architecture and in viewing cities as sustainable organisms that always grow and alter. The blurring, juxtaposing binary of immaterial and matter, as the theoretical backbone of the Unit, is counterbalanced by the practicalities of the contested sites of the last divided capital, Nicosia, with its ambiguous green line, and the ghost city of Famagusta on the island of Cyprus. Jonathan Hill argues that the immaterial is as important to architecture as the material, concluding that 'Immaterial–Material' weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of the immaterial versus the material set the premises and departing point of our argument, and speaks of new recipes for creating hybrid public space that can lead to the unpredictability of a complex, interactive, sustainable city. We prioritized human experience, distinguishing the notions of space and place with reference to Heidegger's 'Building Dwelling Thinking': a distinction 'where spaces gain authority not from "space" appreciated mathematically but "place" appreciated through human experience'. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility and spaces of flows are the lenses of the investigation in the unit's methodology, leading to the notion of a new hybrid urban environment whose main constituent elements are in a relationship of flux. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human built environments. The above premises consequently led to choices of controversial sites. Indisputably, a provoking site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which both nature and the built environment, material and immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing 'green line' of Nicosia completely failing to prevent the trespassing of images, sounds and whispers, smells and symbols that define the two prevailing cultures, and becoming instead a porous creative entity which tends to start reuniting instead of separating, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new 'urban recipe', 'cooking architecture and city', to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?

Keywords: blurring zones, porous borders, spaces of flow, urban recipe

Procedia PDF Downloads 408
994 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most of these applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this context, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as such gratings are known to yield narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow linewidth and the high-quality nitride overgrowth required on top of the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than that obtained with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths and duty ratios were then designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observed the nucleation and growth process through step-by-step growth to study the growth mode of nitride overgrowth on a grating, under the condition that the grating period is larger than the metal adatom migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s and 600 s, the growth model is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is completed by lateral growth. This work contributes to realizing GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating patterned substrates at wafer scale; moreover, the growth dynamics have been analyzed as well.
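
The sub-100 nm figure follows directly from the first-order Bragg condition; the quick check below assumes an effective modal index of about 2.5 for a GaN waveguide (our assumption, not a value from the paper).

```python
# Bragg condition for a DFB grating: Lambda = m * wavelength / (2 * n_eff).
# n_eff = 2.5 is an assumed typical value for a GaN-based waveguide.

def bragg_period_nm(wavelength_nm, n_eff, order=1):
    """Grating period (nm) satisfying the order-m Bragg condition."""
    return order * wavelength_nm / (2 * n_eff)

N_EFF = 2.5
for wl in (397.0, 420.2, 398.9):   # target wavelengths cited in the abstract
    print(f"lambda = {wl:6.1f} nm -> first-order period "
          f"{bragg_period_nm(wl, N_EFF):.1f} nm")
```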

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 169
993 Physicochemical-Mechanical, Thermal and Rheological Properties Analysis of Pili Tree (Canarium Ovatum) Resin as Aircraft Integral Fuel Tank Sealant

Authors: Mark Kennedy, E. Bantugon, Noruane A. Daileg

Abstract:

Leaks arising from aircraft fuel tanks are a protracted problem for aircraft manufacturers, operators, and maintenance crews. They principally arise from stress, structural defects, or degraded sealants as the aircraft ages. Leaking fuel can be ignited by different sources, which can result in catastrophic flight consequences and represents a major drain on both time and budget. In order to mitigate and eliminate this kind of problem, the researchers produced an experimental sealant having as base material a natural tree resin, Pili tree resin. Aside from producing an experimental sealant, the main objective of this research is to analyze its physical, chemical, mechanical, thermal, and rheological properties, which are beneficial and effective for specific aircraft parts, particularly the integral fuel tank. The experimental method of research was utilized in this study since it is a product invention. The study comprises two parts, the Optimization Process and the Characterization Process. In the Optimization Process, the experimental sealant was subjected to the Flammability Test, an important test and consideration according to 14 Code of Federal Regulations, Part 25, Appendix N, Fuel Tank Flammability Exposure and Reliability Analysis, to arrive at the most suitable formulation. In the Characterization Process that followed, the formulated experimental sealant underwent thirty-eight (38) different standard tests, including the Organoleptic, Instrumental Color Measurement, Smoothness of Appearance, Miscibility, Boiling Point, Flash Point, Curing Time, Adhesive, Toxicity, Shore A Hardness, Compressive Strength, Shear Strength, Static Bending Strength, Tensile Strength, Peel Strength, Knife, Adhesion by Tape, Leakage, and Drip Tests, Thermogravimetry-Differential Thermal Analysis (TG-DTA), Differential Scanning Calorimetry, Calorific Value, Viscosity, Creep, and Anti-Sag Resistance Tests, to determine and analyze the five (5) material properties of the sealant. The numerical values from the mentioned tests were determined through product application, testing, and calculation. These values were then used to calculate the efficiency of the experimental sealant; this efficiency, in turn, is the means of comparison between the experimental and commercial sealants. Based on the results of the different standard tests conducted, the experimental sealant exceeded all the corresponding results of the commercial sealant. This shows that the physicochemical-mechanical, thermal, and rheological properties of the experimental sealant make it a far more effective aircraft integral fuel tank sealant alternative in comparison to the commercial sealant. Therefore, the Pili tree takes on a new role and function: a source of ingredients for sealant production.

Keywords: aircraft integral fuel tank, physicochemical-mechanical, Pili tree resin, properties, rheological, sealant, thermal

Procedia PDF Downloads 268
992 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities play important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life, and utilization of resources for customers. One of the difficulties on this path is the use of, interfacing with, and linking between software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management, and transportation, in parallel with achieving cost-effectiveness and resource reduction. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological problems. Energy management is one of the most important matters for smart houses in smart cities and communities because of the sensitivity of energy systems, the need to reduce energy wastage, and the need to make maximum use of the required energy. In particular, the consumption of energy in smart houses is important and considerable for the economic balance and energy management of a smart city, as it permits a significant increase in energy saving and reduction of energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study, first, to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters, and other major elements, interfacing software and hardware devices as well as IT technologies; and second, to enhance energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to produce energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within the smart city, in combination with control over the energy consumption of the smart house using the developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables in accordance with their influence on the model. The results of this analysis can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, given the expense and expected shortage of natural resources in the near future, the limited research conducted in the region, and the potential available due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the design and construction phases.
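
A hedged sketch of the kind of two-sample comparison the paper describes, on one of its variables (daily energy use is our assumed example); the samples and names are synthetic, not the study's data.

```python
# Welch's unequal-variance t statistic comparing a synthetic "smart house"
# sample against a "traditional housing" sample on daily energy use (kWh).
from math import sqrt
from statistics import mean, stdev

smart       = [8.1, 7.4, 7.9, 8.6, 7.2, 7.8, 8.0, 7.5]      # synthetic kWh/day
traditional = [10.2, 9.7, 11.1, 10.5, 9.9, 10.8, 10.0, 10.6]

def welch_t(a, b):
    """Welch's t = (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b)."""
    return (mean(a) - mean(b)) / sqrt(stdev(a) ** 2 / len(a)
                                      + stdev(b) ** 2 / len(b))

print(f"mean saving: {mean(traditional) - mean(smart):.2f} kWh/day, "
      f"t = {welch_t(smart, traditional):.2f}")
```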

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 102
991 The Structural Alteration of DNA Native Structure of Staphylococcus aureus Bacteria by Designed Quinoxaline Small Molecules Result in Their Antibacterial Properties

Authors: Jeet Chakraborty, Sanjay Dutta

Abstract:

Antibiotic resistance in bacteria has proved to be a severe threat to mankind in recent times, and this creates an urgent need to design and develop potent antibacterial small molecules/compounds with nonconventional mechanisms beyond the conventional ones. DNA carries the genetic signature of any organism, and bacteria maintain their genomic DNA inside the cell in a well-regulated compact form with the help of various nucleoid-associated proteins like HU, H-NS, etc. These proteins control fundamental processes inside the cell such as gene expression and replication. Alteration of the native DNA structure of bacteria can lead to severe consequences for cellular processes in the bacterial cell that ultimately result in the death of the organism. The change in global DNA structure caused by small molecules initiates a plethora of cellular responses that have not been well investigated. Echinomycin and Triostin-A are biologically active quinoxaline small molecules that typically consist of a quinoxaline chromophore attached to an octadepsipeptide ring. They bind to double-stranded DNA in a sequence-specific way and have high activity against a wide variety of bacteria, mainly Gram-positive ones. To date, a few synthetic quinoxaline scaffolds have been synthesized that display antibacterial potential against a broad range of pathogenic bacteria. QNOs (quinoxaline N-oxides) are known to target DNA and instigate reactive oxygen species (ROS) production in bacteria, thereby exhibiting antibacterial properties. The divergent role of quinoxaline small molecules in medicinal research qualifies them as potential candidates for the evaluation of antimicrobial properties. A previous study from our lab gave new insights into a 6-nitroquinoxaline derivative, 1d, as a DNA intercalator that induces conformational changes in DNA upon binding. The binding event observed was dependent on the presence of a crucial benzyl substituent on the quinoxaline moiety and was associated with a large induced CD (ICD) signal appearing in a sigmoidal pattern upon the interaction of 1d with dsDNA. The induction of DNA superstructures by 1d at high drug:DNA ratios was observed, ultimately leading to DNA condensation. Eviction of in vitro-assembled nucleosomes upon treatment with a high dose of 1d was also observed. In this work, monoquinoxaline derivatives of 1d were synthesized by various modifications of the 1d scaffold. The set of synthesized 6-nitroquinoxaline derivatives, along with 1d, was subjected to antibacterial evaluation across five different bacterial species. Among the compound set, 3a displayed potent antibacterial activity against Staphylococcus aureus. 3a was further subjected to various biophysical studies to check whether its DNA structural alteration potential was still intact. The biological response of S. aureus cells upon treatment with 3a was studied using various cell biology assays, which led to the conclusion that 3a can initiate DNA damage in S. aureus cells. Finally, the potential of 3a to disrupt preformed S. aureus and S. epidermidis biofilms was also studied.

Keywords: DNA structural change, antibacterial, intercalator, DNA superstructures, biofilms

Procedia PDF Downloads 156
990 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does it cause harm through the toxic emissions (for instance, CO2 emissions) of the combustion process needed to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy, namely solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced in the charging process of the Segway PT batteries from different energy sources (natural gas, gas oil, fuel oil, and solar panels) is carried out. For the solar-energy study, a photovoltaic panel and a buck-boost DC/DC block were used. Specifically, the STP005S-12/Db solar panel was used to carry out our experiments. This module is a 5 Wp photovoltaic (PV) module, configured with 36 monocrystalline cells connected in series. With those elements, a battery recharge station was built to recharge the robot batteries. For the energy storage in the DC/DC block, a series of ultracapacitors was used. Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors used a fractional-order control method so that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel oil. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
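
The fractional-order controller itself is not reproduced here; as a plainly simpler stand-in, the sketch below shows perturb-and-observe (P&O), the textbook maximum-power-point-tracking loop, driving a toy power-voltage curve for a hypothetical 5 Wp panel.

```python
# Perturb-and-observe MPPT (a simpler technique than the paper's
# fractional-order control): nudge the panel voltage and keep moving in
# the direction that increased the extracted power.

def p_and_o_step(v, p, v_prev, p_prev, dv=0.1):
    """Return the next voltage setpoint (V) after one P&O iteration."""
    if p >= p_prev:                      # last move helped: keep direction
        return v + dv if v >= v_prev else v - dv
    return v - dv if v >= v_prev else v + dv   # last move hurt: reverse

def pv_power(v):
    """Toy power-voltage curve with a maximum of 5 W near 17 V."""
    return max(0.0, 5.0 - 0.05 * (v - 17.0) ** 2)

v_prev, v = 12.0, 12.1
for _ in range(100):
    v_prev, v = v, p_and_o_step(v, pv_power(v), v_prev, pv_power(v_prev))
print(f"settled near {v:.1f} V, extracting {pv_power(v):.2f} W")
```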

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 409
989 A 4-Month Low-carb Nutrition Intervention Study Aimed to Demonstrate the Significance of Addressing Insulin Resistance in 2 Subjects with Type-2 Diabetes for Better Management

Authors: Shashikant Iyengar, Jasmeet Kaur, Anup Singh, Arun Kumar, Ira Sahay

Abstract:

Insulin resistance (IR) is a condition that occurs when cells in the body become less responsive to insulin, leading to higher levels of both insulin and glucose in the blood. This condition is linked to metabolic syndromes, including diabetes. It is crucial to address IR promptly after diagnosis to prevent the long-term complications associated with high insulin and high blood glucose. This four-month case study highlights the importance of treating the underlying condition to manage diabetes effectively. Insulin is essential for regulating blood sugar levels by facilitating the uptake of glucose into cells for energy or storage. In IR individuals, cells are less efficient at taking up glucose from the blood, resulting in elevated blood glucose levels. As a result of IR, beta cells produce more insulin to make up for the body's inability to use insulin effectively. This leads to high insulin levels, a condition known as hyperinsulinemia, which further impairs glucose metabolism and can contribute to various chronic diseases. In addition to regulating blood glucose, insulin has anti-catabolic effects, preventing the breakdown of molecules in the body, for example by inhibiting glycogen breakdown in the liver, inhibiting gluconeogenesis, and inhibiting lipolysis. If a person is insulin-sensitive, or metabolically healthy, an optimal level of insulin prevents fat cells from releasing fat and promotes the storage of glucose and fat in the body. Optimal insulin levels are thus crucial for maintaining energy balance and play a key role in metabolic processes. During the four-month study, the researchers looked at the impact of a low-carb dietary (LCD) intervention on two male individuals (A and B) with Type-2 diabetes. Although neither of these individuals was obese, both were slightly overweight and had abdominal fat deposits. Before the trial began, important markers such as fasting blood glucose (FBG), triglycerides (TG), high-density lipoprotein (HDL) cholesterol, and HbA1c were measured. These markers are essential in defining metabolic health; their individual values and variability are integral to deciphering it. The ratio of TG to HDL is used as a surrogate marker for IR. This ratio correlates strongly with the prevalence of metabolic syndrome and with IR itself, and it is a convenient measure because it can be calculated from a standard lipid profile and does not require more complex tests. In this four-month trial, an improvement in insulin sensitivity was observed through the TG/HDL ratio, which, in turn, improved fasting blood glucose levels and HbA1c. For subject A, HbA1c dropped from 13 to 6.28, and for subject B, it dropped from 9.4 to 5.7. During the trial, neither of the subjects was taking any diabetic medications. The significant improvements in their health markers, such as better glucose control, along with an increase in energy levels, demonstrate that incorporating LCD interventions can effectively manage diabetes.
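
Since the TG/HDL ratio is computed directly from the lipid panel, a minimal sketch follows; the cut-off of 3 (with lipids in mg/dL) is an assumption drawn from common usage, not a value given by the authors, and the numbers are invented.

```python
# TG/HDL ratio as a surrogate marker for insulin resistance (IR).
# The cut-off of 3.0 (mg/dL units) is a commonly cited heuristic, assumed here.

def tg_hdl_ratio(tg_mg_dl, hdl_mg_dl):
    """Triglyceride-to-HDL ratio from a standard lipid profile (mg/dL)."""
    return tg_mg_dl / hdl_mg_dl

for label, tg, hdl in [("baseline", 210, 38), ("after 4-month LCD", 110, 52)]:
    r = tg_hdl_ratio(tg, hdl)
    note = "suggests IR" if r > 3.0 else "below the assumed cut-off"
    print(f"{label}: TG/HDL = {r:.1f} ({note})")
```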

Keywords: metabolic disorder, insulin resistance, type-2 diabetes, low-carb nutrition

Procedia PDF Downloads 27
988 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and enjoys a psychological privilege over other scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as indispensable because all scalar implicatures have to meet the requirement of relevance in discourse. However, in Katsos' study, the experimental results showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded violations in utterances with lexical scales as much more severe than those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points in this result: (1) Is it possible that the strange discrepancy is due to factors other than the generation of the scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials will be transformed into written words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room of our lab. The reading times of the target parts, i.e., the words containing scalar implicatures, will be recorded. We presume that, in the group with lexical scales, a standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, which will lead the participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context of the scale. Thus, whether the new input is informative or not will not matter at all, and the reading times of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicatures, the semantic interpretation and the pragmatic interpretation may be made in dynamic interplay in the mind. As to lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic associations of the objects may prevent the pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 305
987 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, mammary glands and bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e., faeces, milk and blood, respectively, have many uses in rural communities, where they aid in the facilitation of day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in sustaining the livelihoods of rural communities, it may serve as a potent reservoir of different pathogenic organisms that could have devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138. All downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Multidimensional Scaling (NMDS) clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were significantly enriched in faeces rather than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis. The most abundant phyla across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were simultaneously detected across the sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same group of animals constituting a pool. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk, and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, pertaining to their unsanitary practices associated with the use of cattle by-products.
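
As a standalone illustration of the alpha-diversity comparisons above (the study itself used R-based tooling: DADA2, DESeq2), the sketch below computes the Shannon index on made-up genus-level counts for the three sample types.

```python
# Shannon diversity H' = -sum(p_i * ln p_i); lower when one taxon dominates.
# The count vectors are hypothetical, not the study's data.
import math

def shannon(counts):
    """Shannon diversity over the nonzero taxa of one sample."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)

samples = {
    "faeces": [500, 350, 200, 150, 90, 60, 30, 20],
    "milk":   [700, 150, 80, 40, 20],
    "blood":  [950, 30, 15, 5],   # one dominant genus, echoing Anaplasma
}
for name, counts in samples.items():
    print(f"{name}: H' = {shannon(counts):.2f}")
```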

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 97
986 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream embedding eight three-syllable nonsense words, four presenting high TPs and four presenting low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, the children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
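
The statistic manipulated in the stream design above is the forward transitional probability; a minimal sketch of its computation over a syllable stream follows, reusing the nonsense words quoted in the abstract (the stream composition itself is invented).

```python
# Forward transitional probability: TP(x -> y) = count(xy) / count(x).
# Within-word TPs are high; TPs that cross word boundaries are lower.
from collections import Counter

stream = ("tokibu" "tipolu" "gopila" "tokibu" "tipolu" "tokibu" "gopila") * 20
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]  # 2-char syllables

unigrams = Counter(syllables)
bigrams = Counter(zip(syllables, syllables[1:]))

def tp(x, y):
    """P(next syllable = y | current syllable = x)."""
    return bigrams[(x, y)] / unigrams[x]

print(f"within-word TP(to -> ki) = {tp('to', 'ki'):.2f}")   # 1.00
print(f"across-word TP(bu -> ti) = {tp('bu', 'ti'):.2f}")   # lower at boundary
```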

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 182