Search results for: relative poverty
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3005

185 Newly Designed Ecological Task to Assess Cognitive Map Reading Ability: Behavioral Neuro-Anatomic Correlates of Mental Navigation

Authors: Igor Faulmann, Arnaud Saj, Roland Maurer

Abstract:

Spatial cognition consists of a plethora of high-level cognitive abilities; among them, the ability to learn and to navigate in large-scale environments is probably one of the most complex skills. Navigation is thought to rely on the ability to read a cognitive map, defined as an allocentric representation of one's environment. Those representations are intimately related to the two geometrical primitives of the environment: distance and direction. Many recent studies also point to a predominant hippocampal and para-hippocampal role in spatial cognition, as well as in the more specific cluster of navigational skills. In a previous study in humans, we used a newly validated test assessing cognitive map processing by evaluating the ability to judge relative distances and directions: the CMRT (Cognitive Map Recall Test). That study identified, in topographically disoriented patients, (1) behavioral differences between the evaluation of distances and of directions, and (2) distinct causality patterns assessed via VLSM (i.e., distinct cerebral lesions cause distinct response patterns depending on the modality: distance vs. direction questions). Thus, we hypothesized that: (1) if the CMRT really taps into the same resources as real navigation, there would be hippocampal, parahippocampal, and parietal activation; and (2) there exist underlying neuroanatomical and functional differences between the processing of these two modalities. Aiming toward a better understanding of the neuroanatomical correlates of the CMRT in humans, and more generally of how the brain processes the cognitive map, we adapted the CMRT as an fMRI procedure. 23 healthy subjects (11 women, 12 men), all living in Geneva for at least 2 years, underwent the CMRT in fMRI. Results show, for distance and direction taken together, that the most active brain regions are parietal, frontal, and cerebellar. Additionally, and as expected, patterns of brain activation differ when comparing the two modalities. Distance processing seems to rely more on parietal regions (compared to other brain regions in the same modality and also to direction); interestingly, no significant activity was observed in the hippocampal or parahippocampal areas for this modality. Direction processing seems to tap more into frontal and cerebellar regions (compared to other brain regions in the same modality and also to distance), and significant hippocampal and parahippocampal activity was observed only in this modality. These results demonstrate a complex interaction of structures compatible with response patterns observed in other navigational tasks, showing that the CMRT taps at least partially into the same brain resources as real navigation. Additionally, the differences between the processing of distances and directions lead to the conclusion that the human brain processes each modality distinctly. Further research should focus on the dynamics of this processing, allowing a clearer understanding of the relationship between the two sub-processes.

Keywords: cognitive map, navigation, fMRI, spatial cognition

Procedia PDF Downloads 272
184 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language

Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and Need: During the early years of schooling, instruction mainly focuses on children's word decoding abilities. However, skilled readers must master all the components of reading: word decoding, reading fluency, and comprehension. Nevertheless, the relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed opinions on the relative contribution of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages is unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio within the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage included five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords varying in syllable length and Akshara composition. Pseudowords were formed by interchanging syllables within meaningful words while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data collection on the study samples. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: Descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. The performance of Grade III children was found to be the highest, Grade I the lowest, and Grade II intermediate between them. The trend indicated that reading skills gradually improve with grade. Pearson's correlation coefficients showed moderate and highly significant (p=0.00) positive correlations between the variables, indicating the interdependency of all three components required for reading. Hierarchical regression analysis revealed that 37% of the variance in reading comprehension was explained by pseudoword decoding, which was highly significant. On subsequent entry of the reading fluency measure, the change in R-square was not significant, amounting to only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more to reading comprehension than reading fluency during the initial years of schooling in children learning to read the Kannada language.
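
The two-step hierarchical regression reported here (pseudoword decoding entered first, reading fluency second, with the R-square change tested) can be sketched as follows; the file scores.csv and its column names are hypothetical placeholders:

```python
# Hedged sketch of the two-step hierarchical regression described above.
# The file 'scores.csv' and its column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("scores.csv")

# Step 1: reading comprehension regressed on pseudoword decoding alone.
X1 = sm.add_constant(df[["decoding"]])
m1 = sm.OLS(df["comprehension"], X1).fit()

# Step 2: add reading fluency and test the change in R-square.
X2 = sm.add_constant(df[["decoding", "fluency"]])
m2 = sm.OLS(df["comprehension"], X2).fit()

r2_change = m2.rsquared - m1.rsquared
f_stat, p_value, df_diff = m2.compare_f_test(m1)  # F-test of the added predictor
print(f"Step 1 R2 = {m1.rsquared:.3f}, delta R2 = {r2_change:.3f}, p = {p_value:.3f}")
```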

Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency

Procedia PDF Downloads 233
183 Investigation of Hydrate Formation of Associated Petroleum Gas From Promoter Solutions for the Purpose of Utilization and Reduction of Its Burning

Authors: Semenov Matvei, Stoporev Andrey, Pavelyev Roman, Varfolomeev Mikhail

Abstract:

Gas hydrates are host-guest compounds. The guest molecules can be low-molecular-weight components of associated petroleum gas (C1-C4 hydrocarbons), carbon dioxide, hydrogen sulfide, and nitrogen. Gas hydrates have a number of unique properties that make them interesting from a technological point of view, for example, for storing hydrocarbon gases in solid form under moderate thermobaric conditions. The hydrate form of gas has a number of advantages, including a significant gas content in the hydrate and the relative safety and environmental friendliness of the process. Such technology could be especially useful in cold regions, where hydrate production, storage, and transportation can be more energy-efficient. Recently, new developments have been proposed that seek to reduce the number of steps needed to obtain the finished hydrate, for example, using a pressing device/screw inside the reactor. However, the energy consumption required for the hydrate formation process remains a challenge. Thus, the goal of the current work is to study the patterns and mechanisms of the hydrate formation process using small additions of hydrate formation promoters under static conditions. The study of these aspects will help solve the problem of accelerated production of gas hydrates with minimal energy consumption. New compounds have now been developed that can accelerate the formation of methane hydrate with a small amount of promoter in water, not exceeding 0.1% by weight. To test the influence of promoters on the process of hydrate formation, standard experiments are carried out under dynamic conditions with stirring. During such experiments, the time at which hydrate formation begins (the induction period), the temperature at which formation begins (supercooling), the rate of hydrate formation, and the degree of conversion of water to hydrate are assessed. This approach helps to determine the most effective compound in comparative experiments with different promoters and to select their optimal concentration. These experimental studies made it possible to examine the features of the formation of associated petroleum gas hydrate from promoter solutions under static conditions. Phase transformations were studied using high-pressure micro-differential scanning calorimetry under various experimental conditions. Visual studies of the growth mode of methane hydrate depending on the type of promoter were also carried out. The work extends the methodology for studying the effect of promoters on associated petroleum gas hydrate formation in order to identify new ways to accelerate the formation of gas hydrates without the use of mixing. This work presents the results of a study of associated petroleum gas hydrate formation using high-pressure differential scanning micro-calorimetry, visual investigation, gas chromatography, autoclave studies, and stability data. It was found that the synthesized compounds increase the conversion of water into hydrate under static conditions to as much as 96% due to a change in the growth mechanism of the associated petroleum gas hydrate.

Keywords: gas hydrate, gas storage, promoter, associated petroleum gas

Procedia PDF Downloads 33
182 Fodder Production and Livestock Rearing in Relation to Climate Change and Possible Adaptation Measures in Manaslu Conservation Area, Nepal

Authors: Bhojan Dhakal, Naba Raj Devkota, Chet Raj Upreti, Maheshwar Sapkota

Abstract:

A study was conducted to determine the production potential, nutrient composition, and variability of the most commonly available fodder trees across varying altitudes, to help optimize dry matter supply during the winter lean period. The study was carried out from March to June 2012 in the Lho and Prok Village Development Committees of the Manaslu Conservation Area (MCA), located in the Gorkha district of Nepal. A further objective of the research was to understand the impact of climate change on livestock production in relation to feed availability. The study was conducted in two parts, social and biological. Accordingly, a household (HH) survey was conducted to collect primary data from 70 HHs, focusing on the respondents' perception of the impacts of climatic variability on feeding management. The second part assessed the yield potential and nutrient composition of the four most commonly available fodder trees (M. azedirach, M. alba, F. roxburghii, F. nemoralis) within two altitude ranges (1500-2000 masl and 2000-2500 masl), using an RCB design with a 2×4 factorial combination of treatments, each replicated four times. Results revealed that the majority of farmers perceived the change in climatic phenomena as more severe within the past five years. Farmers were using different adaptation technologies, such as collecting forage from the jungle, reducing unproductive animals, utilizing fodder trees, and feeding crop by-products during periods of feed scarcity. Ranking of the fodder trees on the basis of indigenous knowledge and experience revealed that F. roxburghii was the best-preferred fodder tree species in terms of overall preferability (index value 0.72), whereas M. azedirach had the highest growth and productivity (index value 0.77); F. roxburghii also had the highest adoptability (index value 0.69) and palatability (index value 0.69). Similarly, the fresh yield and dry matter yield of each fodder tree differed significantly (P < 0.01) between altitudes and among species. Yield analysis revealed that the highest dry matter (DM) yield (28 kg/tree) was obtained for F. roxburghii, but it remained statistically similar (P > 0.05) to the other treatments. On the other hand, most of the parameters, namely ether extract (EE), acid detergent lignin (ADL), acid detergent fibre (ADF), cell wall digestibility (CWD), relative digestibility (RD), total digestible nutrients (TDN), and calcium (Ca), differed highly significantly (P < 0.01) among treatments. This indicates the scope for introducing productive and nutritive fodder tree species even at high altitude to help reduce the fodder scarcity problem during winter. The findings also revealed the scope for promoting all available local fodder tree species, as their crude protein contents were similar.
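
The 2×4 factorial analysis under an RCB design described above can be sketched as follows; the file fodder.csv and its column names are hypothetical placeholders:

```python
# Hedged sketch of the 2 (altitude) x 4 (species) factorial ANOVA in an
# RCB design; the file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("fodder.csv")  # columns: block, altitude, species, dm_yield

# Block enters as an additive effect, as is usual for RCB designs.
model = ols("dm_yield ~ C(block) + C(altitude) * C(species)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```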

Keywords: fodder trees, yield potential, climate change, nutrient composition

Procedia PDF Downloads 275
181 Climate Change Impact on Whitefly (Bemisia tabaci) Population Infesting Tomato (Lycopersicon esculentus) in Sub-Himalayan India and Their Sustainable Management Using Biopesticides

Authors: Sunil Kumar Ghosh

Abstract:

Tomato (Lycopersicon esculentus L.) is an annual vegetable crop grown in the sub-Himalayan region of northeast India throughout the year, except in the rainy season, under normal field cultivation. The crop is susceptible to various insect pests, of which the whitefly (Bemisia tabaci Genn.) causes heavy damage; a study on its occurrence and sustainable management is therefore needed for successful cultivation. The pest was active throughout the growing period. The minimum population was observed from the 38th to the 41st standard week, that is, from the 3rd week of September to the 2nd week of October. The maximum population level was maintained from the 11th to the 18th standard week, beginning in the 2nd week of March, when the peak population (0.47/leaf) was recorded. Weekly population counts of whitefly showed a non-significant negative correlation (p=0.05) with temperature and weekly total rainfall, and a significant negative correlation with relative humidity. Eight treatments were evaluated for management of the whitefly: the botanical insecticide azadirachtin; botanical extracts of Spilanthes paniculata flower, Polygonum hydropiper L. flower, tobacco leaf, and garlic; and a mixed formulation of neem and floral extract of Spilanthes; these were compared with acetamiprid. The insecticide acetamiprid was found most lethal against whitefly, providing 76.59% suppression, closely followed by the extracts of neem + Spilanthes, providing 62.39% suppression. Spectrophotometric scanning of the crude methanolic extract of Polygonum flower showed strong absorbance at wavelengths between 645-675 nm. Considering the peaks, the flower extract contains some important chemicals, such as spirilloxanthin, quercetin diglycoside, quercetin 3-O-rutinoside, procyanidin B1, and isorhamnetin 3-O-rutinoside; these chemicals are responsible for pest control. Spectrophotometric scanning of the crude methanolic extract of Spilanthes flower also showed strong absorbance at wavelengths between 645-675 nm; considering the peaks, this extract contains some important chemicals, of which polysulphide compounds are important and responsible for pest control. Neem and Spilanthes individually did not produce good results, but when used as a mixture they gave better results. The highest yield (30.15 t/ha) was recorded from acetamiprid-treated plots, followed by neem + Spilanthes (27.55 t/ha). Azadirachtin and plant extracts are biopesticides having little or no hazardous effect on human health and the environment; thus they can be incorporated in IPM programmes and organic farming in vegetable cultivation.
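
The correlation screening between weekly whitefly counts and weather variables can be sketched as follows; the file weekly.csv and its column names are hypothetical placeholders:

```python
# Hedged sketch of correlating weekly whitefly counts with weather
# variables; the file and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("weekly.csv")  # columns: count, temp, rainfall, rh

for var in ["temp", "rainfall", "rh"]:
    r, p = pearsonr(df["count"], df[var])
    verdict = "significant" if p < 0.05 else "non-significant"
    print(f"count vs {var}: r = {r:.2f} ({verdict}, p = {p:.3f})")
```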

Keywords: biopesticides, organic farming, seasonal fluctuation, vegetable IPM

Procedia PDF Downloads 286
180 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites

Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga

Abstract:

Improving the thermal performance of buildings is a major concern for the construction industry. With the increase in environmental issues, new types of construction materials are being developed, including bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer still raises some issues. Because of their high porosity, mass transfer is more significant in these materials than in mineral insulation materials; they can therefore be more sensitive to moisture disorders such as mold growth, condensation risks, or a decrease in the wall's energy efficiency. For this reason, the initial moisture content on the construction site is crucial knowledge. Measuring moisture content in a laboratory is a well-mastered task. Diverse methods exist, but the simplest, and the reference method, is gravimetric: a material is weighed dry and wet, and its moisture content is deduced mathematically. Non-destructive testing (NDT) methods are promising tools to determine the moisture content easily and quickly, in a laboratory or on construction sites; however, the quality and reliability of the measurements are influenced by several factors. Classical portable NDT devices usable on-site measure the capacitance or the resistivity of materials. Water's electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials, and some of them can be adapted to construction materials with calibration curves; in any case, these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of bio-based insulation materials, establishing which of the capacitive and resistive methods is the more accurate and which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fiber of different densities (53 and 158 kg/m3), and a mix of linen, cotton, and hemp. Since it seems important to assess the behavior of a mineral material as well, glass wool is also measured. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content; these levels are set in a climatic chamber by fixing the relative humidity at constant temperature. The mass-based moisture contents measured are taken as reference values, and the results given by the moisture meters are compared to them. A complete analysis of the measurement uncertainty is also carried out. These results are used to analyze the reliability of the moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable and which one is the most accurate. They will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, on both new-build and renovation projects.
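
The gravimetric reference measurement reduces to a simple mass ratio; a minimal sketch of the dry-basis calculation, with hypothetical sample masses:

```python
# Hedged sketch of the gravimetric (reference) moisture content used to
# benchmark the moisture meters; the sample masses are hypothetical.
def moisture_content_dry_basis(m_wet_g: float, m_dry_g: float) -> float:
    """Mass-based moisture content, in % of dry mass."""
    return 100.0 * (m_wet_g - m_dry_g) / m_dry_g

m_wet, m_dry = 12.40, 11.85  # grams, weighed before/after oven drying
u = moisture_content_dry_basis(m_wet, m_dry)
print(f"moisture content = {u:.1f} % (dry basis)")  # about 4.6 %
```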

Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing

Procedia PDF Downloads 76
179 Solids and Nutrient Loads Exported by Preserved and Impacted Low-Order Streams: A Comparison among Water Bodies in Different Latitudes in Brazil

Authors: Nicolas R. Finkler, Wesley A. Saltarelli, Taison A. Bortolin, Vania E. Schneider, Davi G. F. Cunha

Abstract:

Estimating the relative contribution of nonpoint and point sources of pollution in low-order streams is an important tool for water resources management. The location of headwaters in areas with anthropogenic impacts from urbanization and agriculture is a common scenario in developing countries. This condition can lead to conflicts among different water users and compromise ecosystem services; water pollution also contributes to exporting organic loads to downstream areas, including higher-order rivers. The purpose of this research is to preliminarily assess the nutrient and solids loads exported by water bodies located in watersheds with different types of land use in São Carlos - SP (latitude -22.0087, longitude -47.8909) and Caxias do Sul - RS (latitude -29.1634, longitude -51.1796), Brazil, using regression analysis. The variables analyzed in this study were Total Kjeldahl Nitrogen (TKN), nitrate (NO3-), Total Phosphorus (TP), and Total Suspended Solids (TSS). Data were obtained in October and December 2015 for São Carlos (SC) and in November 2012 and March 2013 for Caxias do Sul (CXS); these periods had similar weather patterns regarding precipitation and temperature. Altogether, 11 sites were divided into two groups, some classified as more pristine (SC1, SC4, SC5, SC6 and CXS2), with a predominance of native forest, and others considered impacted (SC2, SC3, CXS1, CXS3, CXS4 and CXS5), presenting larger urban and/or agricultural areas. A preliminary linear regression was applied to the flow and drainage area data of each site (R² = 0.9741), suggesting that the loads to be assessed had a significant relationship with the drainage areas. Thereafter, regression analysis was conducted between the drainage areas and the total loads for the two land use groups. The R² values were 0.070, 0.830, 0.752 and 0.455, respectively, for the TSS, TKN, NO3- and TP loads in the more preserved areas, suggesting that the loads generated by runoff are significant in these locations. However, the respective R² values for sites located in impacted areas were 0.488, 0.054, 0.519 and 0.059 for the TSS, TKN, NO3- and TP loads, indicating a less important relationship between total loads and runoff as compared to the previous scenario. This study suggests three possible conclusions that will be further explored in the full-text article, with more sampling sites and periods: a) in preserved areas, nonpoint sources of pollution are more significant in determining water quality with respect to the studied variables; b) the nutrient (TKN and TP) loads in impacted areas may be associated with point sources such as domestic wastewater discharges with inadequate treatment levels; and c) the presence of NO3- in impacted areas can be associated with runoff, particularly in agricultural areas, where the application of fertilizers is common at certain times of the year.
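
The group-wise load-area regressions reported above can be sketched as follows; the file sites.csv and its column names are hypothetical placeholders:

```python
# Hedged sketch of regressing exported loads against drainage area for each
# land use group; the file and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import linregress

df = pd.read_csv("sites.csv")  # columns: group, area_km2, tss, tkn, no3, tp

for group, sub in df.groupby("group"):        # 'preserved' vs 'impacted'
    for load in ["tss", "tkn", "no3", "tp"]:
        fit = linregress(sub["area_km2"], sub[load])
        print(f"{group:9s} {load.upper():4s} R2 = {fit.rvalue**2:.3f}")
```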

Keywords: land use, linear regression, point and non-point pollution sources, streams, water resources management

Procedia PDF Downloads 281
178 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As a key system of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrust, solar sails and other micro-thrusters. However, these systems still have some shortcomings. The ion thruster requires a high voltage or magnetic field to accelerate ions, resulting in extra systems, added mass and large volume. Laser thrust is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation-thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide thrusts in the range 0.02-5 uN/cm2. With this repulsive force, radiation is able to serve as a power source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of thrust to the released energy and decay coefficient is set up. With this initial formula, the alpha-emitting elements with half-lives longer than a hundred days are calculated and listed. As the alpha particles are emitted continuously, the residual charge in the metal film grows and affects the emitted energy distribution of the alpha particles. With the residual charge or an extra electromagnetic field, the emission of alpha particles behaves differently and is analyzed in this paper. Furthermore, three more complex situations are discussed: a radiation element generating alpha particles with several energies at different intensities, a mixture of various radiation elements, and cascaded alpha decay. Combining these makes it more efficient and flexible to adjust the thrust amplitude. The propelling model for spontaneous fission is similar to that for alpha decay, with a more complex angular distribution. A new quasi-sphere space propelling system based on the radiation-thrust is introduced, as well as the collecting and processing system for excess charge and reaction heat. The energy and spatial angular distributions of the emitted alpha particles per unit area and for a specific propelling system have been studied. As the alpha particles easily lose energy and self-absorb, the distribution is not the simple stacking of each nuclide. With changes in the amplitude and angle of the radiation-thrust, an orbital variation strategy for space debris removal is shown and optimized.
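
As a rough plausibility check on the quoted 0.02-5 uN/cm2 range, the thrust from a directed alpha flux can be estimated as the decay rate times the momentum per particle; the following sketch assumes perfectly one-sided, surface-normal emission with no self-absorption and a hypothetical areal activity, and is not the formula derived in the paper:

```python
# Hedged back-of-envelope estimate of alpha-decay radiation thrust.
# Assumes every decay ejects one alpha normal to the film surface on one
# side only; the areal activity is an assumed placeholder value.
import math

MEV_TO_J = 1.602e-13          # J per MeV
M_ALPHA = 6.645e-27           # alpha particle mass, kg

def alpha_thrust_per_cm2(E_mev: float, activity_per_cm2: float) -> float:
    """Thrust (N/cm^2) = areal decay rate x momentum per alpha."""
    p = math.sqrt(2.0 * M_ALPHA * E_mev * MEV_TO_J)  # non-relativistic
    return activity_per_cm2 * p

# 210Po, 5.4 MeV, with an assumed 1e12 decays/s per cm^2 of film:
F = alpha_thrust_per_cm2(5.4, 1e12)
print(f"thrust = {F*1e6:.2f} uN/cm2")  # about 0.11 uN/cm2 under these assumptions
```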

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 171
177 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Using fabrics to reinforce composites leads to considerably improved mechanical properties, including resistance to impact loads and the energy absorption of composites. Warp-knitted spacer fabrics (WKSF) consist of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers filled by the pile yarns and give the fabric a three-dimensional shape. Because of their unique properties, spacer fabrics are widely used today in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better moisture transfer and compressive properties and lower heat resistance than PU foam. The use of warp-knitted spacer fabric reinforced PU foam (WKSFRF) can therefore lead to composites with better energy absorption than the foam alone, enhanced mold formation, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally, together with the contribution of each structural parameter of the WKSF to the absorption of impact energy. For this purpose, WKSFs with different structures were produced: two different thicknesses, small and large mesh sizes, and meshes positioned either facing each other or not facing each other. Then 6 types of composite samples with different structural parameters were fabricated. Physical properties such as weight per unit area and fiber volume fraction were measured on 3 samples of each type of composite. A low-velocity impact with an initial energy of 5 J was carried out on 3 samples of each type of composite. The output of the low-velocity impact test is an acceleration-time (A-T) graph with many deviation points; to obtain usable results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newtonian mechanics, a force-displacement (F-D) graph was derived from the A-T graph, and the amount of energy absorbed equals the area under the F-D curve. The analysis shows the maximum energy absorption is 2.858 J, obtained by the samples reinforced with the fabric with large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of a composite divided by its fiber volume fraction. By this index, the best EAE among the samples is 21.6, which occurs in the sample with large mesh, high thickness, and meshes facing each other; the EAE of this sample is 15.6% better than the average EAE of the other composite samples. Generally, energy absorption increased on average by 21.2% with increasing thickness, by 9.5% with increasing mesh size from small to large, and by 47.3% with changing the position of the meshes from facing to non-facing.
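
The post-processing chain described above (zero-phase filtering of the A-T signal, integration, and the area under the F-D curve) can be sketched in Python as follows; the sampling rate, cutoff frequency, impactor mass, initial velocity, and input file are hypothetical placeholders, with scipy's filtfilt standing in for MATLAB's FILTFILT:

```python
# Hedged sketch of the impact post-processing: zero-phase filtering of the
# acceleration signal, integration to velocity and displacement, and the
# absorbed energy as the area under the force-displacement curve.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

fs, fc, mass, v0 = 50_000.0, 1_000.0, 5.0, 1.41  # Hz, Hz, kg, m/s (placeholders)

acc = np.loadtxt("accel.txt")        # impactor acceleration, m/s^2
t = np.arange(acc.size) / fs

b, a = butter(4, fc / (fs / 2.0))    # 4th-order low-pass
acc_f = filtfilt(b, a, acc)          # zero-phase, like MATLAB's FILTFILT

v = v0 + cumulative_trapezoid(acc_f, t, initial=0.0)  # velocity
x = cumulative_trapezoid(v, t, initial=0.0)           # displacement
F = mass * acc_f                                      # Newton's second law

E_abs = np.trapz(F, x)               # area under the F-D curve, joules
print(f"absorbed energy = {E_abs:.3f} J")
```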

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 141
176 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which can be associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increased incidence of mesothelioma or asbestos cancer. Dust is a generic term used for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes: larger particles tend to settle out of the air, whereas the smaller and lighter solid particles remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into our alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures; therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor, which was used for nanoparticle monitoring at two adjacent indoor and outdoor building demolition sites in southern Georgia. Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1054/cm³; 15.4 nm: 170 – 1690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm: …

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 404
175 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms

Authors: Hai L. Tran

Abstract:

When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error, and there is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner: existing research tends to treat the pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful analytical cognition, which is affect-free. The experiential system, by contrast, is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts affect audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.

Keywords: data, narrative, number, anecdote, storytelling, news

Procedia PDF Downloads 57
174 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach

Authors: Geraldine G. Granados Vazquez

Abstract:

Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can be understood as the hazard of dying that a group or individual faces, making this irreversible damage the condition of vulnerability. Risk is a dynamic concept, meaning that it depends on environmental, social, economic, and political conditions; thus vulnerability may only be evaluated in terms of relative parameters. This research focuses specifically on building a model that evaluates the risk or propensity of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and life conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model that assesses the vulnerability of death in past urban groups, intended to evaluate the risk of death in light of their sociohistorical context and their intrinsic biological features. The model comprises four areas for assessing vulnerability: the first three use statistical or quantitative methods, while the fourth, embodiment, is based on qualitative analysis. The four areas and their proposed techniques are: a) Demographic dynamics. From the distribution of age at the time of death, mortality is analyzed using life tables; from these, four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and age at death. Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias: height estimates reflect the individual's nutrition and health history in specific groups, while enamel hypoplasias record the individual's first years of life. c) Inequality. Space reflects the various sectors of society, also in ancient cities; in general terms, spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone's story leaves some evidence on the body, even in the bones. That leads us to think about the individual's dynamic relations in terms of time and space; consequently, the micro-analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using some Mesoamerican examples as study cases, this research demonstrates that not only the intrinsic characteristics related to the age and sex of individuals are conducive to vulnerability, but also the social and historical context that determines their state of frailty before death. An attenuating factor for past groups is that some basic aspects, such as the role they played in everyday life, escape our comprehension and are still under discussion.
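
As an illustration of the life-table step in area (a), the following minimal sketch builds survivorship and mortality-probability columns from a hypothetical distribution of ages at death:

```python
# Hedged sketch of a simple life table built from counts of deaths per age
# interval; the age classes and death counts below are hypothetical.
import numpy as np

ages = np.array([0, 5, 10, 15, 20, 30, 40, 50])    # interval start (years)
deaths = np.array([24, 9, 6, 8, 17, 17, 12, 7])    # dx: deaths per interval

lx = deaths[::-1].cumsum()[::-1]     # survivors entering each interval
qx = deaths / lx                     # probability of death within interval
print("age  dx   lx   qx")
for a, d, l, q in zip(ages, deaths, lx, qx):
    print(f"{a:3d} {d:4d} {l:4d}  {q:.2f}")
```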

Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability

Procedia PDF Downloads 195
173 The Effect of Political Characteristics on the Budget Balance of Local Governments: A Dynamic System Generalized Method of Moments Data Approach

Authors: Stefanie M. Vanneste, Stijn Goeminne

Abstract:

This paper studies the effect of the political characteristics of 308 Flemish municipalities on their budget balance in the period 1995-2011. All local governments experience the same economic and financial setting; however, some governments have high budget balances, while others have low budget balances. The aim of this paper is to explain the differences in municipal budget balances by a number of economic, socio-demographic and political variables. The economic and socio-demographic variables are used as control variables, while the focus of this paper is on the political variables. We test four hypotheses resulting from the literature: (i) the partisan hypothesis tests whether left-wing governments have lower budget balances; (ii) the fragmentation hypothesis states that more fragmented governments have lower budget balances; (iii) the hypothesis regarding the power of the government holds that more powerful governments achieve higher budget balances; and (iv) the opportunistic budget cycle tests whether politicians manipulate the economic situation before elections in order to maximize their reelection possibilities, and therefore have lower budget balances before elections. The contributions of our paper to the existing literature are multiple. First, we use the whole array of political variables and not just a selection of them. Second, we are dealing with a homogeneous database with the same budget and election rules, making it easier to focus on the political factors without having to control for differences in political systems. Third, our research extends the existing literature on Flemish municipalities, as this is the first dynamic research on local budget balances. We use a dynamic panel data model; because two lagged dependent variables enter as explanatory variables, we employ the system GMM (Generalized Method of Moments) estimator, sketched below. This is the most suitable estimator, as we are dealing with political panel data that is rather persistent. Our empirical results show that the ideological position and the power of the coalition are of less importance in explaining the budget balance. The political fragmentation of the government, on the other hand, has a negative and significant effect on the budget balance: the more parties in a coalition, the worse the budget balance, ceteris paribus. Our results also provide evidence of an opportunistic budget cycle; budget balances are lower in pre-election years relative to other years, as incumbents try to increase their reelection possibilities. An additional finding is that the incremental effect of the budget balance is very important and should not be ignored, as it is in much empirical research. The coefficients of the lagged dependent variables are always positive and highly significant, showing that the budget balance is subject to incrementalism: it is not possible to change the entire policy from one year to the next, so actions taken in recent years still affect the current budget balance. Only a relatively small amount of research on the budget balance takes this considerable incremental effect into account. Our findings survive several robustness checks.
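
The dynamic specification described above, with two lags of the dependent variable, can be written in the following general form; this is a sketch with assumed notation (BB for the budget balance, X for the economic and socio-demographic controls, P for the political variables), not the paper's exact equation:

```latex
% Hedged sketch of a dynamic panel specification of the kind described,
% estimated by system GMM; mu_i denotes a municipal fixed effect.
\begin{equation}
BB_{i,t} = \alpha_1 BB_{i,t-1} + \alpha_2 BB_{i,t-2}
         + \beta' X_{i,t} + \gamma' P_{i,t} + \mu_i + \varepsilon_{i,t}
\end{equation}
```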

Keywords: budget balance, fragmentation, ideology, incrementalism, municipalities, opportunistic budget cycle, panel data, political characteristics, power, system GMM

Procedia PDF Downloads 281
172 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric

Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff

Abstract:

When grown only to nanometric sizes, metallic particles (e.g., Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents a collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties, which can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high-vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ with a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The room-temperature-deposited samples annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface; therefore the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, lateral particle sizes are reduced at higher temperatures, while particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, so the GISAXS-deduced sizes partially reflect the dimensions of particle agglomerates. Lateral maxima in GISAXS indicate good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This correlation is much stronger in the capped and lower-temperature-deposited samples. Dewetting is much more vigorous in the non-capped sample, and since the nanoparticles form in a range of sizes, the correlation recedes with both deposition and annealing temperature. Surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of the surface features were larger compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy; the relatively weak plasmonic effect observed could be explained by incomplete dewetting or partial interconnection of the formed particles.

Keywords: copper, GISAXS, nanoparticles, plasmonics

Procedia PDF Downloads 101
171 Liquid Waste Management in Cluster Development

Authors: Abheyjit Singh, Kulwant Singh

Abstract:

There is a gradual depletion of the water table in the earth's crust, and it is necessary to conserve water and reduce its scarcity. This can only be done by rainwater harvesting, recycling of water, judicious consumption/utilization of water, and the adoption of suitable treatment measures. Domestic waste is generated in residential areas, commercial settings, and institutions. Waste, in general, is unwanted and undesirable, yet an inevitable and inherent product of social, economic, and cultural life. In a cluster, a need-based system is formed in which the project is designed for systematic analysis, collection of sewage from the cluster, treating it, and then recycling it for multifarious uses. Liquid waste may consist of sanitary sewage/domestic waste, industrial waste, storm waste, or mixed waste. Sewage contains both suspended and dissolved particles, and the total amount of organic material is related to the strength of the sewage. Untreated domestic sanitary sewage has a BOD (Biochemical Oxygen Demand) of 200 mg/l and TSS (Total Suspended Solids) of about 240 mg/l; industrial waste may have BOD and TSS values much higher than those of sanitary sewage. Another type of impurity in wastewater is plant nutrients, especially compounds of nitrogen (N) and phosphorus (P): raw sanitary sewage contains approximately 35 mg/l of nitrogen and 10 mg/l of phosphorus. Finally, the pathogens in the waste are expected to be proportional to the concentration of fecal coliform bacteria; the coliform concentration in raw sanitary sewage is roughly 1 billion per liter. The choice of sewage disposal technique is universally governed by the following conditions: the nature of the soil formation, the availability of land, the quantity of sewage to be disposed of, the degree of treatment, and the relative cost of the disposal technique. The adopted Thappar Model (India) has the following design components: a Screen Chamber, a Digestion Tank, a Skimming Tank, a Stabilization Tank, an Oxidation Pond, and a Water Storage Pond. The Screen Chamber is used to remove plastics and other solids; the Digestion Tank is designed as an anaerobic tank with a retention period of 8 hours; the Skimming Tank has an outlet kept 1 meter below the surface, maintaining an anaerobic condition at the bottom and also helping in organic solids removal; the Stabilization Tank is designed as a primary settling tank; the Oxidation Pond is a facultative pond with a depth of 1.5 meters; and the Storage Pond is designed as per the requirement. The cost of the Thappar model is Rs. 185 lakh per 3,000 to 4,000 population, and the area required is 1.5 acres. The complete structure will be lined as per the requirement, and the annual maintenance will be Rs. 5 lakh per year. The project is useful for water conservation and provides sewage water for irrigation; it decreases BOD, and there will be no further damage to community assets or economic loss to the farming community from inundation. There will be a healthy and clean environment in the community.

Keywords: collection, treatment, utilization, economic

Procedia PDF Downloads 46
170 Muhammad Bin Abi Al-Surūr Al-Bakriyy Al-Ṣiddīqiyy and His Approach to Interpretation: Sūrat Al-Fatḥ as an Example

Authors: Saleem Abu Jaber

Abstract:

Born into a Sufi family in which his father and other relatives, as well as additional community members, were particularly rooted in scholarly and cultural inquiry, Muḥammad ʾAbū al-Surūr al-Bikriyy al-Ṣidīqiyy (1562–1598 CE) was a prominent scholar of his time. Despite his relative youth, he became influential through his writings, which included Quranic exegeses and works on Hadith, Arabic grammar, jurisprudence, and Sufism. He was also a practicing physician and was the first person to be named Mufti of the Sultanate in Egypt. He was active in the political arena, having been close to the Ottoman sultans, providing them with support and counsel; he strove for their empowerment and victory and often influenced their political convictions and actions. Al-Ṣidīqiyy enjoyed the patronage of the Ottoman sultans of his day, who always promoted studies in the Islamic sciences and were keen to support scholars and gain their trust. This paper addresses al-Ṣidīqiyy's legacy as a Quranic commentator, focusing on his exegesis (tafsīr) of Sūrat al-Fatḥ (48), written in 1589. It survives in a manuscript held at the Süleymaniye Library in Istanbul, consisting of one volume of 144 pages. It is believed that no other manuscript containing the text of this exegesis is to be found in any other library or institute for Arabic manuscripts. According to al-Ṣabbāġ (1995), al-Ṣidīqiyy had written a complete commentary on the Quran, but efforts to recover it have unearthed only the current commentary, as well as those on Sūrat al-Kahf (18), Sūrat al-ʾAnʿām (6), and ʾĀyat al-Kursī (2:255). The only critical edition published to date is that of Sūrat al-Kahf; the other two are currently being prepared for publication. The paucity of scholarly studies on the works of al-Ṣidīqiyy renders the current study particularly significant, as it provides an introduction to al-Ṣidīqiyy's exegesis, a synopsis of the biographical and cultural background of its author and his family, and a critical evaluation of his scholarly contribution. It introduces the manuscript on which this study is based, elaborates on the structure and rationale of the exegesis and on its attribution to al-Ṣidīqiyy, and subsequently evaluates its overall significance for the understanding of Sufi approaches to Quranic interpretation in 16th-century Ottoman Egypt. An analysis of al-Ṣidīqiyy's approach to interpreting the Quran leads to the definitive conclusion that it indeed reflects Sufi principles. For instance, when citing other Sufi commentators, including his own ancestors, he uses the epithets mawlāna 'our elder, our patron' and al-ʾustāḏ 'the master', which are unique to Sufi parlance. Crucially, his interpretation is written in a realistic, uncomplicated, fetching style, as was customary among Sufi scholars of his time, whose leaning was one of clarity, based on their perception of themselves as being closest to Muḥammad and his family, and by extension to the sunna, as reflected in the traditional narrative of the Prophet's biography and teachings.

Keywords: Quran, Sufism, manuscript, exegesis, surah, Al-Fath, sultanate, sunna

Procedia PDF Downloads 23
169 Remote Radiation Mapping Based on UAV Formation

Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov

Abstract:

High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous multicopter UAV swarm in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its small size and ease of interfacing with the UAV's onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform with a fully autonomous flight feature is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed, using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for the heading angle calculation. Each UAV carries a CZT sensor, and the real-time radiation data are used to calculate a bulk heading vector that produces the swarm's source-seeking behavior, as sketched below. A spinning formation is also studied in both cases to improve gradient estimation near a radiation source. In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
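
A minimal sketch of the gradient-estimation step: fit a local linear intensity field to the four tetrahedron UAVs' positions and count rates by least squares, and take the normalized gradient as the swarm heading; all positions and rates below are hypothetical:

```python
# Hedged sketch: fit a local linear field I(x) ~ I0 + g.x to the four
# UAVs' positions and count rates; the normalized gradient g gives the
# swarm's source-seeking heading. Positions and rates are hypothetical.
import numpy as np

pos = np.array([[0.0, 0.0, 0.0],     # UAV positions, meters
                [2.0, 0.0, 0.0],
                [1.0, 1.7, 0.0],
                [1.0, 0.6, 1.6]])
rate = np.array([120.0, 180.0, 150.0, 140.0])  # counts/s at each UAV

A = np.hstack([np.ones((4, 1)), pos])          # columns: 1, x, y, z
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
g = coef[1:]                                   # estimated gradient vector
heading = g / np.linalg.norm(g)                # unit heading toward source
print("heading toward source:", np.round(heading, 2))
```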

Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation

Procedia PDF Downloads 54
168 Particle Swarm Optimization for Modified Spencer Model Under Different Excitations

Authors: Fatemeh Behbahani, Mehdi Behbahani

Abstract:

New materials have enabled technological advances that allow buildings to suppress vibration effectively. Recently, controllable fluids and their applications have attracted increasing attention from researchers because of their advantages, including decreased power requirements, mechanical simplicity, and high power capability. The fluids used in magneto-rheological (MR) dampers have also improved in their mechanical characteristics. The damper force is adjusted through the current excitation applied to the electromagnet within the damper. To take advantage of this remarkable device, a good model is needed that can accurately estimate the damping force according to the damper's strongly hysteretic behavior. Owing to its wide coverage of the nonlinear field of the hysteresis loop among parametric models, the Spencer model has been commonly used to describe the hysteresis behavior of MR dampers. Despite this, essential differences remain between simulation and experimental outcomes. A new model based on the Spencer model is used here to simulate the damper's nonlinear hysteretic behavior, taking the excitation parameters frequency, current, and amplitude, together with displacement and velocity, as input variables. This proposed model has an advantage over the Spencer model, whose parameters are historically uncertain, in that it can be re-evaluated whenever a new combination of excitation parameters is preferred. Experiments on a damping-force measuring machine were carried out to validate the simulations, performed in MATLAB, as shown in a previous paper referenced in the text. This paper aims to find the optimal values of the parameters of the proposed model using a biologically inspired algorithm called Particle Swarm Optimization (PSO). The working principles of the classical PSO algorithm are discussed for a better understanding of the basic framework of a PSO algorithm, and its functionality is demonstrated in MATLAB. A PSO algorithm's design is inspired by bird flocking and starts with a randomly generated population. Fitness values are used to evaluate the population; the algorithm updates the population with random strategies, checks for optimal parameters, and resets the simulation as needed. However, not all algorithms guarantee success. In the displacement, velocity, and time curves, good agreement was found between the predictions and the experimental work, with an acceptable error, confirming that the model can correctly measure the hysteresis damping force; the error decreased relative to the Spencer model.
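
As an illustration of the parameter-search loop described above, here is a minimal classical PSO sketch; the objective function, bounds, and target values are hypothetical placeholders rather than the authors' MATLAB implementation or actual damper model:

```python
# Hedged sketch of classical PSO for fitting model parameters; the
# objective function and parameter bounds are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    # Placeholder: sum of squared residuals between model and measurement.
    return np.sum((theta - np.array([1.5, -0.3, 0.8])) ** 2)

n_particles, n_dim, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social

x = rng.uniform(-5, 5, (n_particles, n_dim))   # particle positions
v = np.zeros_like(x)                           # particle velocities
pbest = x.copy()                               # personal bests
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()         # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best parameters:", np.round(gbest, 3))
```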

Keywords: modeling and simulation, semi-active control, MR damper RD-8040-1, particle swarm optimization, magnetorheological fluid, Spencer-based model

Procedia PDF Downloads 15
167 Examination of Indoor Air Quality of Naturally Ventilated Dwellings During Winters in Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as "the air quality within and around buildings, especially as it relates to the health and comfort of building occupants". According to the 2021 report by the Energy Policy Institute at Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. Currently, the urban population spends 90% of its time indoors; this scenario raises concerns for occupant health and well-being. The built environment can affect health directly and indirectly through immediate or long-term exposure to indoor air pollutants. Health effects associated with indoor air pollutants include eye, nose, and throat irritation, respiratory diseases, heart disease, and even cancer. This study attempts to demonstrate the causal relationship between indoor air quality and its determining aspects. Detailed indoor air quality audits were conducted in residential buildings located in Kolkata, India, in December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is also the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are most crucial due to unfavorable environmental conditions: while emissions remain fairly constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so that it is consequently breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors such as traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses indoor air quality based on factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window-to-floor-area ratio, windward or leeward side openings, natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, urban housing, air pollution, natural ventilation, architecture, urban issues

Procedia PDF Downloads 97
166 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to > 50% of all phosphorus fertilizer used annually. A diet rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese, and copper, because phytate salts are excreted by humans and non-ruminant animals such as poultry, swine, and fish, which have in common very low phytase activity and, consequently, a limited ability to digest and utilize phytic acid; phytic acid-derived phosphorus in animal waste thus contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N; 20º26´E) during the two growing seasons 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows of 0.6 m2 area (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten ("Perten", Sweden) (particle size < 500 μm) and the flour was used for the analysis. Phytic acid grain content was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among the selected genotypes of bread and durum wheat, ii) the predominant source of variation with respect to genotype (G), environment (E) and genotype × environment interaction (GEI) in the multi-environment trial, and iii) the influence of climatic variables on the GEI for phytic acid content. The analysis of variance showed that the variation of phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed in bread wheat. Phytic acid content on a dry mass basis was in the range 14.21-17.86 mg g-1, with an average of 16.05 mg g-1, for bread wheat, and 14.63-16.78 mg g-1, with an average of 15.91 mg g-1, for durum wheat. The average-environment coordination view of the genotype plus genotype × environment (GGE) biplot was used to select the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability combined with a lower level of phytic acid. The most desirable genotypes of bread and durum wheat were Apache and 37EDUYT /07 No. 7849. Models of climatic factors were useful in interpreting a very high percentage (> 91%) of the GEI for phytic acid content; they included relative humidity in June, sunshine hours in April, mean temperature in April and winter moisture reserves for the bread wheat genotypes, and precipitation in June and April, maximum temperature in April and mean temperature in June for the durum wheat genotypes.
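
As an illustration of the variance partitioning described above, the sketch below shows how the G, E, and GEI contributions could be separated with a two-way ANOVA on a long-format trial table; the file and column names are assumptions for illustration, not the authors' data or scripts.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long-format data: one row per plot, with genotype, environment
# (site x season), and phytic acid content (mg/g dry mass).
df = pd.read_csv("phytic_acid_trials.csv")  # hypothetical file

# Two-way ANOVA partitioning genotype (G), environment (E), and G x E.
model = ols("phytic_acid ~ C(genotype) + C(environment)"
            " + C(genotype):C(environment)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)  # the sum_sq column shows the relative weight of G, E, GEI
```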

Keywords: genotype × environment interaction, phytic acid, stability, variability

Procedia PDF Downloads 362
165 Analyzing the Websites of Institutions Publishing Global Rankings of Universities: A Usability Study

Authors: Nuray Baltaci, Kursat Cagiltay

Abstract:

University rankings, a relatively recent phenomenon, are at the center of attention and followed closely by different parties. Students are interested in university rankings in order to make informed decisions when selecting their candidate future universities. University administrators and academicians can use them to evaluate their universities' relative performance compared to other institutions in terms of, among other things, academic, economic, and international outlook criteria. Local institutions may use those ranking systems, as TUBITAK (The Scientific and Technological Research Council of Turkey) and YOK (Council of Higher Education) do in Turkey, to support students and award scholarships for undergraduate and graduate studies abroad. Given that ranking systems concern this many different parties, the importance of ranking institutions having clear, easy-to-use, and well-designed websites is apparent. In this paper, a usability study of the websites of four global university ranking institutions, namely the Academic Ranking of World Universities (ARWU), Times Higher Education, QS, and University Ranking by Academic Performance (URAP), was conducted. A user-based approach was adopted, and usability tests were conducted with 10 graduate students at Middle East Technical University in Ankara, Turkey. Before the formal usability tests, a pilot study was completed and the necessary changes to the study settings were made. Participants' demographics, task completion times, paths traced to complete tasks, and satisfaction levels on each task and website were collected. Based on the analyses of the collected data, the ranking websites were compared in terms of the efficiency, effectiveness, and satisfaction dimensions of usability, as defined in ISO 9241-11. Results showed that none of the selected ranking websites is superior to the others in terms of overall effectiveness and efficiency. The only remarkable result was that the highest average task completion times for two of the designed tasks belong to the Times Higher Education Rankings website. Evaluation of user satisfaction on each task and each website produced broadly similar results: when the satisfaction levels of the participants on each task are examined, the highest scores belong to the ARWU and URAP websites, while the overall satisfaction levels for each website show that the URAP website has the highest score, followed by the ARWU website. In addition, design problems and strong design features of those websites reported by the participants are presented in the paper. Since the study mainly addresses the design problems of the URAP website, the focus is on this site. Participants reported 3 main design problems with the website: the unaesthetic and unprofessional design style of the website, the improper map location on ranking pages, and the improper listing of field names on the field ranking page.
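
As a rough illustration of how the three ISO 9241-11 dimensions can be summarized from logged test data, the following sketch aggregates hypothetical per-task records; the file and column names are assumptions, not the study's actual instruments.

```python
import pandas as pd

# One row per (participant, website, task): completion flag, time, rating.
logs = pd.read_csv("usability_logs.csv")  # hypothetical file

summary = logs.groupby("website").agg(
    effectiveness=("completed", "mean"),         # task completion rate
    efficiency=("completion_time_s", "mean"),    # mean task time (lower = better)
    satisfaction=("satisfaction_1to5", "mean"),  # mean per-task rating
)
print(summary.sort_values("satisfaction", ascending=False))
```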

Keywords: university ranking, user-based approach, website usability, design

Procedia PDF Downloads 380
164 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

The feasibility of extending the molecular imprinting technique to complex biomolecules is demonstrated in this research. This technique is promising in diverse applications in areas such as drug delivery, disease diagnosis, catalysis, and impurity detection, as well as the treatment of various complications. While molecularly imprinted polymers (MIPs) remain robust for synthesizing materials with remarkable binding sites that have high affinities for specific molecules of interest, extending their usage to complex biomolecules has so far remained elusive. This work reports the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than the whole template produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity in biological systems. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structure, which makes their imprinting challenging. In addressing this challenge, our attention is directed toward targeted delivery, which uses molecular imprinting on the particle surface so that these particles may recognize overexpressed proteins on the target cells. Our goal is thus to make nanoparticle surfaces that specifically bind to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins and motifs with conserved receptors of transferrin as the respective templates for MIPs, a significant improvement in MIP sensitivity to the binding of complex protein templates was noted. Through fluorescence correlation spectroscopy (FCS) measurements of the size of the protein corona after incubation of the synthesized nanoparticles with proteins, we noted a high binding affinity of the MIPs for their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound on the respective MIPs when incubated with similar concentrations of a protein mixture. Conclusion: Our findings have shown that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique merits of the technique, including high sensitivity and selectivity, relative ease of synthesis, production of materials with higher physical robustness, and higher stability, can be extended to more templates that were previously not suitable candidates despite their abundance and usage within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass templates

Procedia PDF Downloads 24
163 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry

Authors: Dominik Halstrup, Marlene Schriever

Abstract:

Industry, as one of the main energy consumers, is of critical importance in transforming the energy system towards renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of electric energy storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey of 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (Dedicated) User, Impeded User, Forced User, and (Dedicated) Non-User. Both descriptive and bivariate analyses were deployed to identify the level of acceptance in the different organizations. After a factor analysis was conducted, variables were grouped to form independent acceptance factors. Of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS and can therefore be considered 'Dedicated Users'. The remaining 17 organizations have a positive attitude but have not implemented ESS yet. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. These are examples of 'Forced Users', where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered '(Dedicated) Non-Users'. The results indicate that the majority of the respondents have not yet thought ESS through for their own organization. For the majority of the sample, one can therefore not speak of critical distance but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the relative state of acceptance of ESS in the manufacturing industry, as well as current reasons for hindrance and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channeled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favor existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission, and distribution are becoming increasingly intertwined. New technologies and business models, such as ESS or cooperative arrangements entering the market, increase the number of stakeholders. Organizations need to find their place within this array of stakeholders.

Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry

Procedia PDF Downloads 196
162 Association of Temperature Factors with Seropositive Results against Selected Pathogens in Dairy Cow Herds from Central and Northern Greece

Authors: Marina Sofia, Alexios Giannakopoulos, Antonia Touloudi, Dimitris C Chatzopoulos, Zoi Athanasakopoulou, Vassiliki Spyrou, Charalambos Billinis

Abstract:

The fertility of dairy cattle can be affected by heat stress when the ambient temperature rises above 30°C and the relative humidity ranges from 35% to 50%. The present study was conducted on dairy cattle farms during the summer months in Greece and aimed to identify the serological profile against pathogens that could affect fertility and to associate positive serological results at herd level with temperature factors. A total of 323 serum samples were collected from clinically healthy dairy cows of 8 herds located in Central and Northern Greece. ELISA tests were performed to detect antibodies against selected pathogens that affect fertility, namely Chlamydophila abortus, Coxiella burnetii, Neospora caninum, Toxoplasma gondii and Infectious Bovine Rhinotracheitis Virus (IBRV). Eleven climatic variables were derived from WorldClim version 1.4, and ArcGIS v10.1 software was used for the analysis of the spatial information. Five different MaxEnt models, one for each pathogen, were applied to associate the temperature variables with the locations of seropositive Chl. abortus, C. burnetii, N. caninum, T. gondii and IBRV herds. The logistic outputs were used for the interpretation of the results. ROC analyses were performed to evaluate the goodness of fit of the models' predictions, and jackknife tests were used to identify the variables with a substantial contribution to each model. The seropositivity rates of the pathogens varied among the 8 herds (0.85-4.76% for Chl. abortus, 4.76-62.71% for N. caninum, 3.8-43.47% for C. burnetii, 4.76-39.28% for T. gondii and 47.83-78.57% for IBRV). The variables of annual temperature range, mean diurnal range and maximum temperature of the warmest month contributed to all five models. The regularized training gains, the training AUCs and the unregularized training gains were estimated. The mean diurnal range gave the highest gain when used in isolation and decreased the gain the most when omitted in the two models for seropositive Chl. abortus and IBRV herds. The annual temperature range increased the gain when used alone and decreased the gain the most when omitted in the models for seropositive C. burnetii, N. caninum and T. gondii herds. In conclusion, antibodies against Chl. abortus, C. burnetii, N. caninum, T. gondii and IBRV were detected in most herds, suggesting circulation of pathogens that could cause infertility. The results of the spatial analyses demonstrated that the annual temperature range, mean diurnal range and maximum temperature of the warmest month could positively affect the likelihood of the pathogens' presence. Acknowledgment: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-01078).
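
MaxEnt itself is a dedicated presence-background modeling package; as a rough stand-in for the workflow described above, the sketch below fits a logistic model to presence versus background points using the three temperature covariates named in the abstract, reports a training AUC, and runs a jackknife-style check with each variable in isolation. The file and column names are assumptions, not the authors' data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Rows: seropositive herd locations (presence = 1) plus background points
# (presence = 0), each with WorldClim-derived temperature covariates.
pts = pd.read_csv("herd_points_ibrv.csv")  # hypothetical file, one per pathogen
X = pts[["annual_temp_range", "mean_diurnal_range", "max_temp_warmest_month"]]
y = pts["presence"]

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC: {auc:.2f}")

# Jackknife-style check: refit with each variable used in isolation.
for col in X.columns:
    m = LogisticRegression(max_iter=1000).fit(X[[col]], y)
    print(col, roc_auc_score(y, m.predict_proba(X[[col]])[:, 1]))
```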

Keywords: dairy cows, seropositivity, spatial analysis, temperature factors

Procedia PDF Downloads 170
161 An Early Intervention Framework for Supporting Students’ Mathematical Development in the Transition to University STEM Programmes

Authors: Richard Harrison

Abstract:

Developing competency in mathematics and related critical thinking skills is essential to the education of undergraduate students of Science, Technology, Engineering and Mathematics (STEM). Recently, the HE sector has been impacted by a seemingly widening disconnect between the mathematical competency of incoming first-year STEM students and their entrance qualification tariffs. Despite relatively high grades in A-Level Mathematics, students may initially lack fundamental skills in key areas such as algebraic manipulation and have a limited capacity to apply problem-solving strategies. Compounded by the compensatory measures applied to entrance qualifications during the pandemic, there has been an associated decline in student performance on introductory university mathematics modules. In the UK, a number of online resources have been developed to help scaffold the transition to university mathematics. In general, however, these do not offer a structured learning journey focused on individual developmental needs, nor do they offer an experience coherent with the teaching and learning characteristics of the destination institution. In order to address some of these issues, a bespoke framework has been designed and implemented on our VLE in the Faculty of Engineering & Physical Sciences (FEPS) at the University of Surrey. Called the FEPS Maths Support Framework, it was conceived to scaffold the mathematical development of individuals prior to entering the university and during the early stages of their transition to undergraduate studies. More than 90% of our incoming STEM students voluntarily participate in the process. Students complete a set of initial diagnostic questions in the late summer. Based on their performance on, and feedback from, these questions, they are subsequently guided to self-select specific mathematical topic areas for review using our proprietary resources. This further assists students in preparing for discipline-related diagnostic tests. The framework helps to identify students who are mathematically weak and facilitates early intervention to support students according to their specific developmental needs. This paper presents a summary of results from a rich data set captured by the framework over a 3-year period. Quantitative data provide evidence that students have engaged and developed during the process; this is further supported by process evaluation feedback from the students. Ranked performance data associated with seven key mathematical topic areas and eight engineering and science discipline areas reveal interesting patterns which can be used to identify more generic relative capabilities of the discipline-area cohorts. In turn, this facilitates evidence-based management of the mathematical development of the new cohort, informing any associated adjustments to teaching and learning at a more holistic level. Evidence is presented establishing our framework as an effective early intervention strategy for addressing the sector-wide issue of supporting the mathematical development of STEM students transitioning to HE.

Keywords: competency, development, intervention, scaffolding

Procedia PDF Downloads 43
160 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an APM during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of the engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
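
The following minimal sketch illustrates the general idea of an error-driven lookup-table correction, reduced to a one-dimensional fuel-flow table over altitude for clarity; the real tables span multiple flight-condition dimensions, and the class name, gain, and numerical values here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

class AdaptiveTable1D:
    """Fuel-flow lookup table corrected in flight: each new sensor
    measurement nudges the surrounding table nodes towards the observation."""

    def __init__(self, grid, values, gain=0.2):
        self.grid = np.asarray(grid, dtype=float)       # breakpoints, e.g. altitude [ft]
        self.values = np.asarray(values, dtype=float)   # fuel flow [kg/h]
        self.gain = gain                                # learning rate of the correction

    def predict(self, x):
        return np.interp(x, self.grid, self.values)

    def update(self, x, measured):
        """Distribute the prediction error to the two bracketing nodes,
        weighted by their interpolation weights."""
        i = np.clip(np.searchsorted(self.grid, x) - 1, 0, len(self.grid) - 2)
        w = (x - self.grid[i]) / (self.grid[i + 1] - self.grid[i])
        err = measured - self.predict(x)
        self.values[i] += self.gain * (1 - w) * err
        self.values[i + 1] += self.gain * w * err

# Illustrative use: table initialized from FCOM-like data, then refined
# with in-flight measurements of the actual (degraded) fuel flow.
table = AdaptiveTable1D(grid=[30000, 35000, 40000], values=[1600, 1450, 1350])
for alt, fuel_flow in [(36000, 1475.0), (33000, 1580.0), (38000, 1430.0)]:
    table.update(alt, fuel_flow)
print(table.predict(36000))
```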

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 229
159 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya

Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui

Abstract:

Health outcomes have improved in Kenya since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of Covid-19 and the increase in non-communicable diseases pose a challenge and a constraint to an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. 14 health workers, ranging from nurses, doctors, and public health officers to clinical officers and pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality. Data analysis showed that communicable diseases are major causes of morbidity and mortality, while non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, poor sanitation and hand washing, unsafe sex, and malnutrition. A key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. The study recommends improving targeted disease control with effective and equitable resource allocation; developing strong infectious disease control mechanisms; improving the quality of data for decision-making and strengthening electronic data-capture systems; increasing investments in the health workforce to improve health service provision and the achievement of universal health coverage; creating a favorable environment to retain health workers; filling staffing gaps resulting in shortages of doctors (7%); developing a multi-sectoral approach to health workforce planning and management; investing in mechanisms that generate contextual evidence on current and future health workforce needs; ensuring the retention of a qualified, skilled, and motivated health workforce; and delivering integrated, people-centered health services.

Keywords: multi-sectoral approach, equity, people-centered, health workforce retention

Procedia PDF Downloads 76
158 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has been marked by an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching and, by extension, about the very principles of architectural education. This paper presents the results of research on the following problem: the relation between the forms of representation and architectural design teaching-learning processes. The research had as its object the educational models of two schools, the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP), and was guided by three main objectives: to characterize the educational model followed in each school with a focus on the representative component and its role; to interpret the relation between forms of representation and architectural design teaching-learning processes; and to consider their possibilities for valorization. Methodologically, the research was conducted according to a qualitative embedded multiple-case study design. The object, i.e., the educational model, was approached in both the POLIMI and FAUP cases considering its context and three embedded units of analysis: the educational purposes, principles, and practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC allowed the three embedded units of analysis to be related to the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal the importance of the representative component in the educational model of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP's model it plays a transversal role, according to an idea of 'general training through hand drawing'. In fact, the difference between the models with respect to representation can be partially understood through the level of importance each gives to hand drawing. Regarding the teaching of architectural design, the two cases differ in their relation to the representative component: while at POLIMI the forms of representation serve an essentially instrumental purpose, at FAUP they tend to be considered also for their methodological dimension. It seems that the possibilities for valuing these models reside precisely in the relation between forms of representation and architectural design teaching. It is expected that the knowledge base developed in this research may make three main contributions: to contribute to the maintenance of the educational models of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and, through the critical and objective framing of the problem underlying the forms of representation and their relation with architectural design teaching, to contribute to the broader discussion concerning contemporary challenges in architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 94
157 The Influence of the State on the Internal Governance of Universities: A Comparative Study of Quebec (Canada) and Western Systems

Authors: Alexandre Beaupré-Lavallée, Pier-André Bouchard St-Amant, Nathalie Beaulac

Abstract:

The question of the internal governance of universities is a political and scientific debate in the province of Quebec (Canada). Governments have called or set up inquiries on the subject on three separate occasions since the complete overhaul of the educational system in the 1960s: the Parent Commission (1967), the Angers Commission (1979) and the Summit on Higher Education (2013). All three produced reports that highlight the constant tug-of-war for authority and legitimacy within universities. Past and current research covering Quebec universities has studied several aspects of internal governance: the structure as a whole or only some of its parts, the importance of certain key aspects such as collegiality or strategic planning, or of stakeholders such as students or administrators. External governance has also been studied, though, as with internal governance, research has so far only covered well-delineated topics such as financing policies or the overall impact of wider societal changes such as New Public Management (NPM). NPM is often brought up as a factor that influenced overall State policies like 'steering-at-a-distance' or internal shifts towards 'managerialism'. Yet, to the authors' knowledge, there is no study that specifically maps how the Quebec State formally influences internal governance. In addition, most studies of the Quebec university system are not comparative in nature. This paper presents a portion of the results produced by a 2022-2023 study that aims at filling these last two gaps in knowledge. Building on existing governmental, institutional, and scientific papers, we documented the legal and regulatory framework of the Quebec university system and of twenty-one other university systems in North America and Europe (2 in Canada, 2 in the USA, 16 in Europe, with the addition of the European Union as a distinct case). This allowed us to map the presence (or absence) of mandatory structures of governance enforced by States, as well as their composition. Then, using Clark's 'triangle of coordination', we analyzed each system to assess the relative influences of the market, the State and the collegium upon the governance model put in place. Finally, we compared all 21 non-Quebec systems to characterize the province's policies from an internal perspective. Preliminary findings are twofold. First, when all systems are placed on a continuum ranging from 'no State interference in internal governance' to 'State-run universities', Quebec comes in the middle of the pack, albeit with a slight lean towards institutional freedom. When it comes to overall governance (such as Boards and Senates), the dual nature of the Quebec system, with its public university and its co-opted yet historically private (or ecclesiastical) institutions, in fact mimics the duality of all university systems. Second, however, is the sheer abundance of legal and regulatory mandates from the State that, while not expressly addressing internal governance, seem to require de facto modifications of internal governance structures and dynamics to ensure institutional conformity with said mandates. This study is only a fraction of the research that is needed to better understand State-university interactions regarding governance. We hope it will set the stage for future studies.

Keywords: internal governance, legislation, Quebec, universities

Procedia PDF Downloads 56
156 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals seen at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and the frequencies of death due to atherosclerotic events in Campinas were acquired from the official Brazilian database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods; for cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), with both measures increasing in the winter and decreasing in the summer. On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in whom triglycerides did not show rhythmicity. The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude, and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in the inverse direction: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures, high concentrations of LDL-C and HDL-C, and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as the variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
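
For readers unfamiliar with cosinor analysis, the sketch below fits a single-component cosinor (mesor, amplitude, acrophase) by ordinary least squares using the standard cosine/sine decomposition, on synthetic LDL-C-like data with a winter peak; the values are illustrative, not the study's data.

```python
import numpy as np

def cosinor_fit(t_days, y, period=365.25):
    """Single-component cosinor: y ~ M + A*cos(2*pi*t/period - phi),
    fitted linearly via y ~ M + beta*cos(wt) + gamma*sin(wt)."""
    omega = 2 * np.pi * t_days / period
    X = np.column_stack([np.ones_like(t_days), np.cos(omega), np.sin(omega)])
    (mesor, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(beta, gamma)     # A = sqrt(beta^2 + gamma^2)
    acrophase = np.arctan2(gamma, beta)   # phi: timing of the rhythm's peak
    return mesor, amplitude, acrophase

# Illustrative data: an annual LDL-C rhythm plus noise over 8 years.
rng = np.random.default_rng(1)
t = rng.uniform(0, 8 * 365.25, 5000)                 # visit days
ldl = (128 + 6 * np.cos(2 * np.pi * (t - 200) / 365.25)
       + rng.normal(0, 12, t.size))
print(cosinor_fit(t, ldl))  # recovers mesor ~128, amplitude ~6
```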

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 98