Search results for: levels of knowledge
709 Understanding Governance of Biodiversity-Supporting and Edible Landscapes Using Network Analysis in a Fast Urbanising City of South India
Authors: M. Soubadra Devy, Savitha Swamy, Chethana V. Casiker
Abstract:
Sustainable smart cities are emerging as an important concept in response to the exponential rise in the world’s urbanizing population. While earlier only technical, economic and governance-based solutions were considered, more and more layers are being added in recent times. With the prefix of 'sustainability', solutions which help in the judicious use of resources without negatively impacting the environment have become critical. We present a case study of Bangalore city, which has transformed from being a garden city and pensioners' paradise to being an IT city with a huge, young population from different regions and diverse cultural backgrounds. This has had a big impact on the green spaces in the city and the biodiversity that they support, as well as on farming/gardening practices. Edible landscapes comprising farm lands, home gardens and neighbourhood parks (NPs henceforth) were examined. The land prices of areas having NPs were higher than those without, indicating an appreciation of their aesthetic value. NPs were part of old and new residential areas largely managed by the municipality. They comprised manicured gardens which were similar in vegetation structure and composition. Results showed that NPs that occurred in higher density supported reasonable levels of biodiversity. In situations where NPs occurred in lower density, the presence of a larger green space such as a heritage park or botanical garden enhanced the biodiversity of these parks. In contrast, farm lands and home gardens, which were common within the city, are being lost at an unprecedented scale to developmental projects. However, there is also the emergence of a 'neo-culture' of home-gardening that promotes 'locovory' or consumption of locally grown food as a means to sustainable living and a reduced carbon footprint. This movement overcomes the space constraint by using vertical and terrace gardening techniques.
Food that is grown within cities comprises vegetables and fruits which are largely pollinator-dependent. This goes hand in hand with our landscape-level study, which has shown that cities support pollinator diversity. Maintaining and improving these man-made ecosystems requires analysing the functioning and characteristics of the existing structures of governance. A social network analysis tool was applied to NPs to examine the relationships between actors and their ties. The management structures around NPs, gaps, and means to strengthen the networks from the current state to a near-ideal state were identified for enhanced services. Learnings from NPs were used to build a hypothetical governance structure and functioning of integrated governance of NPs and edible landscapes to enhance ecosystem services such as biodiversity support, food production, and aesthetic value. They also contribute to the sustainability axis of smart cities.
Keywords: biodiversity support, ecosystem services, edible green spaces, neighbourhood parks, sustainable smart city
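The social network analysis described above treats governance actors as nodes and their ties as edges. A minimal sketch of two of the basic metrics such an analysis rests on (degree centrality and network density), using a purely illustrative toy network — the actor names and ties below are assumptions for illustration, not data from the study:

```python
# Toy governance network for neighbourhood parks (NPs): actors are nodes,
# ties (e.g. reporting or coordination links) are undirected edges.
# All actor names and ties are illustrative assumptions.
ties = [
    ("municipality", "park_committee"),
    ("municipality", "horticulture_dept"),
    ("park_committee", "residents_assoc"),
    ("residents_assoc", "home_gardeners"),
]

def degree_centrality(edges):
    """Fraction of the other actors each actor is directly tied to."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(nodes)
    return {k: d / (n - 1) for k, d in deg.items()}

def density(edges):
    """Observed ties divided by the maximum possible number of ties."""
    nodes = {n for e in edges for n in e}
    n = len(nodes)
    return len(edges) / (n * (n - 1) / 2)

centrality = degree_centrality(ties)
# In this toy network the municipality and the park committee are the
# most central actors; a sparse density suggests room to strengthen ties.
```

Moving a network from the current state toward a near-ideal state, as the abstract describes, would correspond to adding ties that raise density and reduce the dependence on any single central actor.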
Procedia PDF Downloads 138

708 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability
Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris
Abstract:
Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance to the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world leading, accredited purity measurements to allow hydrogen refuelling stations to evidence compliance to ISO 14687. Utilising state-of-the-art methods that have been developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. 
As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL are developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector
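The amount-fraction arithmetic behind single-stage dynamic dilution can be sketched as follows: a parent standard at a higher amount fraction is blended with pure diluent (here hydrogen) via metered flows. The flow rates and amount fractions below are illustrative assumptions, not values from the project:

```python
# Sketch of single-stage dynamic dilution: the diluted amount fraction is
# the parent fraction scaled by the parent flow's share of the total flow.
# All numeric values are illustrative assumptions.
def diluted_fraction(parent_fraction, parent_flow, diluent_flow):
    """Amount fraction after blending a parent standard with pure diluent.

    Flows must share the same units (e.g. mL/min at reference conditions).
    """
    return parent_fraction * parent_flow / (parent_flow + diluent_flow)

# e.g. a 100 nmol/mol parent standard blended at 10 mL/min into
# 990 mL/min of hydrogen gives a 1 nmol/mol mixture.
x = diluted_fraction(100e-9, 10.0, 990.0)
```

Cascading two such stages is one way to reach pmol/mol levels; in practice, the uncertainty of the generated mixture is dominated by the flow measurement uncertainties.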
Procedia PDF Downloads 223

707 Phorbol 12-Myristate 13-Acetate (PMA)-Differentiated THP-1 Monocytes as a Validated Microglial-Like Model in Vitro
Authors: Amelia J. McFarland, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Microglia are the resident macrophage population of the central nervous system (CNS), contributing to both innate and adaptive immune response, and brain homeostasis. Activation of microglia occurs in response to a multitude of pathogenic stimuli in their microenvironment; this induces morphological and functional changes, resulting in a state of acute neuroinflammation which facilitates injury resolution. Adequate microglial function is essential for the health of the neuroparenchyma, with microglial dysfunction implicated in numerous CNS pathologies. Given the critical role that these macrophage-derived cells play in CNS homeostasis, there is a high demand for microglial models suitable for use in neuroscience research. The isolation of primary human microglia, however, is both difficult and costly, with microglial activation an unwanted but inevitable result of the extraction process. Consequently, there is a need for the development of alternative experimental models which exhibit morphological, biochemical and functional characteristics of human microglia without the difficulties associated with primary cell lines. In this study, our aim was to evaluate whether THP-1 human peripheral blood monocytes would display microglial-like qualities following an induced differentiation and, therefore, be suitable for use as surrogate microglia. To achieve this aim, THP-1 human peripheral blood monocytes from acute monocytic leukaemia were differentiated with a range of phorbol 12-myristate 13-acetate (PMA) concentrations (50-200 nM) using two different protocols: a 5-day continuous PMA exposure or a 3-day continuous PMA exposure followed by a 5-day rest in normal media. In each protocol and at each PMA concentration, microglial-like cell morphology was assessed through crystal violet staining and the presence of the CD14 microglial/macrophage cell surface marker.
Lipopolysaccharide (LPS) from Escherichia coli (055:B5) was then added at a range of concentrations from 0-10 mcg/mL to activate the PMA-differentiated THP-1 cells. Functional microglial-like behaviour was evaluated by quantifying the release of prostaglandin (PG)-E2 and the pro-inflammatory cytokines interleukin (IL)-1β and tumour necrosis factor (TNF)-α using mediator-specific ELISAs. Furthermore, production of global reactive oxygen species (ROS) and nitric oxide (NO) was determined fluorometrically using dichlorodihydrofluorescein diacetate (DCFH-DA) and diaminofluorescein diacetate (DAF-2-DA), respectively. Following PMA treatment, it was observed that both differentiation protocols resulted in cells displaying distinct microglial morphology from 10 nM PMA. Activation of differentiated cells using LPS significantly augmented IL-1β, TNF-α and PGE2 release at all LPS concentrations under both differentiation protocols. Similarly, a significant increase in DCFH-DA and DAF-2-DA fluorescence was observed, indicative of increases in ROS and NO production. For all endpoints, the 5-day continuous PMA treatment protocol yielded significantly higher mediator levels than the 3-day treatment and 5-day rest protocol. Our data, therefore, suggest that the differentiation of THP-1 human monocyte cells with PMA yields a homogeneous microglial-like population which, following stimulation with LPS, undergoes activation to release a range of pro-inflammatory mediators associated with microglial activation. Thus, the use of PMA-differentiated THP-1 cells represents a suitable microglial model for in vitro research.
Keywords: differentiation, lipopolysaccharide, microglia, monocyte, neuroscience, THP-1
Procedia PDF Downloads 386

706 Estimating Industrial Pollution Load in Phnom Penh by Industrial Pollution Projection System
Authors: Vibol San, Vin Spoann
Abstract:
Manufacturing plays an important role in job creation around the world. In 2013, it was estimated that there were more than half a billion jobs in manufacturing. In Cambodia in 2015, primary industry occupied 26.18% of the total economy, while agriculture contributed 29% and the service sector 39.43%. The number of industrial factories, which are dominated by garment and textiles, has increased since 1994, mainly in Phnom Penh city. Approximately 56% of the total 1,302 firms operate in the capital city of Cambodia. Industrialization to achieve economic growth and social development is directly responsible for environmental degradation, threatening the ecosystem and raising human health issues. About 96% of the firms in Phnom Penh city are highly or moderately polluting firms, which have contributed to environmental concerns. Despite an increasing array of laws, strategies and action plans in Cambodia, the Ministry of Environment has encountered constraints in conducting the monitoring work, including a lack of human and financial resources, a lack of research documents, limited analytical knowledge, and a lack of technical references. Therefore, the information on industrial pollution necessary to set strategies, priorities and action plans on environmental protection issues is absent in Cambodia. In the absence of this data, effective environmental protection cannot be implemented. The objective of this study is to estimate industrial pollution load by employing the Industrial Pollution Projection System (IPPS), a rapid environmental management tool for assessment of pollution load, to produce a scientifically rational basis for preparing future policy direction to reduce industrial pollution in Phnom Penh city. Owing to the lack of industrial pollution data in Phnom Penh, industrial emissions to the air, water and land, as well as the sum of emissions to all mediums (air, water, land), are estimated using the employment economic variable in IPPS.
Due to the high number of employees, the total environmental load generated in Phnom Penh city is estimated to be 476,980.93 tons in 2014, the highest industrial pollution load of any location in Cambodia. The result clearly indicates that Phnom Penh city is the highest emitter of all pollutants in comparison with the environmental pollutants released by other provinces. The total emission of industrial pollutants in Phnom Penh accounts for 55.79% of the total industrial pollution load in Cambodia. Phnom Penh city generated 189,121.68 tons of VOC, 165,410.58 tons of toxic chemicals to air, 38,523.33 tons of toxic chemicals to land and 28,967.86 tons of SO2 in 2014. The results of the estimation show that the Textile and Apparel sector is the highest generator of toxic chemicals into land and air, and toxic metals into land, air and water, while the Basic Metal sector is the highest contributor of toxic chemicals to water. The Textile and Apparel sector alone emits 436,015.84 tons of the total industrial pollution load. The results suggest that reduction in industrial pollution could be achieved by focusing on the most polluting sectors.
Keywords: most polluting area, polluting industry, pollution load, pollution intensity
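The core of the IPPS employment-based approach is a simple product: a sector's estimated load for a pollutant is its employment multiplied by a pollution intensity coefficient (emissions per employee). A minimal sketch of that arithmetic — the sector names, intensities and employment figures below are illustrative assumptions, not the coefficients or data used in the study:

```python
# IPPS-style estimate: load = employment x intensity, summed per pollutant.
# All coefficients and employment figures are illustrative assumptions.
pollution_intensity = {  # kg of pollutant per employee per year
    "textile_apparel": {"VOC": 120.0, "SO2": 18.0},
    "basic_metal": {"VOC": 40.0, "SO2": 95.0},
}
employment = {"textile_apparel": 250_000, "basic_metal": 12_000}

def sector_load(sector):
    """Estimated annual emissions (tons) per pollutant for one sector."""
    return {pollutant: intensity * employment[sector] / 1000.0
            for pollutant, intensity in pollution_intensity[sector].items()}

loads = {s: sector_load(s) for s in employment}
total_voc = sum(l["VOC"] for l in loads.values())  # city-wide VOC estimate
```

Because the employment figures dominate the product, a labour-intensive sector such as textiles ends up dominating the estimated load even at moderate per-employee intensities, which mirrors the pattern the abstract reports.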
Procedia PDF Downloads 259

705 Concentration and Stability of Fatty Acids and Ammonium in the Samples from Mesophilic Anaerobic Digestion
Authors: Mari Jaakkola, Jasmiina Haverinen, Tiina Tolonen, Vesa Virtanen
Abstract:
Process monitoring of a biogas plant gives valuable information about the function of the process and helps to maintain stable operation. The costs of basic monitoring are often much lower than the costs associated with re-establishing a biologically destabilised plant. Reactor acidification through reactor overload is one of the most common reasons for process deterioration in anaerobic digesters. This occurs because of a build-up of volatile fatty acids (VFAs) produced by acidogenic and acetogenic bacteria. VFAs cause pH values to decrease and result in toxic conditions in the reactor. Ammonia ensures an adequate supply of nitrogen as a nutrient substance for anaerobic biomass and increases the system's buffer capacity, counteracting the acidification caused by VFA production. However, an elevated ammonia concentration is detrimental to the process due to its toxic effect. VFAs are considered the most reliable analytes for process monitoring. To obtain accurate results, sample storage and transportation need to be carefully controlled. This may be a challenge for off-line laboratory analyses, especially when the plant is located far away from the laboratory. The aim of this study was to investigate the correlation between fatty acids, ammonium, and bacteria in anaerobic digestion samples obtained from an industrial biogas factory. The stability of the analytes was studied by comparing the results of on-site analyses performed at the factory site to the results of samples stored at room temperature and at -18°C (up to 30 days) after sampling. Samples were collected in a biogas plant consisting of three separate mesophilic AD reactors (4000 m³ each) where the main feedstock was swine slurry together with a complex mixture of agricultural plant and animal wastes. Individual VFAs, ammonium, and nutrients (K, Ca, Mg) were studied by capillary electrophoresis (CE).
Longer chain fatty acids (oleic, hexadecanoic, and stearic acids) and bacterial profiles were studied by GC-MSD (Gas Chromatography-Mass Selective Detector) and 16S rDNA, respectively. On-site monitoring of the analytes was performed by CE. The main VFA in all samples was acetic acid. However, in one reactor sample elevated levels of several individual VFAs and long chain fatty acids were detected. The bacterial profile of this sample also differed from the profiles of the other samples. Acetic acid decomposed quickly when the sample was stored at room temperature. All analytes were stable when stored in a freezer. Ammonium was stable even at room temperature for the whole testing period. One reactor sample had a higher concentration of VFAs and long chain fatty acids than the other samples. CE was utilized successfully in the on-site analysis of separate VFAs and NH₄ at the biogas production site. Samples should be analysed on the sampling day if stored at room temperature, or frozen for longer storage. Fermentation reject can be stored (and transported) at ambient temperature for at least one month without loss of NH₄. This gives flexibility to the logistic solutions when reject is used as a fertilizer.
Keywords: anaerobic digestion, capillary electrophoresis, ammonium, bacteria
Procedia PDF Downloads 168

704 Insufficiency of Cardioprotection at Adaptation to Chronic Hypoxia and at Remote Postconditioning in Young and Aged Rats with Metabolic Syndrome, the Role of Metabolic Disorders or Opioid Signaling
Authors: Natalia V. Naryzhnaya, Alexandr V. Mukhomedzyanov, Ivan A. Derkachev, Boris K. Kurbatov, Leonid N. Maslov
Abstract:
Background: Techniques of adaptation to hypoxia and remote postconditioning (RPost) have great prospects for use in the clinic. However, recent studies have shown low efficacy of remote postconditioning in patients with AMI. We hypothesize that the reasons for this inefficiency may be metabolic disorders, which are very common, especially in patients with cardiovascular disease, and the age of patients. The purpose of the study was to reveal the effectiveness of adaptation to chronic hypoxia and RPost, and to determine the possible relationship between the decrease in the effectiveness of protective interventions and disorders of carbohydrate and lipid metabolism. Design: The study was carried out on 60-day-old Wistar rats. MetS was induced by a high-carbohydrate, high-fat diet (HCHFD). Modeling MetS led to the formation of obesity, hypertension, impaired lipid and carbohydrate metabolism, hyperleptinemia, and moderate stress. Groups with adaptation to chronic hypoxia were subjected to hypoxia for 21 days at 12% O2 and 0.3% CO2 after completion of HCHFD. All animals were subjected to 45 min coronary occlusion and 120 min reperfusion. In groups with RPost, immediately after the end of ischemia, tourniquets were applied to the hind limbs in the area of the hip joint (3 times in the mode of 5 min ischemia, 5 min reperfusion). Results: RPost led to a twofold reduction of infarct size in rats with intact metabolism (p < 0.0001), while in rats with MetS the decrease in infarct size during RPost was 25% (p = 0.00003). A direct correlation was found between infarct size during RPost and the serum leptin level of rats with MetS (r = 0.85). The presented data suggest that the decrease in the efficiency of remote postconditioning in rats with diet-induced metabolic syndrome depends on serum leptin. Chronic hypoxia resulted in a 38% reduction in infarct size in metabolically intact rats. A decrease in cardioprotection was observed in rats with chronic hypoxia and MetS.
Infarct size showed a direct correlation with impaired glucose tolerance (AUC, glucose tolerance test, r = 0.034) and serum triglyceride levels (r = 0.39). Our study showed the dependence of cardioprotection in rats with metabolic syndrome during chronic hypoxia and RPost on opioids in the blood serum and myocardium, and on protein kinase C and NO synthase activity. Conclusion: The results obtained showed that the infarct-limiting efficiency of adaptation to hypoxia and remote postconditioning is reduced or completely absent in animals with metabolic syndrome. The increase in infarction in this case directly depends on the disturbances in carbohydrate and lipid metabolism and in opioid signaling. Funding: The investigation of the effectiveness of chronic hypoxia in the metabolic syndrome was carried out with the support of Russian Science Foundation Grant 22-15-00048. Studies of the mechanisms of arterial hypertension in induced metabolic syndrome were carried out within the framework of the state assignment (122020300042-4). The work was performed using the Center for Collective Use "Medical Genomics".
Keywords: chronic hypoxia, opioids, remote postconditioning, metabolic syndrome
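The associations reported above (e.g. infarct size vs. serum leptin, r = 0.85) are Pearson correlation coefficients. A minimal sketch of the computation, using made-up sample values rather than the study's measurements:

```python
# Pearson correlation: covariance of the two samples divided by the
# product of their standard deviations. Sample values are illustrative.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear positive relationship yields r = 1.0.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

An r of 0.85 between infarct size and leptin, as reported, would indicate a strong positive linear association, while 0.034 would indicate essentially none.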
Procedia PDF Downloads 77

703 Community Participation and Place Identity as Mediators on the Impact of Resident Social Capital on Support Intention for Festival Tourism
Authors: Nien-Te Kuo, Yi-Sung Cheng, Kuo-Chien Chang
Abstract:
Cultural festival tourism is now seen by many as an opportunity to facilitate community development because it has significant influences on the economic, social, cultural, and political aspects of local communities. The potential for tourist attraction has been recognized by governments as a useful tool to strengthen local economies. However, most community festivals in Taiwan are short-lived, often lasting only a few years or occasionally not making it past a one-off event. Researchers have suggested that most governments and other stakeholders do not recognize the importance of building a partnership with residents when developing community tourism. Thus, sustainable community tourism development remains a key issue in the existing literature. The success of community tourism is related to the attitudes and lifestyles of local residents. In order to maintain sustainable tourism, residents need to be seen as development partners. Residents’ support intention for tourism development not only helps to increase awareness of local culture, history, the natural environment, and infrastructure, but also improves the interactive relationship between the host community and tourists. Furthermore, researchers have identified social capital theory as the core of sustainable community tourism development. The social capital of residents has been seen as a good way to solve issues of tourism governance, forecast the participation behavior of residents and improve their support intention. In addition, previous studies have pointed out the role of community participation and place identity in increasing resident support intention for tourism development. A lack of place identity is one of the main reasons that community tourism becomes a mere formality and is not sustainable. Community participation refers to how much residents participate during tourism development and is mainly influenced by individual interest.
Scholars believe that the place identity of residents is the soul of community festivals. It shows the community spirit to visitors and has significant impacts on tourism benefits and the support intention of residents in community tourism development. Although the importance of community participation and place identity has been confirmed by both governmental and non-governmental organizations, real-life execution still needs to be improved. This study aimed to use social capital theory to investigate the social structure between community residents, participation levels in festival tourism, degrees of place identity, and resident support intention for future community tourism development, and the causal relationships that these factors have with cultural festival tourism. A quantitative research approach was employed to examine the proposed model. A structural equation model was used to test and verify the proposed hypotheses. This was a case study of the Kaohsiung Zuoying Wannian Folklore Festival, held in the Zuoying District of Kaohsiung City, Taiwan. The target population of this study was residents who attended the festival. The results reveal significant correlations among social capital, community participation, place identity and support intention. The results also confirm that the impacts of social capital on support intention were significantly mediated by community participation and place identity. Practical suggestions are provided for tourism operators and policy makers. This work was supported by the Ministry of Science and Technology of Taiwan, Republic of China, under the grant MOST-105-2410-H-328-013.
Keywords: community participation, place identity, social capital, support intention
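The mediation claim above (social capital affects support intention through community participation) rests on the product of path coefficients, which a structural equation model estimates jointly. A deliberately simplified product-of-coefficients sketch with single-predictor regressions and made-up data — real SEM software estimates all paths simultaneously, and these variable values are illustrative assumptions:

```python
# Simplified mediation sketch: indirect effect = a * b, where
# a is the slope of mediator on predictor and b the slope of outcome
# on mediator. Data are made up; a full SEM would fit paths jointly.
def ols_slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

social_capital = [1.0, 2.0, 3.0, 4.0, 5.0]   # predictor (illustrative)
participation  = [1.1, 2.0, 2.9, 4.2, 5.0]   # mediator (illustrative)
support        = [0.9, 2.1, 3.0, 3.8, 5.2]   # outcome (illustrative)

a = ols_slope(social_capital, participation)  # path X -> M
b = ols_slope(participation, support)         # path M -> Y (simplified)
indirect_effect = a * b
```

A non-zero indirect effect is the pattern the study reports: social capital raises participation, which in turn raises support intention.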
Procedia PDF Downloads 326

702 Adaptation of Retrofit Strategies for the Housing Sector in Northern Cyprus
Authors: B. Ozarisoy, E. Ampatzi, G. Z. Lancaster
Abstract:
This research project is undertaken in the Turkish Republic of Northern Cyprus (T.R.N.C.). The study focuses on identifying refurbishment activities capable of diagnosing and detecting the underlying problems, alongside the challenges posed by the buildings’ typology, in addition to identifying the correct construction materials in the refurbishment process which allow for the maximisation of expected energy savings. Attention is drawn to the level of awareness and understanding of refurbishment activity that needs to be raised in the current construction process, alongside factors that include the positive environmental impact and the saving of energy. The approach here is to look at buildings that have been built by private construction companies and already refurbished by occupants, and to suggest additional control mechanisms for retrofitting that can further enhance the process of renewal. The objective of the research is to investigate the occupants’ behaviour and role in the refurbishment activity; to explore how and why occupants decide to change building components and to understand why and how occupants consider using energy-efficient materials. The present work is based on data from the researchers’ first-hand experience and incorporates the preliminary data collection on recent housing sector statistics, including the year in which housing estates were built, an examination of the characteristics that define the construction industry in the T.R.N.C., building typology and the demographic structure of house owners. The housing estates are chosen from 16 different projects in four different regions of the T.R.N.C. that include urban and suburban areas. There is, therefore, a broad representation of the common drivers in the property market, each with different levels of refurbishment activity, and this is coupled with different samplings from different climatic regions within the T.R.N.C.
The study is conducted through semi-structured interviews to identify occupants’ behaviour as it is associated with refurbishment activity. The interviews provide all the occupants’ demographic information, needs and intentions as they relate to various aspects of the refurbishment process. This research paper presents the results of semi-structured interviews with 70 homeowners in a selected group of 16 housing estates in five different parts of the T.R.N.C. The people who agreed to be interviewed in this study are all residents of single or multi-family housing units. Alongside the construction process and its impact on the environment, the results point out the need for control mechanisms in the housing sector to promote and support the adoption of retrofit strategies and minimize non-controlled refurbishment activities, in line with diagnostic information on the selected buildings. The expected solutions should be effective, environmentally acceptable and feasible given the type of housing projects under review, with due regard for their location, the climatic conditions within which they were undertaken, the socio-economic standing of the house owners and their attitudes, local resources and legislative constraints. Furthermore, the study emphasises the practical and long-term economic benefits of refurbishment under the proper conditions and why these should be fully understood by the householders.
Keywords: construction process, energy-efficiency, refurbishment activity, retrofitting
Procedia PDF Downloads 323

701 Teaching English as a Foreign Language: Insights from the Philippine Context
Authors: Arlene Villarama, Micol Grace Guanzon, Zenaida Ramos
Abstract:
This paper provides insights into teaching English as a foreign language in the Philippines. The authors reviewed relevant theories and literature, and provide an analysis of the issues in teaching English in the Philippine setting in the light of these theories. The authors carried out an investigation in Bagong Barrio National High School (BBNHS), a public school in Caloocan City. The institution has a population of nearly 3,000 students. The performances of 365 randomly chosen respondents were scrutinised. Factors in the success of teaching English as a foreign language to Filipino children were highlighted, including the respondents’ family background, surroundings, way of living, and their behaviour and understanding regarding education. The results show that there is a significant relationship between the demonstrative, communal, and logical areas that affect the efficacy of introducing English as a foreign language. Filipino children, by nature, are adventurous and naturally joyful even about little things. They are born with natural skills and capabilities to discover new things. They highly value activities and work that ignite their curiosity. They love to be recognised and are most inspired when given the assurance of acceptance and belongingness. Fun is the appealing influence that ignites and motivates learning; the magic word is excitement. The study reveals the many facets of the accumulation and transmission of knowledge: in the introduction and administration of English as a foreign language, learning runs and passes through different channels of diffusion. Along the way, there are obstacles in the processes through which knowledge is to be gathered. Data gained from the respondents reveal a reality that is beyond one’s imagination. One significant factor behind the inefficacy of understanding and using English as a foreign language is an erroneous notion gained from an old belief handed down from generation to generation.
This accepted perception about the power and influence of the use of language gives novices either a negative or a positive notion. The investigation shows that a higher number of dislikes of the use of English can be traced to the belief in the story of how the English language came into existence. The belief that only the great and the influential have the right to use English as a means of communication kills the joy of acceptance. This notion has to be examined so as to provide a solution, or at least to eradicate the misconceptions that lie behind the substance of the matter. The result of the authors’ research depicts a substantial correlation between the emotional (demonstrative), social (communal), and intellectual (logical) areas. The focus of this paper is to bring out the right notion and disclose the misconceptions with regard to teaching English as a foreign language. It concentrates on the emotional, social, and intellectual areas of Filipino learners and how these areas affect the transmittance and accumulation of learning. The authors’ aim is to formulate logical ways and techniques that would open up new beginnings in understanding and acceptance of the subject matter.
Keywords: accumulation, behaviour, facets, misconceptions, transmittance
Procedia PDF Downloads 203

700 Inclusion Advances of Disabled People in Higher Education: Possible Alignment with the Brazilian Statute of the Person with Disabilities
Authors: Maria Cristina Tommaso, Maria Das Graças L. Silva, Carlos Jose Pacheco
Abstract:
Have the advances in Brazilian legislation reflected, or been consonant with, the inclusion of persons with disabilities (PwD) in higher education? In 1990 the World Declaration on Education for All, a document organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO), stated that the basic learning needs of people with disabilities, as they were called, required special attention. Since then, legislation in signatory countries such as Brazil has made considerable progress in guaranteeing, in a gradual and increasing manner, the rights of persons with disabilities to education. Principles, policies, and practices of special educational needs were created and guided action at the regional, national and international levels on the structure of action in Special Education, such as administration, recruitment of educators and community involvement. Brazilian Education Law No. 3.284 of 2003 ensures the inclusion of people with disabilities in Brazilian higher education institutions, and in 2015 Law 13,146/2015 - the Brazilian Law on the Inclusion of Persons with Disabilities (Statute of the Person with Disabilities) - regulated the inclusion of PwD by guaranteeing their rights. This study analyses data related to the inclusion of people with disabilities in higher education in the southern region of Rio de Janeiro State, Brazil, during the period between 2008 and 2018, based on its correlation with the changes in Brazilian legislation over the last ten years that affected PwD inclusion processes in the Brazilian higher education system. The region studied is composed of sixteen cities, and this research refers to the largest one, Volta Redonda, which represents 25 percent of the total regional population. Data on the PwD reception process were collected at the Volta Redonda University Center, which accounts for 35 percent of the higher education students in this territorial area.
The research methodology analyzed the changes occurring in the legislation on the inclusion of people with disabilities in higher education over the last ten years and their impacts on the samples of this study during the period between 2008 and 2018. An expressive increase in the number of PwD students was verified, from two in 2008 to 190 in 2018. The conclusions are presented in quantitative terms, and the aim of this study was to verify the effectiveness of PwD inclusion in higher education, giving visibility to this social group. This study verified that the guarantee of fundamental human rights is strongly related to advances in legislation and to the State as the guarantor of the rights of people with disabilities, and must be considered a means of consolidating equal educational opportunities. The recognition of full rights and the inclusion of people with disabilities require the efforts of those who have decision-making power. This study aimed to demonstrate that legislative evolution is an effective instrument in the social integration of people with disabilities. The study confirms the fundamental role of the state in guaranteeing human rights and demonstrates that legislation not only protects the interests of vulnerable social groups but can also, and this is perhaps its main mission, change behavior patterns and provoke the social transformation necessary for the reduction of inequality of opportunity. Keywords: higher education, inclusion, legislation, people with disability
Procedia PDF Downloads 151
699 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for reducing the mortality rate and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is most required for stabilizing the brain’s physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, the manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff, relevant to medication and the setting of therapeutic apparatus. Thus, integrating and automating their control is very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain’s physiological state is necessary as the controlled object in simulations, because performance tests of a prototype control system on patients are not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain’s physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, the condition that the total volume of the intracranial cavity is almost unchanging due to the hardness of the cranial bone was not considered in that model. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed taking into account the constancy of the intracranial cavity’s total volume. This model is connectable to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space.
The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, the colloid concentration, and the tissue pressure of/in each compartment are also calculable by solving 40-dimensional non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. As a result, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to values suitable for the set condition within 60 minutes at most. Also, because these results did not conflict with prior knowledge, it is certain that the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable more precise simulation of CBF and ICP by calculating the effect of blood pressure changes on extracellular fluid migration and that of ICP changes on CBF. Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
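For readers unfamiliar with compartment models, the kind of dynamics involved can be sketched in miniature. The following two-compartment toy version uses hypothetical elastances and exchange coefficients (the actual model is 40-dimensional and far more detailed); because all flows are pairwise exchanges, total volume is conserved automatically, mirroring the constancy of the intracranial cavity’s total volume:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of inter-compartment fluid exchange (hypothetical parameters).
# dV_i/dt = sum_j k_ij * (P_j - P_i), with tissue pressure P_i proportional to
# the volume deviation from a reference; pairwise flows conserve total volume.

K = np.array([[0.0, 0.1], [0.1, 0.0]])  # exchange coefficients (1/min)
E = np.array([1.0, 2.0])                # compartment elastances (mmHg/mL)
V0 = np.array([10.0, 10.0])             # reference volumes (mL)

def rhs(t, V):
    P = E * (V - V0)                      # tissue pressure in each compartment
    flow = K * (P[None, :] - P[:, None])  # flow into i from j
    return flow.sum(axis=1)

sol = solve_ivp(rhs, (0.0, 60.0), [12.0, 8.0], rtol=1e-8)
V_end = sol.y[:, -1]   # volumes after 60 minutes: converged to equilibrium
```

Over the 60-minute span, the pressures equalize and both volumes relax to their common equilibrium while the total stays fixed, qualitatively matching the convergence behaviour reported above.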
Procedia PDF Downloads 152
698 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections
Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz
Abstract:
In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultra short pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within the robust process regime the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape results from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation the asymptotic shape forms in such a way that along the whole contour the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops or even smart devices.
Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage. Individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: an operator can use the tool to adjust the process in a very convenient manner on a tablet, while a developer can execute the tool in the office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper the emphasis is placed on the investigation of the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios. Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process
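The threshold condition for the asymptotic shape can be illustrated with a simple single-absorption sketch (an illustration of the principle, not the AsymptoticDrill code; multiple reflections are neglected and all parameter values are hypothetical). For a Gaussian beam of intensity I(r), a wall inclined at angle α to the beam axis absorbs the flux I(r)·sin(α), so the asymptotic condition I(r)·sin(α) = I_th fixes the wall slope at every radius:

```python
import numpy as np

# Gaussian beam I(r) = I0 * exp(-2 r^2 / w^2); hypothetical values.
I0, I_th, w = 1.0e12, 1.0e11, 50e-6   # peak intensity, threshold (W/m^2), waist (m)

def intensity(r):
    return I0 * np.exp(-2.0 * r**2 / w**2)

# Hole entrance radius: where the beam intensity falls to the threshold.
r_a = w * np.sqrt(np.log(I0 / I_th) / 2.0)

# Asymptotic condition along the wall: sin(alpha) = I_th / I(r),
# hence the wall slope magnitude is |dz/dr| = cot(alpha).
r = np.linspace(r_a, 0.0, 2000)
s = np.clip(I_th / intensity(r), 0.0, 1.0)            # sin(alpha)
slope = np.sqrt(1.0 - s**2) / np.maximum(s, 1e-12)    # cot(alpha)

# Integrate the slope from the entrance to the axis: asymptotic hole depth.
depth = abs(np.sum(0.5 * (slope[1:] + slope[:-1]) * np.diff(r)))
```

The same closed loop, evaluated along the full contour with the absorbed flux augmented by multiply reflected rays, is what makes the large-aspect-ratio case studied in the paper harder.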
Procedia PDF Downloads 210
697 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity
Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova
Abstract:
Biochar is an amorphous carbon-rich material generated from the pyrolysis of biomass, with multifarious properties and functionality. Biochar has proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil and water/wastewater, as a result of its multiple surface functional groups and porous structures. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass to produce biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structures of biochar vary depending on the type of biomass and the high heat treatment temperature (HHT). Biochars produced at HHT between 400°C – 800°C generally have lower H/C and O/C ratios, and higher porosities, larger pore sizes and higher surface areas with increasing temperature. While all this is known experimentally, there is little knowledge of the role porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for the optimization of biochar for application, especially in the adsorption of gases. Atomistic simulation methods have shown the potential to generate such amorphous materials; however, most of the models available are composed of only carbon atoms or graphitic sheets, which are very dense or have only simple slit pores, all of which ignore the important role of heteroatoms such as O, N and S and of pore morphologies. Hence, developing realistic models that integrate these parameters is important to understand their role in governing adsorption mechanisms, which will aid in guiding the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models taking into account experimentally determined H/C, O/C and N/C ratios, aromaticity, micropore size range, micropore volumes and true densities of biochars.
A pore generation approach was developed using virtual atoms: Lennard-Jones spheres of varying van der Waals radius and softness. Their interaction with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius parameters gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5 - 2 nm in diameter and pore volumes in the range of 0.05 – 1 cm3/g, which corresponds to experimental gas adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, micropore size distributions and pore morphologies were developed, and they could aid in the study of adsorption processes in confined micropores. Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms
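A minimal sketch of a soft-core interaction of the kind described can be given as follows; the Beutler-type functional form and all parameter values below are illustrative assumptions, and the study’s actual potential may differ:

```python
import numpy as np

# Soft-core Lennard-Jones potential (Beutler-type form), as commonly used to
# grow "virtual atoms" inside a dense matrix without singular overlaps:
# U(r) = 4*eps*lam * [ (alpha*(1-lam)^2 + (r/sigma)^6)^-2
#                    - (alpha*(1-lam)^2 + (r/sigma)^6)^-1 ]
def soft_core_lj(r, sigma=1.0, eps=1.0, lam=0.5, alpha=0.5):
    frac = alpha * (1.0 - lam) ** 2 + (r / sigma) ** 6
    return 4.0 * eps * lam * (frac ** -2 - frac ** -1)

r = np.linspace(0.0, 3.0, 300)
u = soft_core_lj(r)
# Unlike the standard 12-6 potential, the soft-core form stays finite at r = 0,
# so the virtual atom's radius and softness can be varied to carve a pore.
```

Increasing sigma widens the excluded region (larger pore), while lam and alpha tune the softness of the repulsion, which is what gives the rough pore surfaces mentioned above.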
Procedia PDF Downloads 69
696 Motivations, Communication Dimensions, and Perceived Outcomes in the Multi-Sectoral Collaboration of the Visitor Management Program of Mount Makiling Forest Reserve in Los Banos, Laguna, Philippines
Authors: Charmaine B. Distor
Abstract:
Collaboration has long been recognized in different fields, but to the author’s best knowledge there has been little research on operationalizing it, especially in a multi-sectoral setting. Communication, moreover, is one of the factors usually overlooked when studying it. Specifically, this study aimed to describe the organizational profile and tasks of collaborators in the visitor management program of Make It Makiling (MIM). It also identified the factors that motivated collaborators to collaborate in MIM, determined the communication dimensions in the collaborative process, determined the communication channels used by collaborators in MIM, and identified the outcomes of collaboration in MIM. This study also determined whether a relationship exists between collaborators’ motivations for collaboration and their perceived outcomes of collaboration, and between collaborators’ communication dimensions and their perceived outcomes of collaboration. Lastly, it provided recommendations to improve communication in MIM. Data were gathered using a self-administered survey patterned after Mattessich and Monsey’s (1992) collaboration experience questionnaire. Interviews and secondary sources, mainly provided by the Makiling Center for Mountain Ecosystems (MCME), were also used. From the seven MIM collaborating organizations that were selected through purposive sampling, 86 respondents were chosen. Data were then analyzed through frequency counts, percentages, measures of central tendency, and Pearson’s and Spearman rho correlations. Collaborators’ length of collaboration ranged from seven to twenty years. Furthermore, six out of seven of the collaborators were involved in the task of 'emergency, rescue, and communication'. On the other aspect of the antecedents, a history of previous collaboration efforts ranked as the highest rated motivation for collaboration.
In line with this, the top communication dimension is governance, while perceived effectiveness garnered the highest overall average among the perceived outcomes of collaboration. Results also showed that the collaborators rely highly on formal communication channels. Meetings and memos were the most commonly used communication channels throughout all tasks under the four phases of MIM. Additionally, although collaborators hold a high view of their co-collaborators, they still rely on MCME to act as their manager in coordinating with one another indirectly. Based on the correlation analysis, the antecedent (motivations)-outcome relationships were generally positive. However, for the process (communication dimensions)-outcome relationships, both positive and negative relationships were observed. In conclusion, this study exhibited the same trend as existing literature using the same framework. For the antecedent-outcome relationship, it can be deduced that MCME, as the main organizer of MIM, can focus on these variables to achieve its desired outcomes because of the positive relationships. For the process-outcome relationship, MCME should also take note that there were negative relationships, where an increase in the given communication dimension may result in a decrease in the desired outcome. Recommendations for further study include a methodology that contains complete enumeration or parametric sampling, a researcher-administered survey, and direct observations. These might require additional funding, but all may yield richer data. Keywords: antecedent-outcome relationship, carrying capacity, organizational communication, process-outcome relationship
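The correlation step described above can be sketched as follows, using synthetic data as a stand-in for the survey responses (the actual data are not reproduced here; the positive motivation-outcome association is built into the synthetic example by assumption):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for 86 respondents: a motivation score and a perceived
# outcome score constructed to be positively related, for illustration only.
rng = np.random.default_rng(0)
motivation = rng.uniform(1, 5, size=86)               # e.g. 5-point scale means
outcome = 0.6 * motivation + rng.normal(0, 0.5, 86)   # perceived effectiveness

# Pearson's r tests a linear association; Spearman's rho a monotonic one
# (appropriate for ordinal survey scales).
r_pearson, p_pearson = stats.pearsonr(motivation, outcome)
rho_spearman, p_spearman = stats.spearmanr(motivation, outcome)
```

A positive, significant coefficient here corresponds to the positive antecedent-outcome relationships reported; a negative coefficient would correspond to the negative process-outcome relationships the study cautions about.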
Procedia PDF Downloads 122
695 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunami, landslide, and ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings and other property is a collateral episode. Appropriate planning must be based on proper evaluation and assessment of the potential level of earthquake hazard at a site, with a view to safeguarding people’s welfare, infrastructure and other property. The resulting information can be used as a tool to assist in minimizing risk from earthquakes, and can also foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and remote sensing potentials were utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common features or factors that were assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology and geomorphology. The latter site conditions, i.e. the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface.
As it propagates, it passes through certain geological or geomorphological and specific soil features, which, according to their strength, stiffness and moisture content, aggravate or attenuate the strength of the wave propagating to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria evaluation (MCE) with Saaty’s Analytical Hierarchy Process (AHP) was adopted for this study. This is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to liquefaction triggered by earthquake hazard. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weightage and ranking assigned to each factor are normalized with the AHP technique. The spatial analysis tools, i.e. raster calculator, reclassify, and overlay analysis in ArcGIS 10 software, were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low' and 'Very Low' to indicate levels of hazard within the study region. Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
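The AHP weighting and its consistency check can be sketched as follows; the three factors and the pairwise judgments shown are illustrative assumptions, not the values used in the study:

```python
import numpy as np

# Saaty pairwise comparison matrix for three hypothetical factors:
# rows/columns = (geology, geomorphology, PGA). A[i][j] = importance of i over j.
A = np.array([
    [1.0,  3.0,  5.0],
    [1/3., 1.0,  3.0],
    [1/5., 1/3., 1.0],
])

# Normalized weights = principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Saaty consistency check: CI = (lambda_max - n) / (n - 1); for n = 3 the
# random index RI = 0.58, and judgments are acceptable when CR = CI/RI < 0.1.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
```

The resulting weights would then multiply the reclassified thematic layers in a raster-calculator overlay to produce the LPZ and hazard zonation maps.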
Procedia PDF Downloads 265
694 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer
Authors: Sebastian Hiergeist
Abstract:
It is a common view that Unmanned Aerial Vehicles (UAV) tend to migrate into the civil airspace. This trend challenges UAV manufacturers in many ways, as a lot of new requirements and functional aspects come up. On the higher application levels, these might be collision detection and avoidance and similar features, whereas all these functions only act as input for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring a continuously safe flight and landing. As these systems are flight critical, they have to be built redundantly to be able to provide fail-operational behavior. Recent architectural approaches of FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may no longer be available, as they are more and more replaced by more sophisticated Systems on Chip (SoC). As the avionics industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from foreign markets is almost inevitable. Products stemming from the industrial market, developed according to IEC 61508, or automotive SoCs, developed according to ISO 26262, can be seen as candidates, as they have been developed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, e.g. Ethernet, SPI or FlexRay, that might come into account for the implementation of a redundancy network. In this context, possible network architectures shall be investigated which could be established by using the interfaces stated above.
Of importance here is the avoidance of any single point of failure, as well as proper segregation into distinct fault containment regions. The performed analysis is supported by the use of guidelines on the reliability of data networks published by the aviation authorities (FAA and EASA). The main focus clearly lies on the reachable level of safety, but other aspects like performance and determinism also play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, the risk of design errors, which might lead to common mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity will also be considered, to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces found in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline on building up a redundancy network between SoCs solely using on-board interfaces. Therefore, the author will provide a detailed usability analysis of available interfaces provided by recent SoC solutions, suggestions for possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, such as safety or performance. Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)
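The single-point-of-failure criterion can be sketched as a plain connectivity check on candidate topologies. This is an illustrative aside, not taken from the paper; the two example topologies and node names are hypothetical:

```python
from collections import deque

# Candidate redundancy networks between flight-control SoCs, modelled as
# adjacency dicts. A node is a single point of failure if removing it
# disconnects the remaining network.
ring = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
star = {"Hub": ["A", "B", "C"], "A": ["Hub"], "B": ["Hub"], "C": ["Hub"]}

def connected(adj, removed):
    """Breadth-first search over all nodes except `removed`."""
    nodes = [n for n in adj if n != removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

def single_points_of_failure(adj):
    return [n for n in adj if not connected(adj, n)]

# A ring survives the loss of any one SoC; a star fails when the hub fails.
```

The same style of check extends to links and to fault containment regions by removing a whole region at once and testing whether the remaining regions still form a connected network.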
Procedia PDF Downloads 218
693 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy
Authors: Giacoma Pace
Abstract:
The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, overcoming the obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness regarding their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers, it is imperative to assess students' internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a flipped classroom learning unit administered to six third-year high school classes: Scientific Lyceum, Technical School, and Vocational School, within the city of Turin, Italy. Data concern the teachers and students involved in the case study: the impaired students in each class, entry levels, students' performance and attitudes before using flipped classrooms, levels of motivation, family involvement, teachers' attitudes towards the flipped classroom, goals attained, the pros and cons of such activities, and technology availability. The selected schools were contacted, and meetings were held with the English teachers to gather information about their attitude towards and knowledge of the flipped classroom approach. Questionnaires were administered to teachers and IT staff. The information gathered was used to outline the profile of the subjects involved in the study and was further compared in the second step of the study, conducted with the classes of the selected schools. The learning unit is the same for all; its structure and content were decided together with the English teachers of the classes involved.
The pacing and content are matched in every lesson, and all the classes participate in the same labs, use the same materials and homework, and undergo the same assessment by summative and formative testing. Each step follows a precise scheme in order to be as reliable as possible. The outcome of the case study will be statistically organised. The case study is accompanied by a study of the literature concerning EFL approaches and the flipped classroom. The document analysis method was employed, i.e. a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus and Science Direct databases were searched in order to determine the documents to be examined (years considered: 2000-2022). Keywords: flipped classroom, impaired, inclusivity, peer instruction
Procedia PDF Downloads 52
692 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume
Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri
Abstract:
Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring the soil's agricultural functions. But ultramafic soils are characterized by low fertility levels, and this can limit yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thereby lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes and another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of the co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with the legume biomass incorporated into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale (biomass, C and N contents, and Ni uptake) showed the highest values in the co-cropping system, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation was better than mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had larger soil particle sizes and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater spatial distribution of root surface area. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values.
Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the dry biomass yields of the hyperaccumulating plant used and, consequently, the yields of Ni. Our strategy can decrease the need to apply fertilizers and thus minimizes the risk of nitrogen leaching and groundwater pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison with the rotation treatment and the fertilized mono-culture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stable aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy to reduce chemical inputs and to restore soil agricultural functions. Replacing inorganic fertilizers in the agromining system could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions, leading to the recovery of ecosystem services. Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties
Procedia PDF Downloads 164
691 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa-Fluid Induced Seismicity
Authors: Artur Cichowicz
Abstract:
The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been on-going for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail. Special attention was given to seismic source parameters such as magnitude, scalar seismic moment and static stress drop. Most events are located within historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area appears not to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^13 Nm/year. Constant seismic deformation was not observed over the last 5 years; the deviation from the average is in the order of 6x10^13 Nm/year, which is significant. The variation of cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm. This is equivalent to one earthquake of magnitude 3.9, which is significantly less than what was experienced during the mining operation.
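The quoted equivalence between the cumulative moment and a single event can be checked against the standard Hanks-Kanamori moment magnitude relation, Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

```python
import math

# Moment magnitude from scalar seismic moment (Hanks & Kanamori relation).
def moment_magnitude(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Cumulative moment released over the five-year period, as quoted above.
mw = moment_magnitude(9.0e14)   # approximately Mw 3.9
```

The same relation applied per event is what separates the "strong" events above Mw 2 (M0 above roughly 1.3x10^12 Nm) from the background seismicity.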
Characterization of seismicity triggered by a rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drop in this study varied from 0.05 MPa to 10 MPa. It was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction. Changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has a shorter inter-event time in comparison to a random distribution. This behaviour corresponds to a clustering of events, in which short recurrence times tend to be close to each other, forming clusters of events.
Keywords: inter-event time, fluid-induced seismicity, mine closure, spectral parameters of seismic source
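The reported equivalence between a cumulative moment of 9.0x10^14 Nm and one magnitude 3.9 earthquake can be checked with the standard Hanks-Kanamori moment magnitude relation; this is a sketch, since the abstract does not state which moment-magnitude conversion the author used:

```python
import math

def moment_magnitude(m0_nm: float) -> float:
    """Hanks-Kanamori moment magnitude Mw from scalar seismic moment (in Nm)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Cumulative seismic moment released in the Central Rand Basin over five years
print(round(moment_magnitude(9.0e14), 1))  # -> 3.9
```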
Procedia PDF Downloads 283
690 Effects of Temperature and Mechanical Abrasion on Microplastics
Authors: N. Singh, G. K. Darbha
Abstract:
Since the last decade, a wave of research has begun to study the prevalence and impact of ever-increasing plastic pollution in the environment. The wide application and ubiquitous distribution of plastic have become a global concern due to its persistent nature. The disposal of plastics has emerged as one of the major challenges for waste management landfills. Microplastics (MPs) have found their way into almost every environment, from high-altitude mountain lakes to deep-sea sediments, polar icebergs, coral reefs, estuaries, beaches, and rivers. Microplastics are fragments of plastic with size less than 5 mm. Microplastics can be classified as primary microplastics and secondary microplastics. Primary microplastics include those purposefully introduced into end products for consumers (microbeads used in facial cleansers, personal care products, etc.), pellets (used in manufacturing industries) or fibres (from textile industries) which finally enter the environment. Secondary microplastics are formed by the disintegration of larger fragments under exposure to sunlight and mechanical abrasive forces from rain, waves, wind and/or water. A number of factors affect the quantity of microplastic present in freshwater environments. In addition to physical forces, human population density proximal to the water body, proximity to urban centres, water residence time, and size of the water body also affect plastic properties. With time, other complex processes in nature, physical, chemical and biological, break down plastics by interfering with their structural integrity. Several studies demonstrate that microplastics are found in wastewater sludge used as manure on agricultural fields, and thus have the potential to alter the soil environment and influence the microbial population as well. 
Inadequate data are available on the fate and transport of microplastics under varying environmental conditions; such data are required to supplement important information for further research. In addition, microplastics have the tendency to adsorb heavy metals and hydrophobic organic contaminants such as PAHs and PCBs from their surroundings, thus acting as carriers for these contaminants in the environmental system. In this study, three kinds of microplastics (polyethylene, polypropylene and expanded polystyrene) of different densities were chosen. Plastic samples were placed in sand with different aqueous media (distilled water, surface water, groundwater and marine water). They were incubated at varying temperatures (25, 35 and 40 °C) and agitation levels (rpm). The results show that the number of plastic fragments increased with temperature and agitation speed. Moreover, the rate of disintegration of expanded polystyrene is high compared to other plastics. These results demonstrate that temperature, salinity, and mechanical abrasion play a major role in the degradation of plastics. Since weathered microplastics are more harmful than virgin microplastics, long-term studies involving other environmental factors are needed to gain a better understanding of plastic degradation.
Keywords: environmental contamination, fragmentation, microplastics, temperature, weathering
Procedia PDF Downloads 169
689 Cross-Cultural Adaptation and Content Validation of the Assessment Instrument Preschooler Awareness of Stuttering Survey
Authors: Catarina Belchior, Catarina Martins, Sara Mendes, Ana Rita S. Valente, Elsa Marta Soares
Abstract:
Introduction: The negative feelings and attitudes that a person who stutters can develop are extremely relevant when considering assessment and intervention in Speech and Language Therapy. This relates to the fact that the person who stutters can experience feelings such as shame, fear and negative beliefs when communicating. Considering the complexity and importance of integrating diverse aspects in stuttering intervention, it is central to identify those emotions as early as possible. Therefore, this research aimed to achieve the translation and adaptation to European Portuguese, and to analyze the content validation, of the Preschooler Awareness of Stuttering Survey (Abbiati, Guitar & Hutchins, 2015), an instrument that allows the assessment of the impact of stuttering on preschool children who stutter, considering feelings and attitudes. Methodology: Cross-sectional descriptive qualitative research. The following methodological procedures were followed: translation, back-translation, panel of experts and pilot study. This abstract describes the results of the first three phases of this process. The translation was accomplished by two Speech and Language Therapists (SLTs). Both professionals have more than five years of experience and are users of the English language. One of them has broad experience in the field of stuttering. Back-translation was conducted by two bilingual individuals without experience in health or any knowledge of the instrument. The panel of experts was composed of three other SLTs, experts in the field of stuttering. Results and Discussion: In the translation and back-translation process, it was possible to verify differences in the semantic and idiomatic equivalence of several concepts and expressions, as well as the need to include new information to enhance understanding of the application of the instrument. The meeting between the two translators and the researchers allowed the achievement of a consensus version that was used in back-translation. 
Considering adaptation and content validation, the main change made by the experts concerned the conceptual equivalence of the questions and answers on the instrument's sheets. Considering that in the translated consensus version the questions began with various words such as 'is' or 'the cow', and that the answers did not contain the adverb 'much' as in the original instrument, the panel agreed that it would be more appropriate if all the questions started with 'how' and all the answers contained the adverb 'much'. This decision was made to ensure that the translated instrument would be similar to the original, so that the results obtained with the original and the translated instrument could be compared. One semantic equivalence between concepts was also established. The panel of experts found all other items and specificities of the instrument adequate, concluding that the instrument is appropriate for its objectives and intended target population. Conclusion: This research aspires to diversify the existing validated resources in this scope, adding a new instrument that allows the assessment of preschool children who stutter. Consequently, it is hoped that this instrument will provide a real and reliable assessment that can lead to an appropriate therapeutic intervention according to the characteristics and needs of each child.
Keywords: stuttering, assessment, feelings and attitudes, speech language therapy
Procedia PDF Downloads 149
688 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System
Authors: Tamar Trop
Abstract:
Bike sharing systems can be important elements of smart cities as they have the potential for impact on multiple levels. These systems can add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses and tourism. Bike-sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents, in over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike sharing systems have become prevalent only in the last decade, the existing literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, using the Envision™ rating system as a methodological framework and the Israeli 'Tel-O-Fun' bike sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of all types and sizes of infrastructure projects. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. This evaluation method has 60 sustainability criteria divided into five categories: quality of life, leadership, resource allocation, natural world, and climate and risk. The 'Tel-O-Fun' project was launched in Tel Aviv-Yafo in 2011 and today provides about 1,800 bikes for rent at 180 rental stations across the city. 
The system is based on a complex computer terminal located at the docking stations. The highest-rated sustainable features of the project include: (a) improving quality of life by offering a low-cost and efficient form of public transit, improving community mobility and access, enabling flexible travel within a multimodal transportation system, saving commuters time and money, enhancing public health, and reducing air and noise pollution; (b) improving resource allocation by offering inexpensive and flexible last-mile connectivity, reducing space, material and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure; and (c) reducing greenhouse gas emissions from transportation. Overall, the 'Tel-O-Fun' project scored highly as an environmentally sustainable and socially equitable infrastructure. The use of this practical framework for evaluation also yielded various interesting insights into the shortcomings of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike sharing systems in developing a sustainable, efficient and reliable transportation infrastructure within smart cities.
Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure
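Envision awards points credit by credit within its five categories. The abstract does not publish 'Tel-O-Fun''s point totals, so the sketch below only illustrates how per-category and overall achievement percentages could be aggregated; every number is a placeholder, not an actual Envision score:

```python
# Hypothetical credit points awarded vs. available per Envision category.
credits_awarded = {
    "Quality of Life": 85,
    "Leadership": 40,
    "Resource Allocation": 70,
    "Natural World": 55,
    "Climate and Risk": 60,
}
credits_available = {
    "Quality of Life": 100,
    "Leadership": 80,
    "Resource Allocation": 110,
    "Natural World": 90,
    "Climate and Risk": 70,
}

total_awarded = sum(credits_awarded.values())
total_available = sum(credits_available.values())
overall_pct = 100.0 * total_awarded / total_available

for cat in credits_awarded:
    pct = 100.0 * credits_awarded[cat] / credits_available[cat]
    print(f"{cat}: {pct:.0f}% of available points")
print(f"Overall achievement: {overall_pct:.1f}%")
```

A per-category breakdown like this is what surfaces the "shortcomings of the system" the authors mention: a low percentage in one category flags where the project under-performs even if the overall score is high.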
Procedia PDF Downloads 339
687 Literacy Practices in Immigrant Detention Centers: A Conceptual Exploration of Access, Resistance, and Connection
Authors: Mikel W. Cole, Stephanie M. Madison, Adam Henze
Abstract:
Since 2004, the U.S. immigrant detention system has imprisoned more than five million people. President John F. Kennedy famously dubbed this country a “Nation of Immigrants.” Like many of the nation’s imagined ideals, the historical record finds its practices have never lived up to the tenets championed as defining qualities. The United Nations High Commissioner for Refugees argues that the educational needs of people in carceral spaces, especially those in immigrant detention centers, are urgent and supported by human rights guarantees. However, there is a genuine dearth of literacy research in immigrant detention centers, compounded by a general lack of access to these spaces. Denying access to literacy education in detention centers is one way the history of xenophobic immigration policy persists. In this conceptual exploration, first-hand accounts from detained individuals, their families, and the organizations that work with them have been shared with the authors. In this paper, the authors draw on experiences, reflections, and observations from serving as volunteers to develop a conceptual framework for the ways in which literacy practices are enacted in detention centers. Literacy is an essential tool for reaching those detained in immigrant detention centers and a critical tool for those being detained to access legal and other services. One of the most striking things about the detention center is learning how to behave; gaining access for a visit is neither intuitive nor straightforward. The men experiencing detention are also at a disadvantage: the lack of access to their own documents is a profound barrier to navigating the complex immigration process. Literacy is much more than a skill for gathering knowledge or accessing carceral spaces; literacy is fundamentally a source of personal empowerment. 
Frequently, men find a way to reclaim their sense of dignity through work on their own terms by exchanging their literacy services for products or credits at the commissary. They write cards and letters for fellow detainees, read mail, and manage the exchange of information between the men and their families. In return, the men who have jobs trade items from the commissary or transfer money to the accounts of the men doing the reading, writing, and drawing. Literacy serves as a form of resistance by providing an outlet for productive work. At its core, literacy is the exchange of ideas between an author and a reader and is a primary source of human connection for individuals in carceral spaces. Father’s Day and Christmas are particularly difficult at detention centers. Men weep when speaking about their children and the overwhelming hopelessness they feel at being separated from them. Yet card-writing campaigns have provided these men with words of encouragement as thousands of hand-written cards make their way to the detention center. There are undoubtedly more literacies being practiced in the immigrant detention center where we work and at other detention centers across the country, and these categories are early conceptions with which we are still wrestling.
Keywords: detention centers, education, immigration, literacy
Procedia PDF Downloads 126
686 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands
Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen
Abstract:
Halal is a dietary law observed by people of the Islamic faith. It is a type of credence food quality, one which cannot be easily assured by consumers even upon and after consumption. Therefore, Halal certification serves as a practical tool for consumers to make an informed choice, particularly in a non-Muslim-majority country such as the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities. Furthermore, willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected, i.e., the slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consume Halal beef completed the survey, comprising Dutch consumers (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of the respondents (95%) were within the age range of 18-45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). The majority of the respondents (76%) had a disposable monthly income of less than € 2,500, while the rest earned more than € 2,500. The respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, of which 62% of the respondents considered themselves not knowledgeable. The findings indicated that the slaughter method was valued as the most important attribute, followed by Halal certification, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for the slaughter method. Both the Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute. 
However, non-Dutch respondents valued it with higher importance (0.20) than their Dutch counterparts (0.16). For non-Dutch respondents, price was more important than Halal certification. The ideal product preferred by consumers, i.e., the product serving the highest utilities for consumers, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at € 2.75 per 500 g. In general, an official Halal certifier was most preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTPs of -0.73 €, -0.93 €, and -1.03 € for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with price. WTP estimates differed across sociodemographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information might cause respondents to perceive it as the least important attribute. In the context of Halal-certified meat, adding traceability information to meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example, the use of pre-slaughter stunning, and second, it can assure its safety. Therefore, integrating traceability information into meat packaging can help consumers make informed decisions on both Halal status and food safety.
Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay
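In a discrete choice experiment analyzed with a conditional logit model, the WTP for an attribute level is the negative ratio of its utility coefficient to the price coefficient. The coefficients below are hypothetical, chosen only so this sketch reproduces the WTP estimates reported in the abstract; the paper's actual estimated coefficients are not given:

```python
# WTP_k = -(beta_k / beta_price): euros a respondent would pay for level k.
beta_price = -0.80  # hypothetical price coefficient (utility per euro)
beta_certifier = {  # hypothetical utility coefficients per certifier type
    "small": -0.584,
    "official": -0.744,
    "international": -0.824,
}

wtp = {level: -(b / beta_price) for level, b in beta_certifier.items()}
for level, value in wtp.items():
    print(f"{level}: {value:.2f} EUR")  # all negative: no premium paid
```

Because both the price coefficient and the certifier coefficients are negative, the ratios come out negative, which is exactly the "no willingness to pay a premium" pattern the study reports.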
Procedia PDF Downloads 127
685 A Development of an Innovator Teachers Training Curriculum to Create Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement in Private Schools in Phayao Province
Authors: Palita Sooksamran, Katcharin Mahawong
Abstract:
This research aims to offer the development of an innovator teachers training curriculum for creating instructional innovation according to the active learning approach to enhance learning achievement. The research and development process was carried out in three steps. Step 1, a study of the needs necessary to develop a training curriculum: the inquiry was conducted with a sample of teachers in private schools in Phayao province that provide basic education, using a questionnaire administered to 176 people; the sample was defined using a table of random numbers and stratified sampling, with the school as the stratification layer. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in data analysis were the mean (x̄) and standard deviation (S.D.). Step 3, a study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied. The sample consisted of 35 teachers from private schools in Phayao province, who volunteered to participate. The results of the research showed that: 1. The needs ranked by the essential-needs index, in descending order, were the selection and creation of multimedia, videos and applications for learning management at the highest level, followed by the development of multimedia, videos and applications for learning management, and the selection of innovative learning management techniques and methods for solving learning problems, respectively. 2. The components of the training curriculum include principles, aims, scope of content, training activities, learning materials and resources, and supervision evaluation. 
The scope of the curriculum consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The evaluation rated the quality of the draft training curriculum at the highest level; the experts' suggestion was that the objectives of the course should use wording that conveys outcomes. 3. Regarding the effectiveness of the training curriculum: 1) the cognitive outcomes of the teachers in creating innovative learning management were at a high level of relative gain score; 2) the assessment of learning management ability according to the active learning approach to enhance learning achievement, assessed by two education supervisors, was very high overall; 3) concerning the quality of innovative learning management based on the active learning approach to enhance learning achievement, 7 instructional innovations were evaluated as outstanding works and 26 instructional innovations passed the standard; 4) the overall learning achievement of students who learned from the 35 sample teachers was at a high level of relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.
Keywords: training curriculum, innovator teachers, active learning approach, learning achievement
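The abstract does not define its "relative gain score"; one common formulation is the normalized gain, the fraction of the possible improvement that was actually achieved between pretest and posttest. A minimal sketch under that assumption, with hypothetical scores:

```python
def relative_gain(pre: float, post: float, max_score: float) -> float:
    """Normalized (relative) gain: achieved improvement over possible improvement.
    One common definition; the abstract does not state which formula was used."""
    return (post - pre) / (max_score - pre)

# Hypothetical pretest/posttest scores out of a maximum of 30:
print(round(relative_gain(pre=12, post=24, max_score=30), 2))  # -> 0.67
```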
Procedia PDF Downloads 52
684 Peripheral Neuropathy after Locoregional Anesthesia
Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou
Abstract:
The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various types of limb pain are common in these patients. They often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, which can provide them with the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided local regional anaesthesia (LRA) in this patient population. Methodology: The study is an observational analytical study conducted over a period of eight years, from 2010 to 2018. It includes a total of 955 cases of revisions performed on lower limb stumps. The parameters analyzed in this study include the effectiveness of the block and the use of sedation, the duration of the block, the post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The study findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to psychological and bodily repair in lower-limb amputees. Additionally, the study emphasizes the use of alpha2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to the existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees. It highlights the potential of ultrasound-guided LRA and the use of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. 
Data Collection and Analysis Procedures: Data for this study were collected through the analysis of medical records and relevant documentation related to the 955 cases included in the study. The effectiveness of the anaesthetic technique, duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique would be most suitable for lower-limb amputees to provide them with optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, along with the use of alpha2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach to providing comfort and improving post-operative analgesia for lower-limb amputees. This technique can potentially contribute to the psychological and bodily repair of these patients. The findings of this study have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.
Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 questionnaire, EMG
Procedia PDF Downloads 77
683 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions in which the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. 
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
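The PCA step described above can be sketched as follows, using simulated returns in place of the paper's matrix of world indices: the centered return matrix is decomposed by SVD, and the scores on the leading components form mutually uncorrelated factor series, which is the property the GARCHX calibration relies on.

```python
import numpy as np

# Simulated placeholder data: 500 daily returns for 8 world indices.
rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 8))

centered = returns - returns.mean(axis=0)
# Principal components via SVD of the centered return matrix.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
factors = centered @ vt.T[:, :3]  # scores on the first three components

# The factor series are mutually uncorrelated by construction.
corr = np.corrcoef(factors, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))
print(np.allclose(off_diag, 0.0, atol=1e-8))  # -> True
```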
Procedia PDF Downloads 296
682 Developing Computational Thinking in Early Childhood Education
Authors: Kalliopi Kanaki, Michael Kalogiannakis
Abstract:
Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators are investigating the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while, at the same time, getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of second grade. 
The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the purpose proposed, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research conducted that deserve attention are noted.
Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses
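The class/object/attribute concepts that PhysGramming introduces can be illustrated with a few lines of plain Python; this is only a conceptual analogue, since the paper does not reproduce PhysGramming's own hybrid visual/text syntax:

```python
class Animal:                      # a class: a reusable template for game pieces
    def __init__(self, name, sound):
        self.name = name           # attributes: properties belonging to one object
        self.sound = sound

    def speak(self):
        return f"{self.name} says {self.sound}!"

cow = Animal("Cow", "moo")         # an object: one concrete instance of the class
print(cow.speak())                 # -> Cow says moo!
```

A matching game built from such pieces would simply create several objects from the same class and compare their attributes, which is the kind of implicit familiarization the paper describes.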
Procedia PDF Downloads 118
681 Single Cell and Spatial Transcriptomics: A Beginner's Viewpoint from the Conceptual Pipeline
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNA-seq), unveil the nature of gene expression in the cell. The gene expression profiles obtained provide clues to cellular traits and their dynamics, which can be studied in relation to function and response. RNA-seq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single cell and spatial transcriptomics both present avenues for exposing the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, autoimmune diseases, and hematopoietic diseases, among others, from investigated biological tissue samples. Single cell transcriptomics permits a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNA-seq), which enables high-throughput gene expression studies. However, this technique generates expression data for many cells while losing the cells’ positional coordinates within the tissue. As science develops, the use of complementary pre-established tissue reference maps, built with molecular and bioinformatics techniques, has innovatively sprung forth and is now used to resolve this setback, producing both levels of data in one shot of scRNA-seq analysis. This is an emerging conceptual approach for integrative and progressively dependable transcriptomics analysis. It can support in-situ-fashioned analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and exposit the nature of cell-to-cell interactions.
These genomic signatures and characterizations are also vital for clinical applications. Over the past decades, RNA-seq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. On the other hand, spatial transcriptomics is tissue-level based and is utilized to study biological specimens with heterogeneous features. It exposits the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell lineage trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two approaches to RNA transcriptomics, applied at different scales of cellular resolution and with avenues for appropriate resolution of their limitations, the study of gene expression from mRNA molecules has become interesting, progressive, and developmental, helping to tackle health challenges head-on.
Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression
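The integration step described above, attaching positional coordinates from a pre-established tissue reference map to per-cell expression profiles from scRNA-seq, can be sketched conceptually. This is not a real analysis pipeline; the cell identifiers, gene names, and coordinates below are all invented for illustration:

```python
# Conceptual sketch (hypothetical data): joining a cells-by-genes expression
# table from scRNA-seq with a reference map supplying each cell's (x, y)
# position in the tissue section, yielding both levels of data at once.

# cell -> gene expression counts (from scRNA-seq; positions are lost here)
expression = {
    "cell_1": {"GeneA": 12, "GeneB": 0},
    "cell_2": {"GeneA": 3, "GeneB": 7},
}

# pre-established reference map: cell -> (x, y) coordinates in the tissue
reference_map = {
    "cell_1": (0.4, 1.2),
    "cell_2": (2.1, 0.8),
}

def annotate_with_position(expression, reference_map):
    """Attach spatial coordinates to each cell's expression profile."""
    return {
        cell: {"position": reference_map[cell], "expression": genes}
        for cell, genes in expression.items()
    }

result = annotate_with_position(expression, reference_map)
print(result["cell_1"]["position"])  # -> (0.4, 1.2)
```

The joined structure is what enables in-situ-fashioned analysis: each expression profile can now be interpreted in the context of where the cell sat in the tissue.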
Procedia PDF Downloads 120
680 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective of these techniques. It has high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, in the regime of Brownian motion and under the action of the field, to obtain the parameters of the investigated object. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method for the examination of liquid solutions, including biological fluids.
Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
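The signal-processing step described in the abstract, computing the autocorrelation function and the Fourier spectrum of the photodetector signal to extract particle motion parameters, can be sketched on simulated data. The sampling rate, Doppler frequency, and noise level below are invented for illustration and do not come from the study:

```python
# Illustrative sketch (simulated data): autocorrelation and power spectrum
# of a photodetector signal, as computed conceptually for electrophoretic
# light scattering. All signal parameters here are assumed, not measured.
import numpy as np

fs = 10_000                       # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s record -> 1000 samples
f_doppler = 250.0                 # Doppler shift from particle drift, Hz (assumed)
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.3 * rng.standard_normal(t.size)

# Power spectrum via FFT, then the autocorrelation function via the
# Wiener-Khinchin theorem (inverse FFT of the power spectrum).
spectrum = np.abs(np.fft.rfft(signal)) ** 2
acf = np.fft.irfft(spectrum)[: t.size // 2]
acf /= acf[0]                     # normalize to unit value at zero lag

# Dominant frequency of the scattered-light signal (skip the DC bin);
# under an applied field this reflects the particles' drift velocity.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(peak)  # close to the 250 Hz Doppler component
```

In the experiment, the shape and decay of this autocorrelation function, measured at different microsphere concentrations and field magnitudes, carried the dependence reported as the main result.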
Procedia PDF Downloads 213