277 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are clearly more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore combine full characterization and exploitation of the backscattered field with wider coverage and lower system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by recently investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. The experiments show that the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
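The abstract above describes controlling the CNN training process through a cost function that combines several terms tied to different polarimetric scattering properties, but it does not give the terms themselves. The sketch below is a minimal, hypothetical illustration of such a composite loss in PyTorch; the channel layout, the span (total power) term, and the weights are assumptions for illustration, not the authors' actual formulation.

```python
import torch

def composite_polsar_loss(pred, target, w_ch=1.0, w_span=0.1):
    """Hypothetical composite loss for pseudo full-pol reconstruction.

    pred, target: tensors of shape (batch, channels, H, W) holding the
    reconstructed and reference polarimetric channels (e.g. |HH|, |HV|, |VV|).
    """
    # Per-channel fidelity term (mean squared error over all channels)
    channel_term = torch.mean((pred - target) ** 2)
    # Span term: the total backscattered power should also be preserved
    span_pred = pred.pow(2).sum(dim=1)
    span_target = target.pow(2).sum(dim=1)
    span_term = torch.mean((span_pred - span_target) ** 2)
    # The weighted combination controls which scattering property dominates training
    return w_ch * channel_term + w_span * span_term
```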
Procedia PDF Downloads 90
276 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure
Authors: Sara Saboonian, Pierre Filion
Abstract:
The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model, one that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. However, these three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents’ preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized the planners’ concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents’ participation. Moreover, the results from the research provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for the implementation programs. This knowledge base should then be translated into forms that effectively engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks alongside green infrastructure measures, are required.
Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.
Keywords: ecosystem services, green infrastructure, intensification, planning
Procedia PDF Downloads 355
275 The Development of User Behavior in Urban Regeneration Areas by Utilizing the Floating Population Data
Authors: Jung-Hun Cho, Tae-Heon Moon, Sun-Young Heo
Abstract:
Many urban problems caused by urbanization and industrialization have occurred around the world. In particular, the creation of satellite towns, driven by the outward expansion of cities, has led to traffic problems and the hollowing-out of old towns, raising the necessity of urban regeneration in old towns along with the aging of existing urban infrastructure. To select urban regeneration priority regions for the strategic execution of urban regeneration in Korea, population size, the number of businesses, and the degree of deterioration were chosen as criteria. These existing criteria, however, were limited in their ability to address urban problems fundamentally or to keep pace with rapidly changing conditions. Therefore, it was necessary to add new indicators that can reflect the decline of the relevant cities and their conditions. In this regard, this study selected Busan Metropolitan City, Korea, as the target area: an international port city where urban regeneration has been active, much like Yokohama, Japan. Prior to setting the urban regeneration priority regions, actual local conditions should be reflected, because uniform and uncharacterized projects have been implemented without quantitative analysis of population behavior within the regions. For this reason, this study conducted a characterization analysis and type classification of the target areas based on user behaviors, using floating population data, a representative form of big data that has recently attracted wide attention. While the 23 regions of the existing Busan Metropolitan City urban regeneration priority area had been classified into three types, the type classification based on user behaviors divided the same 23 regions into four types. The four types were as follows: type (Ⅰ), young people, morning type; type (Ⅱ), the old and middle-aged, general type with sharp commuting peaks in floating population; type (Ⅲ), the old and middle-aged, 24-hour type; and type (Ⅳ), the old and middle-aged with little floating population. Each of the four types showed distinct regional characteristics, and the results based on user behaviors differed from those of the existing urban regeneration priority classification. According to the results, in type (Ⅰ) young people were the majority around the existing old built-up area, where the floating population at dawn is four times larger than in other areas. In type (Ⅱ), there were many old and middle-aged people around the existing built-up area and general neighborhoods, where the average floating population was higher than in other areas due to commuting, while in type (Ⅲ), the floating population hardly changed over 24 hours, although old and middle-aged people were the majority around the existing general neighborhoods. Type (Ⅳ) includes the existing economy-based, central built-up area, and general neighborhood types, where old and middle-aged people were the majority, a general commuting type with little floating population.
Unlike the existing urban regeneration priority classification, these regions were sub-divided by type, and in this study approach methods and basic orientations of urban regeneration were set for each type to reflect reality to a certain degree, using indicators of effective floating population to identify the dynamic activity of urban areas and the existing regeneration priority areas in connection with regional urban regeneration projects. Therefore, it is possible to make effective urban plans on the substantial ground of scientific and quantitative data. To induce more realistic and effective regeneration projects, regeneration projects tailored to present local conditions should be developed by reflecting those conditions in the formulation of urban regeneration strategic plans.
Keywords: floating population, big data, urban regeneration, urban regeneration priority region, type classification
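The abstract above classifies 23 regions into four types from hourly floating-population profiles but does not name the classification algorithm. The following is a minimal sketch of one way such a typology could be derived, assuming k-means clustering over 24-hour profiles; the data layout, the normalization, and the choice of k=4 are illustrative assumptions only, not the study's actual method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per region, 24 hourly floating-population counts
# (e.g. averaged over the study period), shape (23, 24).
profiles = np.random.default_rng(0).poisson(lam=500, size=(23, 24)).astype(float)

# Normalize each region's profile so clustering reflects the daily *shape*
# (morning peak, commuting peaks, flat 24-hour activity) rather than volume.
shapes = profiles / profiles.sum(axis=1, keepdims=True)
features = StandardScaler().fit_transform(shapes)

# Partition the regions into four behavioral types (k chosen to match the study).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for region_id, label in enumerate(kmeans.labels_):
    print(f"region {region_id:2d} -> type {label}")
```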
Procedia PDF Downloads 213
274 Social and Political Economy of Paid and Unpaid Work: Work of Women Home Based Workers in National Capital Region (NCR), India
Authors: Sudeshna Sengupta
Abstract:
Women’s work lives weave a complex fabric of myriad work relations and complex structures. Lives, when seen from the lens of work, is a saga of conjugated oppression by intertwined structures that are vertically and horizontally interwoven in a very complex manner. Women interact with multiple institutions through their work. The interactions and interplay of institutions shape their organization of work. They intersperse productive work with reproductive work, unpaid economic activities with unpaid care work, and all kinds of activities with leisure and self-care. The proposed paper intends to understand how women working as home-based workers in the National Capital Region (NCR) of India are organizing their everyday work, and how the organization of work is influenced by the interplay of structures. Situating itself in a multidisciplinary theoretical framework, this paper brings out how the gendering of work is playing out in the political, economic and social domain and shaping the work-life within the family, and in the paid workspace. The paper will use a primary data source, which is qualitative in nature. It will comprise 15 qualitative interviews of women home-based workers from the National Capital Region. The research uses a life history approach. The sampling was purposive using snowballing as a method. The dataset is part of the primary data (qualitative) collected for the ongoing Ph.D. work in Gender Studies at Ambedkar University Delhi. The home-based workers interviewed were in “non-factory” wage relations based on piece rates with flexible working hours. Their workplaces were their own homes with no spatial divide between living spaces and workspaces. Home-based workers were recognized as a group in the domain of labor economics in the 1980s. When menial work was cheaper than machine work, the capital owners preferred to outsource work as home-based work to women. These production spaces are fragmented and the identity of gender is created within labor processes to favor material accumulation. Both the employers and employees acknowledged the material gain of the capital owner when work was subcontracted to women at home. Simultaneously the market reinforced women’s reproductive role by conforming to patriarchal ideology. The contractors played an important role in implementing localized control on workers and also in finding workers for fragmented, gendered production processes. Their presence helped the employers in bringing together multiple forms of oppression that ranged from creating a structure to flout laws by creating shadow employers. It created an intertwined social and economic structure as well as a workspace where the line between productive and reproductive work gets blurred. The state invisibilized itself either by keeping the sector out of the domain of laws or by not implementing its own laws regulating working conditions or social security. It allowed the local hierarchy to function and define localized working conditions. The productive reproductive continuum reveals a labor control that influenced both the productive and reproductive work of women.Keywords: informal sector, paid work, women workers, labor processes
Procedia PDF Downloads 161
273 From Avatars to Humans: A Hybrid World Theory and Human Computer Interaction Experimentations with Virtual Reality Technologies
Authors: Juan Pablo Bertuzzi, Mauro Chiarella
Abstract:
Employing a communication studies perspective and a socio-technological approach, this paper introduces a theoretical framework for understanding the concept of hybrid world; the avatarization phenomena; and the communicational archetype of co-hybridization. This analysis intends to make a contribution to future design of virtual reality experimental applications. Ultimately, this paper presents an ongoing research project that proposes the study of human-avatar interactions in digital educational environments, as well as an innovative reflection on inner digital communication. The aforementioned project presents the analysis of human-avatar interactions, through the development of an interactive experience in virtual reality. The goal is to generate an innovative communicational dimension that could reinforce the hypotheses presented throughout this paper. Being thought for its initial application in educational environments, the analysis and results of this research are dependent and have been prepared in regard of a meticulous planning of: the conception of a 3D digital platform; the interactive game objects; the AI or computer avatars; the human representation as hybrid avatars; and lastly, the potential of immersion, ergonomics and control diversity that can provide the virtual reality system and the game engine that were chosen. The project is divided in two main axes: The first part is the structural one, as it is mandatory for the construction of an original prototype. The 3D model is inspired by the physical space that belongs to an academic institution. The incorporation of smart objects, avatars, game mechanics, game objects, and a dialogue system will be part of the prototype. These elements have all the objective of gamifying the educational environment. To generate a continuous participation and a large amount of interactions, the digital world will be navigable both, in a conventional device and in a virtual reality system. This decision is made, practically, to facilitate the communication between students and teachers; and strategically, because it will help to a faster population of the digital environment. The second part is concentrated to content production and further data analysis. The challenge is to offer a scenario’s diversity that compels users to interact and to question their digital embodiment. The multipath narrative content that is being applied is focused on the subjects covered in this paper. Furthermore, the experience with virtual reality devices proposes users to experiment in a mixture of a seemingly infinite digital world and a small physical area of movement. This combination will lead the narrative content and it will be crucial in order to restrict user’s interactions. The main point is to stimulate and to grow in the user the need of his hybrid avatar’s help. By building an inner communication between user’s physicality and user’s digital extension, the interactions will serve as a self-guide through the gameworld. This is the first attempt to make explicit the avatarization phenomena and to further analyze the communicational archetype of co-hybridization. The challenge of the upcoming years will be to take advantage from these forms of generalized avatarization, in order to create awareness and establish innovative forms of hybridization.Keywords: avatar, hybrid worlds, socio-technology, virtual reality
Procedia PDF Downloads 142
272 Governance of Climate Adaptation Through Artificial Glacier Technology: Lessons Learnt from Leh (Ladakh, India) In North-West Himalaya
Authors: Ishita Singh
Abstract:
Social-dimension of Climate Change is no longer peripheral to Science, Technology and Innovation (STI). Indeed, STI is being mobilized to address small farmers’ vulnerability and adaptation to Climate Change. The experiences from the cold desert of Leh (Ladakh) in North-West Himalaya illustrate the potential of STI to address the challenges of Climate Change and the needs of small farmers through the use of Artificial Glacier Techniques. Small farmers have a unique technique of water harvesting to augment irrigation, called “Artificial Glaciers” - an intricate network of water channels and dams along the upper slope of a valley that are located closer to villages and at lower altitudes than natural glaciers. It starts to melt much earlier and supplements additional irrigation to small farmers’ improving their livelihoods. Therefore, the issue of vulnerability, adaptive capacity and adaptation strategy needs to be analyzed in a local context and the communities as well as regions where people live. Leh (Ladakh) in North-West Himalaya provides a Case Study for exploring the ways in which adaptation to Climate Change is taking place at a community scale using Artificial Glacier Technology. With the above backdrop, an attempt has been made to analyze the rural poor households' vulnerability and adaptation practices to Climate Change using this technology, thereby drawing lessons on vulnerability-livelihood interactions in the cold desert of Leh (Ladakh) in North-West Himalaya, India. The study is based on primary data and information collected from 675 households confined to 27 villages of Leh (Ladakh) in North-West Himalaya, India. It reveals that 61.18% of the population is driving livelihoods from agriculture and allied activities. With increased irrigation potential due to the use of Artificial Glaciers, food security has been assured to 77.56% of households and health vulnerability has been reduced in 31% of households. Seasonal migration as a livelihood diversification mechanism has declined in nearly two-thirds of households, thereby improving livelihood strategies. Use of tactical adaptations by small farmers in response to persistent droughts, such as selling livestock, expanding agriculture lands, and use of relief cash and foods, have declined to 20.44%, 24.74% and 63% of households. However, these measures are unsustainable on a long-term basis. The role of policymakers and societal stakeholders becomes important in this context. To address livelihood challenges, the role of technology is critical in a multidisciplinary approach involving multilateral collaboration among different stakeholders. The presence of social entrepreneurs and new actors on the adaptation scene is necessary to bring forth adaptation measures. Better linkage between Science and Technology policies, together with other policies, should be encouraged. Better health care, access to safe drinking water, better sanitary conditions, and improved standards of education and infrastructure are effective measures to enhance a community’s adaptive capacity. However, social transfers for supporting climate adaptive capacity require significant amounts of additional investment. Developing institutional mechanisms for specific adaptation interventions can be one of the most effective ways of implementing a plan to enhance adaptation and build resilience.Keywords: climate change, adaptation, livelihood, stakeholders
Procedia PDF Downloads 70
271 Structural Fluxionality of Luminescent Coordination Compounds with Lanthanide Ions
Authors: Juliana A. B. Silva, Caio H. T. L. Albuquerque, Leonardo L. dos Santos, Cristiane K. Oliveira, Ivani Malvestiti, Fernando Hallwass, Ricardo L. Longo
Abstract:
Complexes with lanthanide ions have been extensively studied due to their applications as luminescent, magnetic and catalytic materials as molecular or extended crystals, thin films, glasses, polymeric matrices, ionic liquids, and in solution. NMR chemical shift data in solution have been reported and suggest fluxional structures in a wide range of coordination compounds with rare earth ions. However, the fluxional mechanisms for these compounds are still not established. This structural fluxionality may affect the photophysical, catalytic and magnetic properties in solution. Thus, understanding the structural interconversion mechanisms may aid the design of coordination compounds with, for instance, improved (electro)luminescence, catalytic and magnetic behaviors. The [Eu(btfa)₃bipy] complex, where btfa= 4,4,4-trifluoro-1-phenyl-1,3-butanedionate and bipy= 2,2’-bipiridyl, has a well-defined X-ray crystallographic structure and preliminary 1H NMR data suggested a structural fluxionality. Thus, we have investigated a series of coordination compounds with lanthanide ions [Ln(btfa)₃L], where Ln = La, Eu, Gd or Yb and L= bipy or phen (phen=1,10-phenanthroline) using a combined theoretical-experimental approach. These complexes were synthesized and fully characterized, and detailed NMR measurements were obtained. They were also studied by quantum chemical computational methods (DFT-PBE0). The aim was to determine the relevant factors in the structure of these compounds that favor or not the fluxional behavior. Measurements of the 1H NMR signals at variable temperature in CD₂Cl₂ of the [Eu(btfa)₃L] complexes suggest that these compounds have a fluxional structure, because the crystal structure has non-equivalent btfa ligands that should lead to non-equivalent hydrogen atoms and thus to more signals in the NMR spectra than those obtained at room temperature, where all hydrogen atoms of the btfa ligands are equivalent, and phen ligand has an effective vertical symmetry plane. For the [Eu(btfa)₃bipy] complex, the broadening of the signals at –70°C provides a lower bound for the coalescence temperature, which indicates the energy barriers involved in the structural interconversion mechanisms are quite small. These barriers and, consequently, the coalescence temperature are dependent upon the radii of the lanthanide ion as well as to their paramagnetic effects. The PBE0 calculated structures are in very good agreement with the crystallographic data and, for the [Eu(btfa)₃bipy] complex, this method provided several distinct structures with almost the same energy. However, the energy barrier for structural interconversion via dissociative pathways were found to be quite high and could not explain the experimental observations. Whereas the pseudo-rotation pathways, involving the btfa and bipy ligands, have very small activation barriers, in excellent agreement with the NMR data. The results also showed an increase in the activation barrier along the lanthanide series due to the decrease of the ionic radii and consequent increase of the steric effects. TD-DFT calculations showed a dependence of the ligand donor state energy with different structures of the complex [Eu(btfa)₃phen], which can affect the energy transfer rates and the luminescence. The energy required to promote the structural fluxionality may also enhance the luminescence quenching in solution. 
These results can aid in the design of more luminescent compounds and more efficient devices.
Keywords: computational chemistry, lanthanide-based compounds, NMR, structural fluxionality
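The abstract above links variable-temperature ¹H NMR coalescence to the activation barriers of the structural interconversion. As a point of reference, the standard dynamic-NMR relations turn a coalescence temperature and a frequency separation into an approximate free-energy barrier; the sketch below applies the usual two-site, equal-population expressions with illustrative input values, not data from the study.

```python
import math

# Physical constants (SI units)
R = 8.314462618        # gas constant, J mol^-1 K^-1
k_B = 1.380649e-23     # Boltzmann constant, J K^-1
h = 6.62607015e-34     # Planck constant, J s

def exchange_rate_at_coalescence(delta_nu_hz):
    """Two-site, equal-population exchange rate at coalescence: k_c = pi*dv/sqrt(2)."""
    return math.pi * delta_nu_hz / math.sqrt(2.0)

def delta_g_activation(t_coal_k, delta_nu_hz):
    """Free-energy barrier from the Eyring equation, dG = R*T*ln(k_B*T / (h*k_c))."""
    k_c = exchange_rate_at_coalescence(delta_nu_hz)
    return R * t_coal_k * math.log(k_B * t_coal_k / (h * k_c))

# Illustrative numbers only: coalescence near -70 degC (203 K), 150 Hz separation.
barrier_j_per_mol = delta_g_activation(203.0, 150.0)
print(f"approximate barrier: {barrier_j_per_mol / 1000:.1f} kJ/mol")
```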
Procedia PDF Downloads 199
270 SkyCar Rapid Transit System: An Integrated Approach of Modern Transportation Solutions in the New Queen Elizabeth Quay, Perth, Western Australia
Authors: Arfanara Najnin, Michael W. Roach, Jr., Dr. Jianhong Cecilia Xia
Abstract:
The SkyCar Rapid Transit System (SRT) is an innovative intelligent transport system for sustainable urban transport. This system will increase urban network connectivity and decrease urban traffic congestion. The SRT system is designed as a suspended Personal Rapid Transit (PRT) system that travels under a guideway 5 m above the ground. Driverless passenger travel is via pod-cars that hang from slender beams supported by columns that replace existing lamp posts. The beams are set up in a series of interconnecting loops providing non-stop travel from beginning to end, assuring journey times. The SRT forward movement is effected by magnetic motors built into the guideway. Passenger stops are either at line level, 5 m above the ground, or at ground level via a spur guideway that curves off the main thoroughfare. The main objective of this paper is to propose an integrated Automated Transit Network (ATN) technology for the future intelligent transport system in the urban built environment. To fulfil this objective, a 4D simulated model of the urban built environment has been proposed using the concept of the SRT-ATN system. The methodology for the design, construction, and testing parameters of a Technology Demonstrator (TD) for proof of concept and a Simulator (S) has been demonstrated. The completed TD and S will provide an excellent proving ground for the next development stage, the SRT Prototype (PT) and Pilot System (PS). This paper uses a 4D simulated model in the virtual built environment to show effectively how the SRT-ATN system works. OpenSim software has been used to develop the model in a virtual environment, and the scenario has been simulated to understand and visualize the proposed SkyCar Rapid Transit Network model. The SkyCar system will be fabricated in a modular form which is easily transported. The system would be installed in increasingly congested city centers throughout the world, as well as in airports, tourist resorts, race tracks, and other special-purpose locations serving the urban community. This paper shares the lessons learnt from the proposed innovation and provides recommendations on how to improve the future transport system in the urban built environment. Safety and security of passengers are prime factors to be considered for this transit system. Design requirements to meet these safety needs will be part of the research and development phase of the project. Operational safety aspects would also be developed during this period. The vehicles, the track and beam systems, and the stations are the main components that need to be examined in detail for the safety and security of patrons. Measures will also be required to protect columns adjoining intersections from errant vehicles in vehicular traffic collisions. The SkyCar Rapid Transit takes advantage of current disruptive technologies (batteries, sensors, 4G/5G communication, and solar energy), which will continue to reduce costs and make the systems more profitable. SkyCar's energy consumption is extremely low compared to other transport systems.
Keywords: SkyCar, rapid transit, Intelligent Transport System (ITS), Automated Transit Network (ATN), urban built environment, 4D Visualization, smart city
Procedia PDF Downloads 217
269 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model which establishes the initial innate strategies used by second language learners to connect form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learner’s cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants’ default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received Processing Instruction treatment except the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed the positive change in learners’ processing of the French target features after instruction with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt) and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically there was no significant difference between the pre-test and post-test scores in the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogenously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity for the study of unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrates changes in participants’ processing strategies. Gaze plots from pre- and post-tests display participants fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences in both primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. 
It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for second language grammar teaching.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
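The abstract above reports pre-/post-test changes in where learners fixate (content words versus verb endings) after Processing Instruction. A common way to quantify such a shift is to compare the proportion of fixation time falling on a verb-ending area of interest before and after treatment; the sketch below illustrates that comparison with a paired t-test, using hypothetical values rather than the study's actual eye-tracker export.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant proportions of fixation duration on the
# verb-ending area of interest (AOI), pre- and post-treatment.
pre_post = {
    "pre":  np.array([0.12, 0.08, 0.15, 0.10, 0.09, 0.14, 0.11, 0.13]),
    "post": np.array([0.21, 0.18, 0.25, 0.17, 0.20, 0.22, 0.19, 0.24]),
}

# Paired comparison: did attention shift toward the verb ending after treatment?
t_stat, p_value = stats.ttest_rel(pre_post["post"], pre_post["pre"])
mean_shift = float(np.mean(pre_post["post"] - pre_post["pre"]))
print(f"mean increase in AOI fixation proportion: {mean_shift:.3f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```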
Procedia PDF Downloads 280
268 The Influence of English Immersion Program on Academic Performance: Case Study at a Sino-US Cooperative University in China
Authors: Leah Li Echiverri, Haoyu Shang, Yue Li
Abstract:
Wenzhou-Kean University (WKU) is a Sino-US Cooperative University in China. It practices the English Immersion Program (EIP), where all the courses are taught in English. Class discussions and presentations are pervasively interwoven in designing students’ learning experiences. This WKU model has brought positive influences on students and is in some way ahead of traditional college English majors. However, literature to support the perceptions on the positive outcomes of this teaching and learning model remain scarce. The distinctive profile of Chinese-ESL students in an English Medium of Instruction (EMI) environment contributes further to the scarcity of literature compared to existing studies conducted among ESL learners in Western educational settings. Hence, the study investigated the students’ perceptions towards the English Immersion Program and determine how it influences Chinese-ESL students’ academic performance (AP). This research can provide empirical data that would be helpful to educators, teaching practitioners, university administrators, and other researchers in making informed decisions when developing curricular reforms, instructional and pedagogical methods, and university-wide support programs using this educational model. The purpose of the study was to establish the relationship between the English Immersion Program and Academic Performance among Chinese-ESL students enrolled at WKU for the academic year 2020-2021. Course length, immersion location, course type, and instructional design were the constructs of the English immersion program. English language learning, learning efficiency, and class participation were used to measure academic performance. Descriptive-correlational design was used in this cross-sectional research project. A quantitative approach for data analysis was applied to determine the relationship between the English immersion program and Chinese-ESL students’ academic performance. The research was conducted at WKU; a Chinese-American jointly established higher educational institution located in Wenzhou, Zhejiang province. Convenience, random, and snowball sampling of 283 students, a response rate of 10.5%, were applied to represent the WKU student population. The questionnaire was posted through the survey website named Wenjuanxing and shared to QQ or WeChat. Cronbach’s alpha was used to test the reliability of the research instrument. Findings revealed that when professors integrate technology (PowerPoint, videos, and audios) in teaching, students pay more attention. This contributes to the acquisition of more professional knowledge in their major courses. As to course immersion, students perceive WKU as a good place to study, providing them a high degree of confidence to talk with their professors in English. This also contributes to their English fluency and better pronunciation in their communication. In the construct of designing instruction, the use of pictures, video clips, and professors’ non-verbal communication, and demonstration of concern for students encouraged students to be more active in-class participation. Findings on course length and academic performance indicated that students’ perception regarding taking courses during fall and spring terms can moderately contribute to their academic performance. 
In conclusion, the findings revealed a significantly strong positive relationship between course type, immersion location, instructional design, and academic performance.
Keywords: class participation, English immersion program, English language learning, learning efficiency
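The abstract above mentions using Cronbach's alpha to test instrument reliability and a correlational design linking the EIP constructs to academic performance. As a small illustration of that analysis pipeline, the sketch below computes Cronbach's alpha for a set of Likert items and a Pearson correlation between two construct scores; the item counts and data are hypothetical, not the WKU survey.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = Likert items of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Hypothetical 5-point Likert responses: 100 students x 6 items per construct.
instructional_design = rng.integers(1, 6, size=(100, 6)).astype(float)
academic_performance = rng.integers(1, 6, size=(100, 6)).astype(float)

print(f"alpha (instructional design): {cronbach_alpha(instructional_design):.2f}")

# Construct scores = mean of items; Pearson r between the two constructs.
r, p = stats.pearsonr(instructional_design.mean(axis=1),
                      academic_performance.mean(axis=1))
print(f"r = {r:.2f}, p = {p:.3f}")
```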
Procedia PDF Downloads 174
267 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components
Authors: Francesca Gullo, Paola Palmero, Massimo Messori
Abstract:
Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood materials two approaches can be used: i) bottom-up and ii) top-down. Through the second method, the color of natural wood samples is lightened through a chemical bleaching process that acts on chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolics, and carbonyl groups. These chromophoric units form complex conjugate systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive (hydrosulfite) or oxidative (hydrogen peroxide) is commonly used to alter or remove the groups and chromophore systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls by monitoring both chemical treatments and process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of wood light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two-step approaches: i) the polymer impregnation method and ii) the densification method. For the polymer impregnation method, the wood scaffold is treated with polymers having a corresponding refractive index (e.g., PMMA and epoxy resins) under vacuum to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to have a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or pollutant agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which results only deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for infiltration of biobased polymers while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites
Procedia PDF Downloads 54
266 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating the significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of the normal patellar tracking is the patellofemoral ligaments and patellar tendon damage. Improved knowledge of the damage mechanism mediating ligaments and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure precisely to the tropocollagen molecules and their connection density. We argue defining a clear frame starting from the bottom (micro level) to up (macro level) in the hierarchies of the soft tissue may elucidate the essential underpinning on the state of the ligaments damage. To do so, in this study a multiscale fibril reinforced hyper elastoplastic Finite Element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stresses/strains patellofemoral ligaments and tendon response. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function obtained through a Coarse-graining procedure linking nanoscale collagen features and the tissue level materials properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model then implemented into a real structure of patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters the calculated axial stress lies well with the experimental measurement with a coefficient of determination (R2) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon respectively. The ‘best’ prediction of the yielding strength and strain as compared with the reported experimental data yielded when the cross-link density between the tropocollagen molecule of the fibril equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation of the patellofemoral ligaments was located at the femoral insertions while the damage of the patellar tendon happened in the middle of the structure. These predicted finding showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Also, damage initiation and propagation were documented with this model, which were in satisfactory agreement with earlier observation. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, predicted depending to the tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damaging process or repair attempt compared with certain published studies.Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
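The abstract above describes a fibril-reinforced model in which the fibril yield strength depends on tropocollagen cross-link density and the tissue-level response follows a rule of mixtures. The sketch below shows only the skeleton of such a calculation for a single uniaxial strain; the linear dependence of yield stress on cross-link density, the elastic-perfectly-plastic idealization, and all numerical values are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

def fibril_stress(strain, e_fibril, beta, sigma0=5.0, dsigma_dbeta=3.0):
    """Elastic-perfectly-plastic fibril stress (MPa) for a given uniaxial strain.

    beta: cross-link density between tropocollagen molecules (dimensionless);
    the yield stress is assumed to grow linearly with beta (illustrative only).
    """
    sigma_yield = sigma0 + dsigma_dbeta * beta
    return np.minimum(e_fibril * strain, sigma_yield)

def tissue_stress(strain, beta, phi_fibril=0.6, e_fibril=350.0, e_matrix=1.0):
    """Rule of mixtures: volume-weighted sum of fibril and ground-matrix stress."""
    return (phi_fibril * fibril_stress(strain, e_fibril, beta)
            + (1.0 - phi_fibril) * e_matrix * strain)

strains = np.linspace(0.0, 0.15, 6)
for beta in (5.5, 12.0):  # cross-link densities reported for ligament vs. tendon
    values = [round(float(tissue_stress(e, beta)), 1) for e in strains]
    print(f"beta = {beta}: stress (MPa) = {values}")
```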
Procedia PDF Downloads 128
265 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries
Authors: Kelly S. Parkinson, Katherine Y. Teh-White
Abstract:
Futureye has a bespoke social licence to operate methodology which has successfully secured community approval and commercial return for fisheries which have faced regulatory and financial risk. This unique approach to fisheries management focuses on delivering improved social and environmental outcomes to support the fishing industry make steps towards achieving the United Nations SDGs. An SLO is the community’s implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps you to anticipate and measure community concerns around your operations – leading to more predictable and sensible policy outcomes that will not jeopardise your commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage their supply chain. Futureye applied our tested social licence to operate (SLO) methodology to help Austral Fisheries who was being attacked by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirates were making the overall catch unsustainable. Austral wanted to be carbon neutral. SLO provides a lens on the risk that helps industries and companies act before regulatory and political risk escalates. To do this assessment, we have a methodology that assesses the risk that we can then translate into a process to create a strategy. 1) Audience: we understand the drivers of change and the transmission of those drivers across all audience segments. 2) Expectation: we understand the level of social norming of changing expectations. 3) Outrage: we understand the technical and perceptual aspects of risk and the opportunities to mitigate these. 4) Inter-relationships: we understand the political, regulatory, and reputation system so that we can understand the levers of change. 5) Strategy: we understand whether the strategy will achieve a social licence through bringing the internal and external stakeholders on the journey. Futureye’s SLO methodologies helped Austral to understand risks and opportunities to enhance its resilience. Futureye reviewed the issues, assessed outrage and materiality and mapped SLO threats to the company. Austral was introduced to a new way that it could manage activism, climate action, and responsible consumption. As a result of Futureye’s work, Austral worked closely with Sea Shepherd who was campaigning against pirates illegally fishing Patagonian Toothfish as well as international governments. In 2016 Austral launched the world’s first carbon neutral fish which won Austral a thirteen percent premium for tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy and carbon neutral. Austral’s position as a leader in sustainable development has opened doors for retailers all over the world. Futureye’s SLO methodology can identify the societal, political and regulatory risks facing fisheries and position them to proactively address the issues and become an industry leader in sustainability.Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development
Procedia PDF Downloads 120
264 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing ultimately is tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors are capable of being observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data comes from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations of atmospheric conditions throughout the data collection. To do this, we ran an open source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
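The abstract above reports that standard machine-learning classifiers identify tree species from labeled reflectance spectra with >95% accuracy. The sketch below shows the general shape of such a pixel-wise classification workflow with a random-forest classifier; the array shapes and band count (472) follow the abstract's description, but the placeholder data, species labels, and model choice are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical labeled set: reflectance spectra (472 bands, 400-2500 nm) for pixels
# matched to stem-mapped trees; labels are species codes.
rng = np.random.default_rng(42)
n_pixels, n_bands = 2000, 472
spectra = rng.random((n_pixels, n_bands))        # placeholder reflectance values
species = rng.integers(0, 5, size=n_pixels)      # placeholder species codes

X_train, X_test, y_train, y_test = train_test_split(
    spectra, species, test_size=0.25, stratify=species, random_state=0)

clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# With real, spectrally distinct species the abstract reports >95% accuracy;
# random placeholder data will of course not reproduce that figure.
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```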
Procedia PDF Downloads 129
263 Influence of Dryer Autumn Conditions on Weed Control Based on Soil Active Herbicides
Authors: Juergen Junk, Franz Ronellenfitsch, Michael Eickermann
Abstract:
Appropriate weed management in autumn is a prerequisite for an economically successful harvest in the following year. In Luxembourg, oilseed rape, wheat, and barley are sown from August until October, accompanied by chemical weed control with soil-active herbicides, depending on the state of the weeds and the meteorological conditions. Based on regular groundwater and surface water analyses, high levels of contamination by transformation products of the respective herbicide compounds have been found in Luxembourg. The ideal conditions for incorporating soil-active herbicides are single rain events. Weed control may be reduced if application is made when weeds are under drought stress or if repeated light rain events are followed by dry spells, because the herbicides tend to bind tightly to the soil particles. These effects have been frequently reported for Luxembourg in recent years. In the framework of a multisite long-term field experiment (EFFO), weed monitoring, plant observations, and corresponding meteorological measurements were conducted. Long-term time series (1947-2016) from the SYNOP station Findel-Airport (WMO ID = 06590) showed a decrease in the number of days with precipitation. As the total precipitation amount has not significantly changed, this indicates a trend towards rain events with higher intensity. All analyses are based on decades (10-day periods) for September and October of each individual year. To assess the future meteorological conditions for Luxembourg, two different approaches were applied. First, multi-model ensembles from the CORDEX experiments (spatial resolution ~12.5 km; transient projections until 2100) were analysed for two different Representative Concentration Pathways (RCP8.5 and RCP4.5), covering the time span from 2005 until 2100. The multi-model ensemble approach allows the uncertainties to be quantified and the differences between the two emission scenarios to be assessed. Second, to assess smaller-scale differences within the country, a high-resolution model projection using the COSMO-LM model was used (spatial resolution 1.3 km). To account for the higher computational demands caused by the increased spatial resolution, only 10-year time slices were simulated (reference period 1991-2000; near future 2041-2050; far future 2091-2100). Statistically significant trends towards higher air temperatures, +1.6 K for September (+5.3 K far future) and +1.3 K for October (+4.3 K), were predicted for the near future compared to the reference period. Precipitation simultaneously decreased by 9.4 mm (September) and 5.0 mm (October) for the near future, and by 49 mm (September) and 10 mm (October) in the far future. Besides the monthly values, the decades were also analyzed for the two future time periods of the CLM model. For all decades of September and October, the number of days with precipitation decreased in the projected near and far future. Changes in meteorological variables such as air temperature and precipitation have already induced transformations in the weed communities (composition, late emergence, etc.) of arable ecosystems in Europe. Therefore, adaptations of agronomic practices as well as effective weed control strategies must be developed to maintain crop yield.
Keywords: CORDEX projections, dry spells, ensembles, weed management
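The analyses described above are based on 10-day periods ("decades") of September and October and on trends in the number of days with precipitation. The sketch below illustrates that kind of aggregation and a simple linear trend test on a daily precipitation series; the synthetic data, the 0.1 mm wet-day threshold, and the use of ordinary least squares are assumptions for illustration, not the study's exact processing chain.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical daily precipitation series (mm) for 1947-2016.
dates = pd.date_range("1947-01-01", "2016-12-31", freq="D")
rng = np.random.default_rng(7)
precip = pd.Series(rng.gamma(shape=0.4, scale=5.0, size=len(dates)), index=dates)

# Keep September and October; assign each day to a 10-day "decade" (1, 2, 3).
autumn = precip[precip.index.month.isin([9, 10])]
decade = np.minimum((autumn.index.day - 1) // 10 + 1, 3)

# Count wet days (>= 0.1 mm, an assumed threshold) per year, month and decade.
wet = (autumn >= 0.1).groupby(
    [autumn.index.year, autumn.index.month, decade]).sum()
wet.index.names = ["year", "month", "decade"]

# Linear trend in wet-day counts for, e.g., the first decade of September.
series = wet.xs((9, 1), level=("month", "decade"))
slope, intercept, r, p, se = stats.linregress(series.index.values, series.values)
print(f"trend: {slope:.3f} wet days per year (p = {p:.3f})")
```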
Procedia PDF Downloads 235
262 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
The Reynolds-averaged Navier-Stokes (RANS) model is a popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in both industry and the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is limited in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptance among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases such as swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad interest in aerodynamic research and industrial applications. The PANS equations, being derived from the base RANS equations, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS' capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The shortcomings of LEVMs have inspired the development of non-linear eddy-viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work comprises three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in canonical cases of separated, non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous-flow evaluation will comprise the study of the influence of the PANS filter-width control parameter on the turbulent stresses, homogeneous analysis performed over typical velocity fields, and asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow cases will include the study of mean integrated quantities and various instantaneous flow-field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.
Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
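The filter-width control parameter mentioned above is the PANS ratio of unresolved to total turbulent kinetic energy, f_k (with a corresponding dissipation ratio f_eps). A brief sketch of the standard PANS k-epsilon relations is given below to show how this parameter modifies the model coefficients and the unresolved eddy viscosity; the coefficient values are the usual standard k-epsilon constants, and the snippet is illustrative rather than part of the OpenFOAM implementation described in the abstract.

```python
def pans_coefficients(k_u, eps_u, f_k, f_eps=1.0,
                      c_eps1=1.44, c_eps2=1.92, c_mu=0.09):
    """Standard PANS k-eps relations (Girimaji-type formulation).

    f_k   = k_u / k     : ratio of unresolved to total turbulent kinetic energy
    f_eps = eps_u / eps : ratio of unresolved to total dissipation (~1 at high Re)
    """
    # Modified destruction coefficient of the unresolved-dissipation equation
    c_eps2_star = c_eps1 + (f_k / f_eps) * (c_eps2 - c_eps1)
    # Unresolved ("sub-filter") eddy viscosity
    nu_u = c_mu * k_u ** 2 / eps_u
    return c_eps2_star, nu_u

# f_k = 1 recovers RANS; lowering f_k releases more resolved scales.
for f_k in (1.0, 0.6, 0.4):
    c_star, nu_u = pans_coefficients(k_u=0.05, eps_u=0.1, f_k=f_k)
    print(f"f_k = {f_k}: C_eps2* = {c_star:.3f}, nu_u = {nu_u:.4f}")
```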
Procedia PDF Downloads 145
261 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As the components are getting integrated, the devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable when the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks or tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each having several unique good features, but no single tool and framework can satisfy all of the testing needs for embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods include developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused, the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in environment setup for testing, scalability, and maintenance. A desirable strategy is certainly one that is focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle during phases of testing and across a family of products. To overcome the stated challenges of the conventional method and offer the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed, which can be deployed in embedded system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) Time-to-market: accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing and device drivers); (2) Reusability: framework components isolated from the platform-specific HW initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) Effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) Continuous integration: pre-integrated with Jenkins, which enables continuous testing and an automated software update feature.
Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
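As an illustration of the reusability idea (framework components isolated from platform-specific initialization and configuration), a board-level test can be written against a thin hardware-abstraction interface so the same test runs unchanged on different platforms. The sketch below is hypothetical and is not taken from the ETF implementation described in the abstract; the class, function and register names are placeholders.

# Hypothetical sketch of a platform-independent peripheral test.
# The HAL (hardware abstraction layer) is injected per board, so the same
# test case can run unchanged on different hardware platforms.
from dataclasses import dataclass

@dataclass
class I2CDevice:
    bus: int
    address: int

class BoardHAL:
    """Placeholder interface that each board-specific plugin implements."""
    def i2c_read(self, dev: I2CDevice, register: int) -> int:
        raise NotImplementedError

def test_temperature_sensor_responds(hal: BoardHAL) -> None:
    # Generic bring-up check: the sensor's ID register returns the expected value.
    sensor = I2CDevice(bus=1, address=0x48)      # illustrative bus/address
    chip_id = hal.i2c_read(sensor, register=0x0F)
    assert chip_id == 0xB1, f"unexpected chip id: {chip_id:#x}"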
Procedia PDF Downloads 145
260 Prospective Museum Visitor Management Based on Prospect Theory: A Pragmatic Approach
Authors: Athina Thanou, Eirini Eleni Tsiropoulou, Symeon Papavassiliou
Abstract:
The problem of museum visitor experience and congestion management – in various forms – has come increasingly under the spotlight over the last few years, since overcrowding can significantly decrease the quality of visitors’ experience. Evidence suggests that on busy days the amount of time a visitor spends inside a crowded house museum can fall by up to 60% compared to a quiet mid-week day. In this paper, we consider the aforementioned problem by treating museums as evolving social systems that induce constraints. However, in a cultural heritage space, as opposed to the majority of social environments, the momentum of the experience is primarily controlled by the visitor himself. Visitors typically behave selfishly regarding the maximization of their own Quality of Experience (QoE), commonly expressed through a utility function that takes several parameters into consideration, with crowd density and waiting/visiting time being among the key ones. In such a setting, congestion occurs when either the utility of one visitor decreases due to the behavior of other persons, or when costs of undertaking an activity rise due to the presence of other persons. We initially investigate how visitors’ behavioral risk attitudes, as captured and represented by prospect theory, affect their decisions in resource sharing settings, where visitors’ decisions and experiences are strongly interdependent. Different from the majority of existing studies and literature, we highlight that visitors are not risk neutral utility maximizers, but they demonstrate risk-aware behavior according to their personal risk characteristics. In our work, exhibits are organized into two groups: a) “safe exhibits” that correspond to less congested ones, where the visitors receive guaranteed satisfaction in accordance with the visiting time invested, and b) common pool of resources (CPR) exhibits, which are the most popular exhibits with possibly increased congestion and uncertain outcome in terms of visitor satisfaction. A key difference is that the visitor satisfaction due to CPR strongly depends not only on the invested time decision of a specific visitor, but also on that of the rest of the visitors. In the latter case, the over-investment in time, or equivalently the increased congestion, potentially leads to “exhibit failure”, interpreted as the visitors gaining no satisfaction from their observation of this exhibit due to high congestion. We present a framework where each visitor in a distributed manner determines his time investment in safe or CPR exhibits to optimize his QoE. Based on this framework, we analyze and evaluate how visitors, acting as prospect-theoretic decision-makers, respond and react to the various pricing policies imposed by the museum curators. Based on detailed evaluation results and experiments, we present interesting observations regarding the impact of several parameters and characteristics such as visitor heterogeneity and use of alternative pricing policies, on scalability, user satisfaction, museum capacity, resource fragility, and operation point stability. Furthermore, we study and present the effectiveness of alternative pricing mechanisms, when used as implicit tools, to deal with the congestion management problem in the museums, and potentially decrease the exhibit failure probability (fragility), while considering the visitor risk preferences.Keywords: museum resource and visitor management, congestion management, prospect theory, cyber-physical social systems
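For context on the risk-aware behavior modeled above, prospect theory typically evaluates gains and losses relative to a reference point with an S-shaped value function and loss aversion. A standard textbook form is given below as a general illustration; it is not the specific utility function used by the authors:

v(x) = x^{\alpha} for x \geq 0, and v(x) = -\lambda (-x)^{\beta} for x < 0,

with 0 < \alpha, \beta \leq 1 capturing diminishing sensitivity and \lambda > 1 capturing loss aversion, where x is the gain or loss in experienced satisfaction (or invested time) relative to the visitor's reference point. Risk-aware visitors weigh the uncertain payoff of congested CPR exhibits against the guaranteed payoff of safe exhibits through such a function rather than through expected utility alone.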
Procedia PDF Downloads 184
259 Exploring Perspectives and Complexities of E-tutoring: Insights from Students Opting out of Online Tutor Service
Authors: Prince Chukwuneme Enwereji, Annelien Van Rooyen
Abstract:
In recent years, technology integration in education has transformed the learning landscape, particularly in online institutions. One technological advancement that has gained popularity is e-tutoring, which offers personalised academic support to students through online platforms. While e-tutoring has become well-known and has been adopted to promote collaborative learning, there are still students who do not use these services for various reasons. However, little attention has been given to understanding the perspectives of students who have not utilized these services. The research objectives include identifying the perceived benefits that non-e-tutoring students believe e-tutoring could offer, such as enhanced academic support, personalized learning experiences, and improved performance. Additionally, the study explored the potential drawbacks or concerns that non-e-tutoring students associate with e-tutoring, such as concerns about efficacy, a lack of face-to-face interaction, and platform accessibility. The study adopted a quantitative research approach with a descriptive design to gather and analyze data on non-e-tutoring students' perspectives. Online questionnaires were employed as the primary data collection method, allowing for the efficient collection of data from many participants. The collected data was analyzed using the Statistical Package for the Social Sciences (SPSS). Ethical principles such as informed consent, anonymity of responses and protection of respondents against harm were upheld. Findings indicate that non-e-tutoring students perceive a sense of control over their own pace of learning, suggesting a preference for self-directed learning and the ability to tailor their educational experience to their individual needs and learning styles. They also exhibit high levels of motivation, believe in their ability to effectively participate in their studies and organize their academic work, and feel comfortable studying on their own without the help of e-tutors. However, non-e-tutoring students feel that e-tutors do not sufficiently address their academic needs and lack engagement. They also perceive a lack of clarity in the roles of e-tutors, leading to uncertainty about their responsibilities. In terms of communication, students feel overwhelmed by the volume of announcements and find repetitive information frustrating. Additionally, some students face challenges with their internet connection and associated costs, which can hinder their participation in online activities. Furthermore, non-e-tutoring students express a desire for interactions with their peers and a sense of belonging to a group or team. They value opportunities for collaboration and teamwork in their learning experience, as well as the fostering of social interactions and the creation of a sense of community in online learning environments. This study recommended that students seek alternative support systems by reaching out to professors or academic advisors for guidance and clarification. Developing self-directed learning skills is essential, empowering students to take charge of their own learning through setting objectives, creating their own study plans, and utilising resources. For HEIs, it was recommended that they ensure that a variety of support services are available to cater to the needs of all students, including non-e-tutoring students.
HEIs should also ensure easy access to online resources, promote a supportive community, and regularly evaluate and adapt their support techniques to meet students' changing requirements.Keywords: online tutor, student support, online education, educational practices, distance education
Procedia PDF Downloads 82
258 Synthesis and Luminescent Properties of Barium-Europium (III) Silicate Systems
Authors: A. Isahakyan, A. Terzyan, V. Stepanyan, N. Zulumyan, H. Beglaryan
Abstract:
The involvement of silica hydrogel derived from the serpentine minerals (Mg(Fe))6[Si4O10](OH)8 as a source of silicon dioxide in the SiO2–NaOH–BaCl2–H2O system results, via one-hour stirring of the boiling suspension, in the precipitation of intermediates that, on heating up to 800 °C, crystallize into a product composed of barium orthosilicate Ba2SiO4 and metasilicate BaSiO3. Based on these positive results, it was decided to adapt this approach to inserting europium (III) ions into the structure of the synthesized compounds. Intermediates previously precipitated in the silica hydrogel–NaOH–BaCl2–Eu(NO3)3 system via one-hour stirring at room temperature underwent one-hour heat treatment at different temperatures (600–1200 °C). Prior to calcination, the suspension produced in the mixer was heated on a boiling-water bath until a powder-like sample was obtained. When the silica hydrogel was metered, its SiO2 content of 5.8 % was taken into consideration in order to guarantee molar ratios of both SiO2 to BaO and SiO2 to Na2O equal to 1:2. BaCl2 and Eu(NO3)3 reagents were weighed so that the formation of appropriate compositions was guaranteed. Samples including various concentrations of Eu3+ ions (1.25, 2.5, 3.75, 5, 6.35, 8.65, 10, 17.5, 18.75 and 20 mol%) were synthesized by the described method. Luminescence excitation and emission spectra of the products were recorded on an Agilent Cary Eclipse fluorescence spectrophotometer using an Agilent xenon flash lamp (80 Hz) as the excitation source (scanning rate = 30 nm/min, excitation and emission slit width = 5 nm, excitation filter set to auto, emission filter set to auto and PMT detector voltage = 800 V). Prior to optical property measurements, each of the powder samples was placed in the solid sample holder. X-ray powder diffraction (XRPD) measurements were made on the SmartLab SE diffractometer. Emission spectra recorded for all the samples at an excitation wavelength of 394 nm exhibit peaks centered at around 536, 555, 587, 614, 653, 690 and 702.5 nm. The most intense emission peak is observed at 614 nm due to the 5D0→7F2 transition of europium (III) ions. Luminescence intensity reaches its maximum for 17.5 mol% Eu3+ and heat treatment at 1200 °C. The XRPD patterns revealed that the diffraction peaks recorded for this sample are identical to NaBa6Nd(SiO4)4 reflections. As Nd-containing reagents were not involved in the synthesis, the maximum luminescence intensity is most likely conditioned by the formation of NaBa6Eu(SiO4)4, whose reflections are not available in the 2024 ICDD-JCPDS crystallographic database. Up to 2.5 mol% Eu3+, the samples demonstrate the phases corresponding to the Ba2SiO4 and BaSiO3 standards. A subsequent increase of the europium (III) concentration in the system leads to NaBa6Eu(SiO4)4 formation along with Ba2SiO4 and BaSiO3. The NaBa6Eu(SiO4)4 share gradually increases and, starting from 17.5 mol% and above, only the NaBa6Eu(SiO4)4 phase is registered. Thus, the variation of the europium (III) concentration in the silica hydrogel–NaOH–BaCl2–Eu(NO3)3 system allows producing, by the precipitation method, products composed of europium (III)-doped Ba2SiO4 and BaSiO3 and/or NaBa6Eu(SiO4)4, distinguished by different luminescent properties. The work was supported by the Science Committee of RA in the framework of research projects № 21T-1D131 and № 21SCG-1D013.Keywords: europium (III)-doped barium ortho- Ba2SiO4 and metasilicates BaSiO₃, NaBa₆Eu(SiO₄)₄, luminescence, precipitation method
Procedia PDF Downloads 39
257 Engineering Design of a Chemical Launcher: An Interdisciplinary Design Activity
Authors: Mei Xuan Tan, Gim-Yang Maggie Pee, Mei Chee Tan
Abstract:
Academic performance, in the form of scoring high grades in enrolled subjects, is not the only significant trait in achieving success. Engineering graduates with experience in working on hands-on projects in a team setting are highly sought after in industry. Such projects are typically real-world problems that require the integration and application of knowledge and skills from several disciplines. In a traditional university setting, subjects are taught in a silo manner with no cross participation from other departments or disciplines. This may lead to knowledge compartmentalization, with students unable to understand and connect the relevance and applicability of the subject. University instructors thus see this integration across disciplines as a challenging task as they aim to better prepare students in understanding and solving problems for work or future studies. To improve students’ academic performance and to cultivate various skills such as critical thinking, there has been a gradual uptake in the use of an active learning approach in introductory science and engineering courses, where lecturing is traditionally the main mode of instruction. This study aims to discuss the implementation and experience of a hands-on, interdisciplinary project that involves all the four core subjects taught during the term at the Singapore University of Technology and Design (SUTD). At SUTD, an interdisciplinary design activity, named 2D, is integrated into the curriculum to help students reinforce the concepts learnt. A student enrolled in SUTD experiences his or her first 2D in Term 1. This activity, which spans one week in Week 10 of Term 1, highlights the application of chemistry, physics, mathematics, humanities, arts and social sciences (HASS) in designing an engineering product solution. The activity theme for Term 1 2D revolved around “work and play”. Students, in teams of 4 or 5, used a scaled-down model of a chemical launcher to launch a projectile across the room. It involved the use of a small chemical combustion reaction between ethanol (a highly volatile fuel) and oxygen. This reaction generated a sudden and large increase in gas pressure built up in a closed chamber, resulting in rapid gas expansion and ejection of the projectile out of the launcher. Students discussed and explored the meaning of play in their lives in HASS class while the engineering aspects of a combustion system to launch an object using underlying principles of energy conversion and projectile motion were revisited during the chemistry and physics classes, respectively. Numerical solutions for the distance travelled by the projectile launched by the chemical launcher, taking into account drag forces, were developed during the mathematics classes. At the end of the activity, students developed skills in report writing, data collection and analysis. Specific to this 2D activity, students gained an understanding and appreciation of the application and interdisciplinary nature of science, engineering and HASS. More importantly, students were exposed to design and problem solving, where human interaction and discussion are important yet challenging in a team setting.Keywords: active learning, collaborative learning, first year undergraduate, interdisciplinary, STEAM
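The drag-corrected range calculation mentioned above can be reproduced with a simple numerical integration. The sketch below assumes a spherical projectile with quadratic air drag and uses illustrative values for mass, size, launch speed and angle; none of these parameters are given in the abstract.

# Minimal sketch: projectile range with quadratic drag, integrated with explicit Euler.
# Mass, drag coefficient, launch speed and angle are illustrative assumptions.
import math

m, g = 0.05, 9.81                            # projectile mass (kg), gravity (m/s^2)
rho, Cd, A = 1.2, 0.47, math.pi * 0.02**2    # air density, drag coefficient, frontal area (r = 2 cm)
v0, angle = 30.0, math.radians(45)           # launch speed (m/s) and elevation angle

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
dt = 1e-4
while y >= 0.0:
    v = math.hypot(vx, vy)
    k = 0.5 * rho * Cd * A * v               # quadratic drag factor, applied opposite to velocity
    ax = -(k / m) * vx
    ay = -g - (k / m) * vy
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"Range with drag: {x:.2f} m (vacuum range: {v0**2 * math.sin(2 * angle) / g:.2f} m)")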
Procedia PDF Downloads 122
256 Challenges and Proposals for Public Policies Aimed At Increasing Energy Efficiency in Low-Income Communities in Brazil: A Multi-Criteria Approach
Authors: Anna Carolina De Paula Sermarini, Rodrigo Flora Calili
Abstract:
Energy Efficiency (EE) needs investments, new technologies, greater awareness and management on the side of citizens and organizations, and more planning. However, this issue is usually remembered and discussed only in moments of energy crises, and opportunities are missed to take better advantage of the potential of EE in the various sectors of the economy. In addition, there is little concern about the subject among the less favored classes, especially in low-income communities. Accordingly, this article presents suggestions for public policies that aim to increase EE for low-income housing and communities based on international and national experiences. After reviewing the literature, eight policies were listed and, to evaluate them, a multicriteria decision model was developed using the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) methods, combined with fuzzy logic. Nine experts analyzed the policies according to nine criteria: economic impact, social impact, environmental impact, previous experience, the difficulty of implementation, possibility/ease of monitoring and evaluating the policies, expected impact, political risks, and public governance and sustainability of the sector. The results found, in order of preference, are (i) Incentive program for equipment replacement; (ii) Community awareness program; (iii) EE program with a greater focus on low income; (iv) Staggered and compulsory certification of social interest buildings; (v) Programs for the expansion of smart metering, energy monitoring and digitalization; (vi) Financing program for construction and retrofitting of houses with an emphasis on EE; (vii) Income tax deduction for investment in EE projects in low-income households made by companies; (viii) White certificates of energy for low-income households. First, the policy of equipment replacement has been employed in Brazil and around the world and has proven effective in promoting EE. For implementation, efforts are needed from the federal and state governments, which can encourage companies to reduce prices and provide some type of aid for the purchase of such equipment. In second place is the community awareness program, promoting socio-educational actions on EE concepts along with energy conservation tips. This policy is simple to implement and has already been used by many distribution utilities in Brazil. It can be carried out through bids defined by the government in specific areas, being executed by third-sector companies with public and private resources. Third on the list is the proposal to continue the Energy Efficiency Program (which obliges electric energy companies to allocate resources for research in the area) by suggesting the return of the mandatory investment of 60% of the resources in projects for low-income households. It is also relatively simple to implement, requiring efforts by the federal government to make it mandatory and compliance on the part of the distributors. The success of the suggestions depends on changes in the established rules and efforts from the interested parties. For future work, we suggest the development of pilot projects in low-income communities in Brazil and the application of other multicriteria decision support methods to compare the results obtained in this study.Keywords: energy efficiency, low-income community, public policy, multicriteria decision making
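To make the ranking step concrete, the core TOPSIS computation (without the fuzzy extension and without the AHP-derived weights used in the study) can be sketched in a few lines. The decision matrix, weights and benefit/cost flags below are illustrative placeholders, not the study's data.

# Minimal TOPSIS sketch: rank alternatives by relative closeness to the ideal solution.
# Rows are policies, columns are criteria; weights and benefit/cost flags are illustrative.
import numpy as np

X = np.array([[7.0, 5.0, 3.0],        # policy A scores on three criteria
              [6.0, 8.0, 4.0],        # policy B
              [9.0, 4.0, 6.0]])       # policy C
w = np.array([0.5, 0.3, 0.2])         # criterion weights (e.g., obtained from AHP)
benefit = np.array([True, True, False])   # False marks a cost criterion (lower is better)

R = X / np.linalg.norm(X, axis=0)     # vector normalization of each criterion column
V = R * w                             # weighted normalized decision matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("Ranking (best first):", np.argsort(-closeness))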
Procedia PDF Downloads 117
255 Analyzing the Effectiveness of Elderly Design and the Impact on Sustainable Built Environment
Authors: Tristance Kee
Abstract:
With an unprecedented increase in the elderly population around the world, the severe lack of quality housing and health-and-safety provisions to serve this cohort cannot be ignored any longer. Many elderly citizens, especially singletons, live in unsafe housing conditions with poorly executed planning and design. Some suffer from deteriorating mobility, sight and general alertness, and their sub-standard living conditions further hinder their daily existence. This research explains how concepts such as Universal Design and Co-Design operate in a high-density city such as Hong Kong, China, where innovative design can become an alternative solution where government and the private sector fail to provide quality elderly-friendly facilities to promote sustainable urban development. Unlike other elderly research, which focuses more on housing policies, nursing care and theories, this research takes a more progressive approach by providing an in-depth impact assessment on how innovative design can offer practical solutions for creating a more sustainable built environment. The research objectives are to: 1) explain the relationship between innovative design for the elderly and a healthier and sustainable environment; 2) evaluate the impact of human ergonomics with the use of universal design; and 3) explain how innovation can enhance the sustainability of a city by improving citizens’ sight, sound, walkability and safety within the ageing population. The research adopts both qualitative and quantitative methodologies to examine ways to improve the elderly population’s relationship to our built environment. In particular, the research utilizes data collected from a questionnaire survey and focus group discussions to obtain inputs from various stakeholders, including designers, operators and managers related to public housing, community facilities and overall urban development. In addition to feedback from end-users and stakeholders, a thorough analysis of existing elderly housing facilities and Universal Design provisions is conducted to evaluate their adequacy. To echo the theme of this conference on Innovation and Sustainable Development, this research examines the effectiveness of innovative design in a risk-benefit factor assessment. To test the hypothesis that innovation can cater for sustainable development, the research evaluated the health improvement of a sample of 150 elderly persons over a period of eight months. Their health performance, including mobility, speech and memory, was monitored and recorded on a regular basis to assess whether the use of innovation has an impact on improving health and home safety for an elderly cohort. This study was supported by district community centers under the auspices of the Home Affairs Bureau, which provided respondents for the questionnaire survey, a standardized evaluation mechanism, and professional health care staff for evaluating the performance impact. The research findings will be integrated to formulate design solutions such as innovative home products to improve the elderly's daily experience and safety, with a particular focus on the enhancement of sight, sound and mobility safety. Some policy recommendations and architectural planning recommendations related to Universal Design will also be incorporated into the research output for future planning of elderly housing and amenity provisions.Keywords: elderly population, innovative design, sustainable built environment, universal design
Procedia PDF Downloads 228
254 Cell-free Bioconversion of n-Octane to n-Octanol via a Heterogeneous and Bio-Catalytic Approach
Authors: Shanna Swart, Caryn Fenner, Athanasios Kotsiopoulos, Susan Harrison
Abstract:
Linear alkanes are produced as by-products from the increasing use of gas-to-liquid fuel technologies for synthetic fuel production and offer great potential for value addition. Their current use as low-value fuels and solvents does not maximize this potential. Therefore, attention has been drawn towards direct activation of these aliphatic alkanes to more useful products such as alcohols, aldehydes, carboxylic acids and derivatives. Cytochrome P450 monooxygenases (P450s) can be used for activation of these aliphatic alkanes using whole-cell or cell-free systems. Some limitations of whole-cell systems include reduced mass transfer and stability, and possible side reactions. Since the P450 systems are little studied as cell-free systems, they form the focus of this study. Challenges of a cell-free system include co-factor regeneration, substrate availability and enzyme stability. Enzyme immobilization offers a positive outlook on this dilemma, as it may enhance stability of the enzyme. In the present study, two different P450s (CYP153A6 and CYP102A1) as well as the relevant accessory enzymes required for electron transfer (ferredoxin and ferredoxin reductase) and co-factor regeneration (glucose dehydrogenase) have been expressed in E. coli and purified by metal affinity chromatography. Glucose dehydrogenase (GDH) was used as a model enzyme to assess the potential of various enzyme immobilization strategies, including surface attachment on MagReSyn® microspheres with various functionalities and on electrospun nanofibers, self-assembly based methods forming Cross Linked Enzymes (CLE), Cross Linked Enzyme Aggregates (CLEAs) and spherezymes, as well as immobilization in a sol-gel. The nanofibers were synthesized by electrospinning, which required the building of an electrospinning machine. The nanofiber morphology has been analyzed by SEM, and binding will be further verified by FT-IR. Covalent attachment-based methods showed limitations: only ferredoxin reductase and GDH retained activity after immobilization, which was largely attributed to insufficient electron transfer and inactivation caused by the crosslinkers (60% and 90% relative activity loss for the free enzyme when using 0.5% glutaraldehyde and glutaraldehyde/ethylenediamine (1:1 v/v), respectively). So far, initial experiments with GDH have shown the most potential when immobilized via its His-tag onto the surface of MagReSyn® microspheres functionalized with Ni-NTA. It was found that crude GDH could be simultaneously purified and immobilized with sufficient activity retention. Immobilized pure and crude GDH could be recycled 9 and 10 times, respectively, with approximately 10% activity remaining. The immobilized GDH was also more stable than the free enzyme after storage for 14 days at 4˚C. This immobilization strategy will also be applied to the P450s and optimized with regard to enzyme loading and immobilization time, as well as characterized and compared with the free enzymes. It is anticipated that the proposed immobilization set-up will offer enhanced enzyme stability (as well as reusability and easy recovery) and minimal mass transfer limitation, with continuous co-factor regeneration and minimal enzyme leaching.
All of these provide a positive outlook on this robust multi-enzyme system for the efficient activation of linear alkanes, as well as the potential for immobilization of various multiple enzymes, including multimeric enzymes, for different bio-catalytic applications beyond alkane activation.Keywords: alkane activation, cytochrome P450 monooxygenase, enzyme catalysis, enzyme immobilization
Procedia PDF Downloads 227
253 Impact of Water Interventions under WASH Program in the South-west Coastal Region of Bangladesh
Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman
Abstract:
This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the south-west coastal region of Bangladesh, namely Koyra in Khulna district and Shymnagar in Satkhira district. Being an explanatory study, a total of 200 households selected by applying a random sampling technique were interviewed using a structured interview schedule. The predicted probability suggests that around 62 percent of households lack year-round access to safe drinking water, and only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Besides, the majority (78 percent) of households have not achieved access on both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters of safe drinking water consumption per person per year (coefficient 0.01 at p < 0.1). As expected, lining-up time has a significant negative relationship with the dependent variables, i.e., for higher lining-up time, the probability of getting access becomes lower for both the SPHERE standard and year-round access variables. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity. On the other hand, as per the OLS regression results, for a one-minute increase in lining-up time, water consumption decreases by around 8 liters. Considering access to a Deep Tube Well (DTW) as the reference dummy, in the OLR the households under Pond Sand Filter (PSF), Shallow Tube Well (STW), Reverse Osmosis (RO) and Rainwater Harvesting System (RWHS) are respectively 37 percent, 29 percent, 61 percent and 27 percent less likely to ensure year-round access to water. In terms of health impact, different types of water-borne diseases such as diarrhea, cholera, and typhoid, caused by microbial impurities (i.e., bacteria and protozoa), are common among the coastal community. High turbidity and TDS in pond water, caused by the reduction of water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, creating health hazards. Meanwhile, excessive growth of algae in pond water, caused by excessive nitrate in drinking water, adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions needs to be increased at a reasonable distance, preferably within half a kilometer of the dwelling place, with community people involved in the installation process; collectively owned water interventions were found to be more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide investment in domestic water supply in the future.Keywords: access, impact, safe drinking water, Sphere standard, water interventions
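The OLS estimates reported above (for example, roughly 11 additional liters per person per year per BDT 1,000 of monthly income, and about 8 liters less per extra minute of lining-up time) follow the usual linear regression workflow. A minimal sketch with statsmodels is shown below; the data file and column names are illustrative assumptions, not the study's actual dataset.

# Minimal OLS sketch for annual per-capita water consumption (liters/person/year).
# File name and variable names are illustrative placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wash_households.csv")   # hypothetical survey extract
y = df["consumption_lpy"]
X = sm.add_constant(df[["income_bdt", "lineup_minutes", "household_size", "distance_km"]])

model = sm.OLS(y, X).fit()
print(model.summary())                    # coefficients, p-values, R-squared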
Procedia PDF Downloads 219
252 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to recognize the possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two selected types of plate application methodology used by maxillofacial surgeons were investigated in this work. Those patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry consisting of cortical and cancellous bone was created based on the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the real masticatory system behavior. The finite element mesh and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and bones. The influence of the initial compression of the connected bone parts or the gap between them was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three cases of loading, assuming a force of magnitude 100 N acting on the left molars, the right molars and the incisors, were investigated. The stress distribution within the connecting plate shows that the compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the material (titanium) yield limit. There are no significant differences between negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plating approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without the compression (neutral initial contact). For the initial prestressing, there is a visible significant stress increase around the fixing holes at the bottom mini-plate due to the assembly stress. The local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a visible strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of the mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
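The equivalent stress used in the comparisons above is the standard von Mises measure; in terms of principal stresses it reads (a textbook definition, included here only for reference):

\sigma_{vM} = \sqrt{ \tfrac{1}{2} \left[ (\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2 \right] },

and yielding of the titanium mini-plate is predicted where \sigma_{vM} reaches the yield limit of the alloy, which is why the comparison of the computed maxima against that limit is the relevant check for the plate and screws.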
Procedia PDF Downloads 246
251 International Indigenous Employment Empirical Research: A Community-Based Participatory Research Content Analysis
Authors: Melanie Grier, Adam Murry
Abstract:
Objective: Worldwide, Indigenous Peoples experience underemployment and poverty at disproportionately higher rates than non-Indigenous people, despite similar rates of employment seeking. Euro-colonial conquest and genocidal assimilation policies are implicated as perpetuating poverty, which research consistently links to health and wellbeing disparities. Many of the contributors to poverty, such as inadequate income and lack of access to medical care, can be directly or indirectly linked to underemployment. Calls have been made to prioritize Indigenous perspectives in Industrial-Organizational (I/O) psychology research, yet the literature on Indigenous employment remains scarce. What does exist is disciplinarily diverse, topically scattered, and lacking evidence of community-based participatory research (CBPR) practices, a research project approach which prioritizes community leadership, partnership, and betterment and reduces the potential for harm. Due to the harmful colonial legacy of extractive scientific inquiry "on" rather than "with" Indigenous groups, Indigenous leaders and research funding agencies advocate for academic researchers to adopt reparative research methodologies such as CBPR to be used when studying issues pertaining to Indigenous Peoples or individuals. However, the frequency and consistency of CBPR implementation within scholarly discourse are unknown. Therefore, this project’s goal is two-fold: (1) to understand what comprises CBPR in Indigenous research and (2) to determine if CBPR has been historically used in Indigenous employment research. Method: Using a systematic literature review process, sixteen articles about CBPR use with Indigenous groups were selected, and content was analyzed to identify key components comprising CBPR usage. An Indigenous CBPR components framework was constructed and subsequently utilized to analyze the Indigenous employment empirical literature. A similar systematic literature review process was followed to search for relevant empirical articles on Indigenous employment. A total of 120 articles were identified in six global regions: Australia, New Zealand, Canada, America, the Pacific Islands, and Greenland/Norway. Each empirical study was procedurally examined and coded for criteria inclusion using content analysis directives. Results: Analysis revealed that, in total, CBPR elements were used 14% of the time in Indigenous employment research. Most studies (n=69; 58%) neglected to mention using any CBPR components, while just two studies discussed implementing all sixteen (2%). The most significant determinant of overall CBPR use was community member partnership (CP) in the research process. Studies from New Zealand were most likely to use CBPR components, followed by Canada, Australia, and America. While CBPR use did increase slowly over time, meaningful temporal trends were not found. Further, CBPR use did not directly correspond with the total number of topical articles published that year. Conclusions: Community-initiated and engaged research approaches must be better utilized in employment studies involving Indigenous Peoples. 
Future research efforts must be particularly attentive to community-driven objectives and research protocols, emphasizing specific areas of concern relevant to the field of I/O psychology, such as organizational support, recruitment, and selection.Keywords: community-based participatory research, content analysis, employment, indigenous research, international, reconciliation, recruitment, reparative research, selection, systematic literature review
Procedia PDF Downloads 74
250 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation
Authors: Maria Lazari, Lorenzo Sanavia
Abstract:
The mechanics of unsaturated soils has been an active field of research in the last decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g., instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil might experience a reversal in the volumetric behaviour during wetting. The shear strength of soils also changes dramatically with changes in the degree of saturation, and a related engineering problem is slope failures caused by rainfall. Several state-of-the-art reviews of the topic have appeared over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, as well as the latest developments in the area of unsaturated soil modelling. However, only a few studies focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure as the elastoplastic laws proposed to deal with bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced with the incorporation of density-dependent stiffness and hardening coefficients in order to allow the modelling of the pycnotropy behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate it and illustrate the capability of the newly developed formulation. After the validation, the effectiveness of the viscoplastic model is displayed by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on slope stability is discussed.Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity
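As background for the rate-dependent behaviour described above, elasto-viscoplastic soil models are often written in a Perzyna-type overstress form. The expression below is a common general form given for illustration only; it does not reproduce the specific constitutive functions of the model proposed in the paper:

\dot{\varepsilon}^{vp} = \frac{1}{\eta} \, \langle \Phi(f) \rangle \, \frac{\partial g}{\partial \boldsymbol{\sigma}},

where \eta is a viscosity parameter, f the yield function, g the plastic potential, \Phi an overstress function (e.g., \Phi(f) = (f/f_0)^N), and the Macaulay brackets \langle \cdot \rangle switch viscoplastic flow on only when f > 0. Creep and relaxation then emerge naturally from the finite rate at which the viscoplastic strain relaxes the overstress.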
Procedia PDF Downloads 225
249 The Regulation of the Cancer Epigenetic Landscape Lies in the Realm of the Long Non-coding RNAs
Authors: Ricardo Alberto Chiong Zevallos, Eduardo Moraes Rego Reis
Abstract:
Pancreatic adenocarcinoma (PDAC) patients have a less than 10% 5-year survival rate. PDAC has no defined diagnostic and prognostic biomarkers. Gemcitabine is the first-line drug in PDAC and several other cancers. Long non-coding RNAs (lncRNAs) contribute to tumorigenesis and are potential biomarkers for PDAC. Although lncRNAs aren’t translated into proteins, they have important functions. LncRNAs can decoy or recruit proteins from the epigenetic machinery, act as microRNA sponges, participate in protein translocation through different cellular compartments, and even promote chemoresistance. The chromatin remodeling enzyme EZH2 is a histone methyltransferase that catalyzes the methylation of histone 3 at lysine 27, silencing local expression. EZH2 is ambivalent: it can also activate gene expression independently of its histone methyltransferase activity. EZH2 is overexpressed in several cancers and interacts with lncRNAs, being recruited to a specific locus. EZH2 can be recruited to activate an oncogene or silence a tumor suppressor. The misregulation of lncRNAs in cancer can result in the differential recruitment of EZH2 and in a distinct epigenetic landscape, promoting chemoresistance. The relevance of the EZH2-lncRNA interaction to chemoresistant PDAC was assessed by Real Time quantitative PCR (RT-qPCR) and RNA Immunoprecipitation (RIP) experiments with naïve and gemcitabine-resistant PDAC cells. The expression of several lncRNAs and EZH2 gene targets was evaluated by contrasting naïve and resistant cells. Selection of candidate genes was made by bioinformatic analysis and literature curation. Indeed, the resistant cell line showed higher expression of chemoresistance-associated lncRNAs and protein-coding genes. RIP detected lncRNAs interacting with EZH2 with varying intensity levels in the cell lines. During RIP, the nuclear fraction of the cells was incubated with an antibody for EZH2 and with magnetic beads. The RNA precipitated with the beads-antibody-EZH2 complex was isolated and reverse transcribed. The presence of candidate lncRNAs was detected by RT-qPCR, and the enrichment was calculated relative to INPUT (total lysate control sample collected before RIP). The enrichment levels varied across the several lncRNAs and cell lines. The EZH2-lncRNA interaction might be responsible for the regulation of chemoresistance-associated genes in multiple cancers. The relevance of the lncRNA-EZH2 interaction to PDAC was assessed by siRNA knockdown of a lncRNA, followed by the analysis of EZH2 target expression by RT-qPCR. Chromatin immunoprecipitation (ChIP) of EZH2 and H3K27me3 followed by RT-qPCR with primers for EZH2 targets also assesses the specificity of the EZH2 recruitment by the lncRNA. This is the first report of the interaction of EZH2 and the lncRNAs HOTTIP and PVT1 in chemoresistant PDAC. HOTTIP and PVT1 were described as promoting chemoresistance in several cancers, but the role of EZH2 is not clarified. For the first time, the lncRNA LINC01133 was detected in a chemoresistant cancer. The interaction of EZH2 with LINC02577, LINC00920, LINC00941, and LINC01559 has never been reported in any context. The novel lncRNA-EZH2 interactions regulate chemoresistance-associated genes in PDAC and might be relevant to other cancers.
Therapies targeting EZH2 alone weren’t successful, and a combinatorial approach also targeting the lncRNAs interacting with it might be key to overcoming chemoresistance in several cancers.Keywords: epigenetics, chemoresistance, long non-coding RNAs, pancreatic cancer, histone modification
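For reference, RIP enrichment relative to INPUT is commonly computed with a percent-of-input (ΔCt) calculation; the generic form below is given as an assumption about how such an analysis is typically done, not as a detail reported in the abstract:

\%\,\mathrm{Input} = 100 \times DF \times 2^{\,(Ct_{\mathrm{input}} - Ct_{\mathrm{IP}})},

where DF corrects for the fraction of lysate saved as INPUT (e.g., DF = 0.01 if 1% was kept), and Ct_{input} and Ct_{IP} are the qPCR cycle thresholds of the input and immunoprecipitated samples. Higher values indicate stronger co-precipitation of the candidate lncRNA with EZH2.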
Procedia PDF Downloads 96
248 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, this study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows in a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training records are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to get predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. Then the rules are validated with an external testing dataset including 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a subset of the observations, such as bootstrap resampling with an appropriate sample size.Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
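The co-occurrence idea described above can be illustrated with a small sketch that counts item pairs across observations and ranks them by a simple surprise score (observed co-occurrence frequency divided by the frequency expected under independence). This is a generic illustration only; it is not the commercial platform used in the study, and the feature names are made up.

# Toy co-occurrence ranking: count how often pairs of binary features co-occur
# and rank them by surprise = observed frequency / expected frequency under independence.
from itertools import combinations
from collections import Counter

# Each observation is the set of features present for one person (illustrative data).
observations = [
    {"high_bp", "obesity", "low_activity"},
    {"high_bp", "obesity"},
    {"smoking", "low_activity"},
    {"high_bp", "obesity", "smoking"},
]

n = len(observations)
single = Counter(f for obs in observations for f in obs)
pair = Counter(p for obs in observations for p in combinations(sorted(obs), 2))

ranked = sorted(
    ((a, b, (cnt / n) / ((single[a] / n) * (single[b] / n))) for (a, b), cnt in pair.items()),
    key=lambda t: -t[2],
)
for a, b, surprise in ranked:
    print(f"{a} -- {b}: surprise {surprise:.2f}")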
Procedia PDF Downloads 276