Search results for: learning efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13474

754 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan

Authors: Tzu-Ling Hsieh

Abstract:

When the Taiwan Ministry of Education implemented a new university multiple entrance policy in 2002, most colleges and universities continued to use test scores as their main admission criterion. With the forthcoming 12-year basic education curriculum, the Ministry of Education introduced a new college admission policy, to be implemented in 2021. The new policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than solely on the outcomes of academic testing. However, the development of college admission criteria has not followed a thoughtful process, and universities and colleges have little guidance on how to construct suitable multiple admission criteria. Although many studies from other countries that have implemented multiple college admission criteria exist, their findings may not represent Taiwanese students, and they rarely compare different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success. The study analyzed the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and tested a conceptual framework using structural equation modeling (SEM). The conceptual framework was adapted from Pascarella's general causal model and focused on how different admission criteria predict students' college success. It examined the relationship between admission criteria and college success, as well as how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and conducting reliability and validity analyses, the study found three indicators that significantly predict students' college success, defined as the average grade of the last semester: the Chinese language score on the college entrance exam, high school class rank, and the quality of student academic engagement. In addition, motivation significantly predicted the quality of student academic engagement and interactions with agents of socialization. However, multi-group SEM analysis showed no difference in the prediction of college success between liberal arts and science students. Finally, this study offers suggestions for universities and colleges developing multiple admission criteria, based on empirical research on Taiwanese higher education students.
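
For readers who want to see the modeling machinery, the following is a minimal sketch of a comparable SEM specification in Python with the semopy package; the variable names, file name, and single-indicator paths are hypothetical placeholders rather than the study's actual measurement model.

```python
# Minimal SEM sketch (assumed variable names; not the study's actual model).
# Requires: pip install semopy pandas
import pandas as pd
from semopy import Model

desc = """
# structural part: admission criteria and engagement predict college success
gpa ~ chinese_exam + class_rank + engagement
# motivation works through engagement and socialization
engagement ~ motivation
socialization ~ motivation
"""

data = pd.read_csv("students.csv")  # hypothetical file, one column per variable
model = Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```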

Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy

Procedia PDF Downloads 179
753 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. It is broadly agreed that open education resources (OER) reduce textbook costs and give students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much is saved. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose of collecting the data is to understand how much could potentially be saved by using OER materials and to establish a record for further studies. Literature Review: In past years, researchers have documented how the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018) found, using a simple formula, that students save on average $116.94 per course when OER is adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded $100 was a reasonable per-course savings estimate. Ruth (2018) calculated an average textbook cost of $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is thus less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb Class Schedule. Step two: View textbook lists for the classes (campus bookstore prices). Step three: Calculate the average textbook price by averaging the new book and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings for adopting an OER course varies depending on the size of the college as well as the number of enrolled students.
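
The four-step estimate lends itself to a few lines of code. The sketch below implements it with made-up course data; the prices and enrollments are placeholders, not SUNY Canton figures.

```python
# Sketch of the paper's four-step savings estimate (illustrative numbers only).
def course_savings(new_price, used_price, enrollment):
    avg_price = (new_price + used_price) / 2.0   # step 3: average new/used price
    return avg_price * enrollment                # step 4: multiply by enrollment

# hypothetical non-OER course list: (new price, used price, students enrolled)
courses = [(150.00, 112.50, 30), (180.00, 140.00, 25)]
total = sum(course_savings(n, u, e) for n, u, e in courses)
print(f"Estimated one-semester savings: ${total:,.2f}")
```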

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 75
752 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract cause its slow absorption in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). It is also important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG), using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allows a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used for this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, under these conditions, microparticle costs can be reduced, since 60% less EB is required than in the optimum proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed values 9.5%, 84.8%, and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
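
To illustrate the multi-objective side of the analysis, the sketch below extracts a Pareto set from hypothetical formulation results, minimizing gastric release (GD%) while maximizing intestinal release (ID%); the candidate values are invented for illustration.

```python
# Minimal Pareto-set sketch (hypothetical candidates, not the study's data).
# Objectives: minimize gastric release (GD%), maximize intestinal release (ID%).
def dominates(a, b):
    gd_a, id_a = a
    gd_b, id_b = b
    return (gd_a <= gd_b and id_a >= id_b) and (gd_a < gd_b or id_a > id_b)

candidates = [(5.0, 60.0), (4.0, 55.0), (6.0, 80.0), (4.5, 70.0)]  # (GD%, ID%)
pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print(pareto)  # non-dominated trade-offs between gastric protection and release
```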

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 131
751 Needs of Omani Children in First Grade during Their Transition from Kindergarten to Primary School: An Ethnographic Study

Authors: Zainab Algharibi, Julie McAdam, Catherine Fagan

Abstract:

The purpose of this paper is to shed light on how Omani children in the first grade experience their needs during their transition to primary school. Theoretically, the paper builds on two perspectives: Dewey's concept of continuity of experience and the boundary objects introduced by Vygotsky (CHAT). The methodology of the study rests on the crucial role of children's agency, an important educational tool for enhancing children's participation in the learning process and developing their ability to face various issues in their lives. Thus, the data were obtained from 45 children in grade one at 4 different primary schools using drawing and visual narrative activities, in addition to researcher observations during the first weeks of the academic year for the first grade. As the study dealt with children, all necessary ethical requirements were followed. This paper is considered original since it addresses the issue of children's transition from kindergarten to primary school in Oman, if not in the Arab region more broadly. Therefore, it is expected to fill an important gap in this field and present a proposal that opens the door for researchers to enter this field later. The analysis of the drawings and visual narratives was performed according to a social semiotics approach in two phases. The first was to read the surface message, the 'denotation'; the second was to go in depth into the symbolism obtained from the children as they talked and drew letters and signs, the stage known as the 'signified'. A video was recorded of each child talking about their drawing and expressing themselves. Then, the data were organised and classified according to a cross-data network. Regarding the researcher observation analyses, the collected data were analysed according to a model developed for grounded theory. It is based on comparing the recent data collected from observations with data previously encoded through the other methods, in which children were drawing alongside the visual narrative in the current study, in order to identify similarities and differences, clarify the meaning of the identified categories, and identify sub-categories of them with a description of possible links between them. This constitutes a form of triangulation in data collection. The study produced a set of findings, the most vital being that the children's greatest interest goes to their social and psychological needs, such as friends, their teacher, and playing. Also, their biggest fears are a new place, a new teacher, and not having friends, while they showed less concern for their need for educational knowledge and skills.

Keywords: children’s academic needs, children’s social needs, transition, primary school

Procedia PDF Downloads 109
750 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing

Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang

Abstract:

The flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flight of small birds or insects. Compared to traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the size and direction of lift to control the flight attitude. Therefore, its transmission system must be designed to be very compact. Lightweight design can effectively extend endurance time, yet engineering experience alone can hardly meet the simultaneous requirements of an FMAV for structural strength and mass. Current research still lacks guidance on considering the nonlinear factors of 3D printing materials when carrying out topology optimization, especially for tiny FMAV transmission systems. The coupling of non-linear material properties and non-linear contact behaviors in an FMAV transmission system poses a great challenge to the reliability of topology optimization results. In this paper, a topology optimization design based on the FEA solver package Altair OptiStruct was carried out for the transmission system of an FMAV manufactured by 3D printing. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the FMAV structure was evaluated and confirmed through tensile tests. Secondly, a numerical computation model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. Then a topology optimization modeling method considering non-linear factors was presented, and the optimization results were verified by dynamic simulation and experiments. Finally, detailed discussions of different load states and constraints were carried out to explore the leading factors affecting the optimization results. The contributions of this article, helpful for guiding the lightweight design of FMAVs, are summarized as follows. First, a dynamic simulation modeling method used to obtain the load states is presented. Second, a verification method for optimized results considering non-linear factors is introduced. Third, optimization based on a single representative load state can achieve a better weight reduction effect and improve computational efficiency compared with taking multiple states into account. Fourth, the chosen formulation improves the ability to resist bending deformation. Fifth, a displacement constraint helps to improve the structural stiffness of the optimized result. The results and engineering guidance in this paper may shed light on structural optimization and lightweight design for future advanced FMAVs.
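
As a small illustration of the first step (characterizing the UV-curable resin), the sketch below fits a Young's modulus to the linear region of tensile-test data; the stress-strain values are placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical tensile-test readings in the linear elastic region.
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008])   # dimensionless
stress = np.array([0.0, 4.1, 8.0, 12.2, 16.0])           # MPa

# Least-squares slope of stress vs. strain = Young's modulus.
E = np.polyfit(strain, stress, 1)[0]
print(f"Estimated Young's modulus: {E:.0f} MPa")
```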

Keywords: flapping-wing micro aerial vehicle, 3D printing, topology optimization, finite element analysis, experiment

Procedia PDF Downloads 170
749 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless. For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each formed by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
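
The enumerate-and-screen step can be illustrated in a few lines. The sketch below generates phenomena combinations and applies the screening rule quoted above (phase change requires energy transfer); the phenomena list is a simplified stand-in for a real process.

```python
# Sketch of the option-generation step: enumerate phenomena combinations and
# screen out infeasible ones (phenomena names and the rule are illustrative).
from itertools import combinations

phenomena = ["reaction", "vapour-liquid equilibrium", "phase change", "energy transfer"]

def feasible(combo):
    # screening rule from the paper: phase change needs energy transfer present
    if "phase change" in combo and "energy transfer" not in combo:
        return False
    return True

options = [c for r in range(1, len(phenomena) + 1)
             for c in combinations(phenomena, r) if feasible(c)]
print(len(options), "feasible phenomena combinations")
```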

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 234
748 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning, and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using the translation and modulation operators of graph signals, following calculations similar to those of the classical windowed Fourier transform. Specifically, the translation and modulation operators of graph signals are defined using the Laplacian eigenvectors: where the classical operators are defined using the Fourier atoms, the graph operators are defined analogously with the Laplacian eigenvectors in their place. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator is defined, like classical modulation, by multiplying a graph signal with the entries of each Fourier atom. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed. The relationship between translation and modulation can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed. Necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames, and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g. the Minnesota road graph, and on random graphs. Experimental results show that the novel framework captures new features of graph signals.
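
A minimal numerical sketch of the proposed definition, assuming the standard eigenvector-based graph translation operator; the graph Laplacian and signal are arbitrary toy examples, not the paper's test graphs.

```python
import numpy as np

# Laplacian of a small example path graph and its eigendecomposition.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
_, U = np.linalg.eigh(L)                # columns of U are Laplacian eigenvectors

gft  = lambda f: U.T @ f                # graph Fourier transform
igft = lambda fh: U @ fh                # inverse graph Fourier transform

def translate(f, i):
    # standard graph translation via the Laplacian eigenvectors
    return np.sqrt(len(f)) * igft(gft(f) * U[i, :])

def modulate(f, k):
    # proposed modulation: inverse GFT of the translation of the GFT
    return igft(translate(gft(f), k))

f = np.array([1.0, 0.5, -0.2])
print(modulate(f, 1))
```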

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 342
747 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell, and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure combining thin scaffolding and fabric has been demonstrated to be feasible, and the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
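
The validation metric described above reduces to a simple computation. The sketch below compares hypothetical ring positions from the experiment and the simulation against the 5 mm bound; the coordinates are made-up placeholders.

```python
import numpy as np

# (longitudinal, transverse) ring positions in mm; values are illustrative only.
exp_rings = np.array([[0.0, 10.0], [0.5, 20.0], [1.2, 30.0]])
sim_rings = np.array([[0.3, 10.4], [0.6, 19.5], [1.0, 30.8]])

d = np.linalg.norm(exp_rings - sim_rings, axis=1)   # per-ring deviation
print(f"mean deviation: {d.mean():.2f} mm, max: {d.max():.2f} mm")
assert d.max() <= 5.0  # the 5 mm clinical upper bound used in the paper
```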

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 193
746 Cultural Heritage, Urban Planning and the Smart City in Indian Context

Authors: Paritosh Goel

Abstract:

The conservation of historic buildings and historic centres has in recent years become fully encompassed in the planning of built-up areas and their management in the face of climate change. The restoration field's approach, applied in the Indian context to integrated urban regeneration and its strategic potential for smarter, more sustainable, and socially inclusive urban development, introduces the theme of sustainability for urban transformations in general (historic centres and otherwise). From this viewpoint, it envisages, as a primary objective, a genuinely 'green, ecological or environmental' requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of renewable energy sources, urban metabolism (waste, water, territory, etc.), and the natural environment. With this, the concept of a 'resilient city' is also introduced: a city that can adapt through progressive transformations to situations of change that may not be predictable, a behavior the historic city has always been able to express. Urban planning, on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social, economic, and perceptual parameters. It is concerned with human behavior, mobility, and the characterization of resource consumption, in terms of quantity even before quality, to inform the city design process, which for ancient fabrics mainly affects the public space, including its social dimension. An exact definition of the term 'smart city' remains essentially elusive, since we can attribute three dimensions to the term: a) that of a virtual city, evolved on the basis of digital and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and socio-economic project guided by a dynamic process that provides for new behaviors and requirements of city communities and orients the future planning of cities, including through participation in their management. This paper presents preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of arriving at a scientific theory and methodology to apply to the regeneration of Indian historic centres. If contextualized with the heritage of the city, the smart city scheme can be an initiative that provides a transdisciplinary approach across various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures), united in order to improve the design, livability, and understanding of the urban environment while maintaining high historical and cultural performance levels.

Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies

Procedia PDF Downloads 282
745 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability

Authors: Anastasios Ntousas

Abstract:

The purpose of the following paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis, and their impact on reading ability. The following questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate and create stable, synchronized representations across the four word features; and would word morphological knowledge be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis speculates that words comprise four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four representations work together to make word reading an effective task, and a lack of knowledge in any one of them might disrupt reading comprehension. The degree to which the four word features connect with one another determines word representations of high or low lexical quality. When the four attributes connect effectively, readers have high lexical quality representations of words; when they hardly connect with each other, readers have low lexical quality representations. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of word-specific orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic, and meaning representations of unknown words. The phonology of known words might effectively activate the remaining representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy may appear in readers' morphological knowledge. Morphemes might facilitate readers' ability to pronounce and spell new, unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings, and knowledge of phonology and morphology might support and increase reading comprehension. The work involved a careful selection, discussion of theoretical material, and comparison of the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers' morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.

Keywords: morphology, orthography, reading ability, reading comprehension

Procedia PDF Downloads 129
744 Solar Liquid Desiccant Regenerator for Two Stage KCOOH Based Fresh Air Dehumidifier

Authors: M. V. Rane, Tareke Tekia

Abstract:

Liquid desiccant based fresh air dehumidifiers can be gainfully deployed for air-conditioning, agro-produce drying, and many industrial processes. Regeneration of the liquid desiccant can be done using direct firing, high temperature waste heat, or solar energy. Solar energy is clean and available in abundance; however, it is costly to collect. A two stage liquid desiccant fresh air dehumidification system can offer a Coefficient of Performance (COP) in the range of 1.6 to 2 for comfort air conditioning applications. A high COP helps reduce the size and cost of the collectors required. Performance tests on the high temperature regenerator of a two stage liquid desiccant fresh air dehumidifier coupled with a seasonally tracked, flat-plate-like solar collector are presented in this paper. The two stage fresh air dehumidifier has four major components: a High Temperature Regenerator (HTR), a Low Temperature Regenerator (LTR), high and low temperature solution heat exchangers, and a Fresh Air Dehumidifier (FAD). This open system can operate at near atmospheric pressure in all components. Such systems can be simple, maintenance-free, and scalable. Environmentally benign, non-corrosive, moderately priced potassium formate (KCOOH) is used as the liquid desiccant. Typical KCOOH concentration in the system is expected to vary between 65 and 75%. Dilute liquid desiccant at 65% concentration exiting the fresh air dehumidifier is pumped and preheated in the solution heat exchangers before entering the high temperature solar regenerator. In the solar collector, the solution is regenerated to an intermediate concentration of 70%. Steam and saturated solution exiting the solar collector array are then separated. Steam at near atmospheric pressure is then used to regenerate the intermediate concentration solution up to a concentration of 75% in the low temperature regenerator, where the vaporized moisture is released into the atmosphere. Condensed steam can be used as potable water after adding a pinch of salt and some nutrients. The warm concentrated liquid desiccant is routed to the solution heat exchangers to recycle its heat for preheating the weak liquid desiccant solution. An evacuated glass tube based, seasonally tracked solar collector is used for regeneration of the liquid desiccant at high temperature. The regeneration temperature for KCOOH is 133°C at 70% concentration. The medium temperature collector was designed for a temperature range of 100 to 150°C. A double wall polycarbonate top cover helps reduce top losses. Absorber-integrated heat storage helps stabilize the temperature of the liquid desiccant exiting the collectors during intermittent cloudy conditions, and extends the operation of the system by a couple of hours beyond the sunshine hours. The solar collector is light in weight: 12 kg/m2 without absorber-integrated heat storage material, and 27 kg/m2 with heat storage material. The cost of the collector is estimated to be 10,000 INR/m2. Theoretical modeling of the collector has shown an optical efficiency of 62%. Performance tests of KCOOH regeneration will be reported.
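
As a back-of-envelope illustration of the two concentration steps described above, the sketch below computes the water that must be evaporated to take the desiccant from 65% to 70% and then from 70% to 75%; the feed rate is a made-up figure.

```python
# Desiccant mass balance: salt mass is conserved, only water is evaporated.
# m_out = m_in * x_in / x_out ; water removed = m_in - m_out
def water_removed(m_in_kg, x_in, x_out):
    m_out = m_in_kg * x_in / x_out
    return m_in_kg - m_out

m_feed = 100.0                               # hypothetical feed, kg of 65% solution
w1 = water_removed(m_feed, 0.65, 0.70)       # high temperature regenerator stage
w2 = water_removed(m_feed - w1, 0.70, 0.75)  # low temperature regenerator stage
print(f"HTR: {w1:.1f} kg water, LTR: {w2:.1f} kg water")
```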

Keywords: solar, liquid desiccant, dehumidification, air conditioning, regeneration

Procedia PDF Downloads 348
743 From By-Product to Brilliance: Transforming Adobe Brick Construction Using Meat Industry Waste-Derived Glycoproteins

Authors: Amal Balila, Maria Vahdati

Abstract:

Earth is a green building material with very low embodied energy and almost zero greenhouse gas emissions. However, it lacks strength and durability in its natural state. By responsibly sourcing stabilisers, it is possible to enhance its strength. This research draws inspiration from the robustness of termite mounds, where termites incorporate glycoproteins from their saliva during construction. Biomimicry explores the potential of these termite stabilisers for producing bio-inspired adobe bricks. The meat industry generates significant waste during slaughter, including blood, skin, bones, tendons, gastrointestinal contents, and internal organs. While abundant, many meat by-products raise concerns regarding human consumption, religious observance, and cultural and ethical beliefs, and they also contribute heavily to environmental pollution. Extracting and utilising proteins from this waste is vital for reducing pollution and increasing profitability. Exploring the untapped potential of meat industry waste, this research investigates how glycoproteins could revolutionize adobe brick construction. Bovine serum albumin (BSA) from cows' blood and mucin from porcine stomachs were the glycoproteins chosen as stabilisers for adobe brick production. Despite their wide usage across various fields, they see very limited use in food processing; thus, both were identified as potential stabilisers for adobe brick production in this study. Two soil types were used to prepare adobe bricks for testing, comparing control unstabilised bricks with glycoprotein-stabilised ones. All bricks underwent testing for unconfined compressive strength and erosion resistance. The primary finding of this study is the efficacy of BSA, a glycoprotein derived from cows' blood and a by-product of the beef industry, as an earth construction stabiliser. Adding 0.5% by weight of BSA resulted in a 17% and 41% increase in unconfined compressive strength for British and Sudanese adobe bricks, respectively. Further, adding 5% by weight of BSA led to a 202% and 97% increase in unconfined compressive strength for British and Sudanese adobe bricks, respectively. Moreover, using 0.1%, 0.2%, and 0.5% by weight of BSA resulted in erosion rate reductions of 30%, 48%, and 70% for British adobe bricks, respectively, with a 97% reduction observed for Sudanese adobe bricks at 0.5% by weight of BSA. However, mucin from the porcine stomach did not significantly improve the unconfined compressive strength of adobe bricks. Nevertheless, employing 0.1% and 0.2% by weight of mucin resulted in erosion rate reductions of 28% and 55% for British adobe bricks, respectively. These findings underscore BSA's efficiency as an earth construction stabiliser for wall construction and mucin's efficacy for wall render, showcasing their potential for sustainable and durable building practices.

Keywords: biomimicry, earth construction, industrial waste management, sustainable building materials, termite mounds

Procedia PDF Downloads 52
742 An Automated Magnetic Dispersive Solid-Phase Extraction Method for Detection of Cocaine in Human Urine

Authors: Feiyu Yang, Chunfang Ni, Rong Wang, Yun Zou, Wenbin Liu, Chenggong Zhang, Fenjin Sun, Chun Wang

Abstract:

Cocaine is the most frequently used illegal drug globally, with the global annual prevalence of cocaine use ranging from 0.3% to 0.4% of the adult population aged 15–64 years. The growing trend of cocaine abuse and drug crime is a great concern; urine testing has therefore become an important noninvasive sampling method, since cocaine and its metabolites (COCs) are usually present in urine at high concentrations and with relatively long detection windows. However, direct analysis of urine samples is not feasible, because the complex urine matrix often causes low sensitivity and selectivity in the determination. On the other hand, the presence of low doses of analytes in urine makes an extraction and pretreatment step important before determination. Especially in group drug-taking cases, the pretreatment step becomes more tedious and time-consuming. Developing a sensitive, rapid, and high-throughput method for detecting COCs in the human body is therefore indispensable for law enforcement officers, treatment specialists, and health officials. In this work, a new automated magnetic dispersive solid-phase extraction (MDSPE) sampling method followed by high performance liquid chromatography-mass spectrometry (HPLC-MS) was developed for the quantitative enrichment of COCs from human urine, using prepared magnetic nanoparticles as adsorbents. The nanoparticles were prepared by silanizing magnetic Fe3O4 nanoparticles and modifying them with divinyl benzene and vinyl pyrrolidone, which gives them the ability to specifically adsorb COCs. These magnetic particles facilitated the pretreatment steps through electromagnetically controlled extraction, achieving full automation. The proposed device significantly improved sample preparation efficiency, processing 32 samples in one batch within 40 minutes. Optimization of the preparation procedure for the magnetic nanoparticles was explored, and the performance of the nanoparticles was characterized by scanning electron microscopy, vibrating sample magnetometry, and infrared spectroscopy. Several analytical parameters were studied, including the amount of particles, adsorption time, elution solvent, and extraction and desorption kinetics, and the proposed method was verified. The limits of detection for cocaine and its metabolites were 0.09–1.1 ng·mL-1, with recoveries ranging from 75.1 to 105.7%. Compared to traditional sampling methods, this method is time-saving and environmentally friendly. It was confirmed that the proposed automated method is a highly effective approach for trace analysis of cocaine and its metabolites in human urine.
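
For illustration, the spike-recovery figure of merit reported above can be computed as follows; the spiked and measured concentrations are placeholders.

```python
# Spike recovery: measured concentration relative to the known spiked amount.
def recovery_pct(measured_ng_ml, spiked_ng_ml):
    return 100.0 * measured_ng_ml / spiked_ng_ml

# e.g., 94.5% falls inside the reported 75.1-105.7% range
print(f"{recovery_pct(18.9, 20.0):.1f}%")
```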

Keywords: automatic magnetic dispersive solid-phase extraction, cocaine detection, magnetic nanoparticles, urine sample testing

Procedia PDF Downloads 204
741 Information-Controlled Laryngeal Feature Variations in Korean Consonants

Authors: Ponghyung Lee

Abstract:

This study investigates variations in Korean consonants that center on the laryngeal features of the sounds concerned, to the exclusion of others. Our fundamental premise is that the weak contrast associated with the segments concerned might account for the instability of these consonants. What is more, we assume that an array of notions measuring the communicative efficiency of linguistic units significantly influences the triggering of those variations. To this end, we have computed the surprisal, entropic contribution, and relative contrastiveness associated with Korean obstruent consonants. What we found is that the information-theoretic perspective is compelling enough to lend support to our approach to a considerable extent. That is, the variant realizations, chronologically and stylistically, prove to be profoundly affected by the set of information-theoretic factors enumerated above. For the biblical proper names, we use the Georgetown University CQP Web-Bible corpora. From 8 texts (4 from the Old Testament and 4 from the New Testament) among the total of 64 texts, we extracted 199 samples. We address the issue of laryngeal feature variations associated with Korean obstruent consonants under the presumption that the variations stem from the weak contrast among the triad of manifestations of laryngeal features. The variants emerge from diverse sources in chronological and stylistic senses: Christian biblical texts, ordinary casual speech, the shift of loanword adaptation over time, and ideophones. For the purpose of discussing what they are really like from the perspective of Information Theory, it is necessary to look closely at the data. Among them, the massive changes in the loanword adaptation of proper nouns during the centennial history of Korean Christianity draw our special attention. We searched 199 types of initially capitalized words among 45,528 word tokens, which account for around 5% of the total of 901,701 word tokens (12,786 word types) in the Georgetown University CQP Web-Bible corpora. We focus on the shift of the laryngeal features incorporated into word-initial consonants, which is traceable through two distinct versions of the Korean Bible: one published in the 1960s for Protestants, and the other published in the 1990s for the Catholic Church. Of these proper names, we have closely traced the adaptation of plain obstruents, e.g. /b, d, g, s, ʤ/, in the sources. The results show that as much as 41% of the extracted proper names show variation: 37% in terms of aspiration and 4% in terms of tensing. This study set out to shed light on the question: to what extent can we attribute the variations in the laryngeal features associated with Korean obstruent consonants to the communicative aspects of linguistic activities? In this vein, the concerted effects of the triad of surprisal, entropic contribution, and relative contrastiveness can be credited with the ups and downs in feature specification, despite some remaining contention over the role of surprisal.
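
The information-theoretic measures named above are straightforward to compute. The sketch below derives the surprisal of each segment and its contribution to corpus entropy from a toy token list, not from the study's Bible corpora.

```python
import math
from collections import Counter

# Toy segment tokens standing in for a corpus of word-initial obstruents.
tokens = ["p", "b", "p", "t", "d", "p", "k", "b", "t", "p"]
counts = Counter(tokens)
total = sum(counts.values())

for seg, n in counts.items():
    p = n / total
    surprisal = -math.log2(p)       # how unexpected the segment is, in bits
    contribution = p * surprisal    # the segment's share of corpus entropy
    print(f"{seg}: p={p:.2f}, surprisal={surprisal:.2f} bits, "
          f"entropy share={contribution:.2f}")
```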

Keywords: entropic contribution, laryngeal feature variation, relative contrastiveness, surprisal

Procedia PDF Downloads 130
740 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization

Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries

Abstract:

The textile industry currently plays an important role in the world economy, especially in developing countries. Dyes and pigments used in the textile industry are significant pollutants. Most of these are azo dyes, which have a chromophore (-N=N-) in their structure. There are many methods for removing dyes from wastewater, such as chemical coagulation, flocculation, precipitation, and ozonation. However, these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has generally been used due to the non-toxic, insoluble, inexpensive, and highly reactive properties of the titanium dioxide semiconductor (TiO2). Melt electrospinning is an attractive manufacturing process for producing thin fibers from PP (polypropylene). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance, and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater was evaluated. A photocatalyzer material was prepared from nTi (titanium dioxide nanoparticles) and PP by a melt-electrospinning technique, and the electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments. The photocatalytic performance of the nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, and 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800±0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot coil temperatures of 260–300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, with over 90% decolorization efficiency for GA-treated pristine and nTi-PP composite PP fabrics. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as a photocatalyzer for wastewater treatment. The results showed that melt-electrospun nTi-loaded GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation.
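
The decolorization efficiency reported above is conventionally computed from absorbance readings, with absorbance standing in for concentration; the sketch below uses hypothetical methyl orange values.

```python
# Decolorization efficiency: (C0 - C) / C0 * 100, using absorbance as a
# proxy for concentration (Beer-Lambert proportionality).
def decolorization_pct(abs_initial, abs_final):
    return 100.0 * (abs_initial - abs_final) / abs_initial

# hypothetical methyl orange absorbance before/after irradiation
print(f"{decolorization_pct(1.25, 0.10):.1f}%")  # 92.0%, past the >90% mark reported
```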

Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning

Procedia PDF Downloads 267
739 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia

Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis

Abstract:

The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analyses of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use in a given setting. For example, human papillomavirus (HPV) DNA, HPV mRNA, and cytology tests can be used to diagnose cervical intraepithelial neoplasia grade 2+ (CIN2+). But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim here is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of these methods using a real dataset on the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles describing a DTA-NMA method for three or more index tests. Since the joint classification of the results of one index test against those of another index test, among those with and without the target condition, is rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion, and relevant unpublished material was also included. Ten relevant studies were finally included, and their methodology was evaluated. The DTA-NMA methods presented in the literature are described together with their advantages and disadvantages. In addition, using 37 studies on cervical cancer obtained from a published Cochrane review as a case study, we present an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project "Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests" (MIS 5047640).
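
As a reminder of the raw ingredients these methods consume, the sketch below computes sensitivity and specificity from a single 2x2 table of an index test against the reference standard; the counts are illustrative, not from the Cochrane review.

```python
# Sensitivity and specificity from a 2x2 table (index test vs. reference standard).
def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

se, sp = sens_spec(tp=90, fp=15, fn=10, tn=85)  # made-up counts for one study
print(f"sensitivity={se:.2f}, specificity={sp:.2f}")
```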

Keywords: colposcopy, diagnostic test, HPV, network meta-analysis

Procedia PDF Downloads 140
738 Pathway Linking Early Use of Electronic Device and Psychosocial Wellbeing in Early Childhood

Authors: Rosa S. Wong, Keith T.S. Tung, Winnie W. Y. Tso, King-Wa Fu, Nirmala Rao, Patrick Ip

Abstract:

Electronic devices have become an essential part of our lives. Various reports have highlighted the alarming use of electronic devices at early ages and its long-term developmental consequences. More sedentary screen time has been associated with increased adiposity, worse cognitive and motor development, and poorer psychosocial health. Apart from the problems caused by children's own screen time, parents today often pay less attention to their children because of hand-held devices, and anecdotal evidence suggests that distracted parenting has a negative impact on the parent-child relationship. This study examined whether distracted parenting detrimentally affects parent-child activities, which may, in turn, impair children's psychosocial health. In 2018/19, we recruited a cohort of preschoolers from 32 local kindergartens in Tin Shui Wai and Sham Shui Po for a 5-year programme aiming to build stronger foundations for children from disadvantaged backgrounds through an integrated support model involving the medical, education, and social service sectors. A comprehensive set of questionnaires was used to survey parents on how often they were distracted while parenting and how often they engaged in learning and recreational activities with their children. Furthermore, parents were asked to report their children's screen time and psychosocial problems. Mediation analyses were performed to test the direct and indirect effects of electronic device-distracted parenting on children's psychosocial problems. The study recruited 873 children (448 females and 425 males, average age: 3.42±0.35). Longer screen time was associated with more psychosocial difficulties (Adjusted B=0.37, 95%CI: 0.12 to 0.62, p=0.004). Children's screen time correlated positively with electronic device-distracted parenting (r=0.369, p < .01). We also found that electronic device-distracted parenting was associated with more hyperactivity/inattention problems (Adjusted B=0.66, p < 0.01), fewer prosocial behaviors (Adjusted B=-0.74, p < 0.01), and more emotional symptoms (Adjusted B=0.61, p < 0.001) in children. Further analyses showed that electronic device-distracted parenting exerted its influence both directly and indirectly through parent-child interactions, but to different extents depending on the outcome under investigation (38.8% for hyperactivity/inattention, 31.3% for prosocial behavior, and 15.6% for emotional symptoms). We found that parents' use of devices and children's own screen time both have negative effects on children's psychosocial health. It is important for parents to set 'device-free times' each day to ensure enough relaxed downtime for connecting with children and responding to their needs.
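
A sketch of the decomposition behind the mediation percentages above: the indirect effect is the product of the two mediator paths, and the proportion mediated is its share of the total effect. The coefficients are made-up placeholders, not the study's estimates.

```python
# Product-of-coefficients mediation decomposition (illustrative values only).
a = -0.40        # distracted parenting -> parent-child activities
b = -0.30        # parent-child activities -> child psychosocial problems
direct = 0.25    # direct effect of distracted parenting on problems

indirect = a * b                 # effect transmitted through the mediator
total = direct + indirect
print(f"proportion mediated: {indirect / total:.1%}")   # ~32%
```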

Keywords: early childhood, electronic device, psychosocial wellbeing, parenting

Procedia PDF Downloads 164
737 A Web-Based Real Property Updating System for Efficient and Sustainable Urban Development: A Case Study in Ethiopia

Authors: Eyosiyas Aga

Abstract:

The development of information and communication technology has transformed paper-based mapping and land registration processes into computerized and networked systems. The computerization and networking of real property information systems play a vital role in good governance and sustainable development in emerging countries through cost-effective, easy, and accessible service delivery for the customer. An efficient, transparent, and sustainable real property system is becoming basic infrastructure for urban development, improving data management and service delivery in organizations. In Ethiopia, real property administration is paper-based; as a result, it is confronted with problems of data management, illegal transactions, corruption, and poor service delivery. In order to solve these problems and to facilitate the real property market, the implementation of a web-based real property updating system is crucial. Web-based real property updating is an automation (computerization) method that facilitates data sharing and reduces the time and cost of service delivery in a real property administration system. In addition, it is useful for integrating data across different information systems and organizations. The system described here is designed by combining and customizing open source software supported by the Open Geospatial Consortium, enhancing efficiency in a cost-effective way. Open Geospatial Consortium standards such as the Web Feature Service (WFS) and Web Map Service (WMS) are the most widely used standards for supporting and improving web-based real property updating; they allow the integration of data from different sources and can be used to maintain data consistency throughout transactions. PostgreSQL and GeoServer are used to manage real property data and connect it to the flex viewer and user interface. The system is designed for both internal updating (by the municipality), mainly of spatial and textual information, and external use (by customers), which focuses on providing information to and interacting with the customer. This research assessed the potential of open source web applications and adopted this technology for a real property updating system in Ethiopia in a simple, cost-effective, and secure way. The existing workflow for real property updating was analyzed to identify bottlenecks, and a new workflow was designed for the system. Requirements were identified through a questionnaire and a literature review, and the system was prototyped for the study area. The research mainly aimed to integrate human resources with technology in the design of the system to reduce data inconsistency and security problems. In addition, the research reflects on the current situation of real property administration and the contribution of an effective data management system to efficient, transparent, and sustainable urban development in Ethiopia.
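
To make the WFS role concrete, here is a sketch of a standard OGC WFS GetFeature request against a GeoServer instance; the host, workspace, and layer name are hypothetical placeholders.

```python
import requests

# Standard OGC WFS GetFeature request; GeoServer can return GeoJSON output.
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "cadastre:parcels",        # hypothetical workspace:layer
    "outputFormat": "application/json",
    "count": 10,
}
resp = requests.get("http://localhost:8080/geoserver/wfs", params=params)
for feature in resp.json()["features"]:
    print(feature["id"], feature["properties"])
```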

Keywords: cadaster, real property, sustainable, transparency, web feature service, web map service

Procedia PDF Downloads 267
736 Vocal Advocacy: A Case Study at the First Black College Regarding Students Experiencing an Empowerment Workshop

Authors: Denise F. Brown, Melina McConatha

Abstract:

African Americans' use of vocal expression, particularly for self-expression, has historically been an avenue for advocating for social justice and human rights. Vocal expression can take many forms, such as singing, poetry, storytelling, and acting. Many well-known artists, politicians, leaders, and teachers used their voices to promote the causes and concerns of the African American community as well as to express their own experiences of being 'black' in America. The purpose of this project was to evaluate the perceptions of African American students in utilizing their voices for self-awareness, interview skills, and social change after attending a three-part workshop on vocal advocacy. This research utilized the framework of black feminism to understand empowerment in advocacy and self-expression. Students learned about the power of their voices and about the purpose, presence, and passion they discovered through the Immersive Voice workshop. The workshop covered three areas: first, the power of the voice; second, the application of vocal passion; and third, applying vocal power to express personal interests, to advocate for others, and to speak to others with confidence to further careers, i.e., using vocal power for job interviewing skills. Students were instructed to prepare for the workshop by completing a pre-workshop open-ended survey; a total of 15 students participated. After the workshop ended, the students completed a post-workshop survey. Both surveys were assessed by evaluating themes and codes in the students' written feedback. In the pre-workshop survey, students provided feedback on the power of voice before participating in the workshop. From the students' responses, a theme (advocating for self and others) emerged in relation to what it means to advocate, supported by three codes: having knowledge about advocating for self and others, gaining knowledge to advocate for self and others, and using that knowledge to advocate for self and others. After the students completed the workshop, their feedback on the post-workshop survey was assessed, and the same theme, 'advocating for self and others,' emerged. The codes related to the theme, however, were different and included using vocal power (a term students learned during the workshop) to represent self, to represent others, and to obtain a job or start a career. In conclusion, the survey results showed that students still perceived advocating as speaking up for themselves and other people. After the workshop, students continued to associate advocacy with helping themselves and others but were able to be more specific about how the sound of their voice could help in advocating, and about how they could use their voice to represent themselves in getting a job or starting a career.

Keywords: advocacy, command, self-expression, voice

Procedia PDF Downloads 110
735 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin

Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski

Abstract:

Climate change may intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This study projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) simulations (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) combined with hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to project the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), root mean square error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) were used to detect meteorological droughts, the Soil Water Index (SSI) to define agricultural drought, and the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than during the validation period. Across the calibration and validation periods, NSE is ≤ 0.8, PBIAS is ≥ −80.3%, RMSE is ≥ 11.2 m³/s, and R² is ≤ 0.9. The simulations project a future increase in temperature and potential evapotranspiration over the basin, but no significant future trend in precipitation or in the hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to expand with increasing GWLs. A decrease in all hydrological variables is projected over most of the basin, especially its eastern part. The simulations indicate that meteorological droughts (SPEI and SPI), agricultural droughts (SSI), and hydrological droughts (WYLDI and SRI) would become more intense and severe across the basin. The SPEI-drought shows a greater magnitude of increase than the SPI-drought, with agricultural and hydrological droughts falling between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection indicates but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
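For reference, the four verification metrics used above can be computed as follows (a minimal sketch with illustrative flows; note that the sign convention for PBIAS varies between studies):

```python
# A minimal sketch: NSE, PBIAS, RMSE and R^2 for observed vs simulated flows.
import numpy as np

def metrics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100 * np.sum(obs - sim) / np.sum(obs)   # sign convention varies
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return nse, pbias, rmse, r2

obs = [12.1, 30.5, 8.2, 55.0, 20.3]   # illustrative monthly flows, m^3/s
sim = [10.0, 28.7, 9.9, 48.1, 25.0]
print("NSE=%.2f PBIAS=%.1f%% RMSE=%.1f R2=%.2f" % metrics(obs, sim))
```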

Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin

Procedia PDF Downloads 129
734 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step optimizing the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of comparisons between two potential outcomes, as E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a substantial application. We propose a coding of the initial data that binarizes the variable of interest: for each category, the non-binary variable is transformed into a binary variable taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to define a pair of potential outcomes and to contrast one category (or group of categories) with another. Let E be a non-binary variable of interest. We propose a complete disjunctive coding of E, transforming the initial variable into a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector (variable) E_e takes the value 0 when its category is absent and 1 when its category is present. This makes it possible to compute a pairwise TMLE comparing the difference in outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider certain causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector (variable) E_e takes the value 0 when the first category (the reference category) is present and 1 when its own category is present. This makes it possible to apply a pairwise TMLE comparing the difference in outcome between the first (fixed) level and each remaining level. We confirmed that an increase in the level of education decreases the rate of voting for the extreme-right party.
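A minimal sketch of the coding strategy is given below (synthetic data; for brevity the doubly robust contrast is computed with a plain AIPW estimator rather than the full TMLE targeting step):

```python
# A minimal sketch (synthetic data): complete disjunctive coding of a
# non-binary exposure E, then a pairwise doubly robust contrast of one
# category versus the rest. AIPW stands in for the full TMLE targeting step.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                      # baseline covariates
E = rng.integers(1, 6, size=n)                   # five-level exposure (e.g. education)
Y = 0.5 * (E >= 4) + W @ np.array([0.3, -0.2]) + rng.normal(size=n)

dummies = pd.get_dummies(pd.Series(E), prefix="E")   # complete disjunctive coding

def pairwise_effect(e):
    """Doubly robust contrast: category e present vs all remaining categories."""
    A = dummies[f"E_{e}"].to_numpy(dtype=float)
    g = LogisticRegression().fit(W, A.astype(int)).predict_proba(W)[:, 1]  # propensity
    Q = LinearRegression().fit(np.column_stack([A, W]), Y)                 # outcome model
    Q1 = Q.predict(np.column_stack([np.ones(n), W]))
    Q0 = Q.predict(np.column_stack([np.zeros(n), W]))
    return np.mean(Q1 - Q0 + A * (Y - Q1) / g - (1 - A) * (Y - Q0) / (1 - g))

for e in range(1, 6):
    print(f"category {e} vs rest: {pairwise_effect(e):+.3f}")
```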

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 105
733 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modeling was applied using machine learning algorithms to analyze motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
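As an illustration of the kind of kinematic processing involved (not the authors' pipeline), the sketch below estimates gait cycle time and its variability from a synthetic vertical-acceleration signal by peak detection:

```python
# A minimal sketch (synthetic signal, assumed sampling rate): estimating gait
# cycle time from a wearable's vertical acceleration via peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)                 # 30 s of synthetic walking data
acc_z = np.sin(2 * np.pi * 0.9 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Each dominant peak is taken as a heel strike of the same foot, so
# peak-to-peak intervals approximate the gait cycle time.
peaks, _ = find_peaks(acc_z, height=0.5, distance=int(0.5 * fs))
cycle_times = np.diff(peaks) / fs

print(f"mean gait cycle time: {cycle_times.mean():.2f} s")
print(f"gait variability (CV): {100 * cycle_times.std() / cycle_times.mean():.1f} %")
```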

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 25
732 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is the generally accepted term for the rapid cooling of a solid that has been overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching is to find ways to reduce the total duration of the transient process. Computational experiments were performed to simulate quenching, by a falling liquid nitrogen film, of an extremely overheated vertical copper plate with a structured capillary-porous coating produced by directed plasma spraying. Owing to the complexity of the physics of quenching, from chaotic processes to phase transition, the mechanism of heat transfer during quenching is still not sufficiently understood. To the best of our knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand; here we have more models and hypotheses than authentically established facts. The peculiarities of the quench front dynamics and of heat transfer in the transient process are studied. The numerical model developed here determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientation of crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as by the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect arises because the initialization of a quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and produce local liquid-solid micro-contacts even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the change of surface temperature over time. Knowledge of the quench front velocity and of the total duration of the transient process is required for solving practically important problems of nuclear reactor safety.
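Purely as an illustration of how temperature fields varying in space and time are computed (a drastically simplified one-dimensional stand-in, not the authors' quench-front model), consider an explicit finite-difference solution of transient conduction in a copper plate suddenly cooled on one face:

```python
# A minimal sketch (illustrative only): explicit FTCS scheme for 1-D transient
# conduction in a copper plate whose quenched face is held at the liquid
# nitrogen temperature; the rear face is treated as insulated.
import numpy as np

L, nx = 0.01, 101                 # plate thickness 10 mm, grid points
alpha = 1.11e-4                   # thermal diffusivity of copper, m^2/s
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # stable explicit step (Fourier number 0.4 < 0.5)

T = np.full(nx, 500.0)            # strongly overheated initial state, K
T_cold = 77.0                     # liquid-nitrogen side, K

for _ in range(2000):
    T[0] = T_cold                                             # quenched face
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                                             # insulated rear face

print(f"rear-face temperature after {2000 * dt:.3f} s: {T[-1]:.1f} K")
```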

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 130
731 Generating Ideas to Improve Road Intersections Using Design with Intent Approach

Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton

Abstract:

Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to the usual techniques for making roads safer. A socio-technical approach has recently been introduced for improving road intersections through designing with intent. The Design With Intent (DWI) approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behavior across products, services, and environments, both digital and physical. The approach shows how designing with people in behavior change can be applied to social and environmental problems as well as commercially. It comprises a total of 101 cards across eight lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each with its own distinct way of eliciting ideas from participants. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from varied fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, took part in a day-long workshop. The participants were given a preamble describing the accident scenario and a brief overview of the DWI approach. Design cards from the different lenses were distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas for improving the safety of the selected intersection. After the brainstorming session, the participants spontaneously held roundtable discussions about the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The generated ideas were then synthesized and agglomerated into an improvement scheme for the selected intersection. The most significant improvement ideas from the DWI approach were color coding of traffic lanes for separate vehicles, channelizing the existing bare intersection, providing advance warning signs, cautionary signs and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips before the approach to the intersection. The motive of this approach is to draw new ideas from road users rather than depending only on traditional schemes, so as to increase the efficiency and safety of roads and to ensure road users' compliance, since these features are generated from the minds of the users themselves.

Keywords: design with intent, road safety, human experience, behavior

Procedia PDF Downloads 142
730 Implementation of Ecological and Energy-Efficient Building Concepts

Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler

Abstract:

A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and the energy consumed during the use phase. Therefore, the overall objective of the EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes of carbon-neutral, “LIFE cycle”-oriented residential buildings and to make energy-efficient settlements the standard of tomorrow, in line with the EU 2020 objectives. To this end, a resource- and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, comprising 6 living units and a community area as well as 2 single-family houses, with a total usable floor area of approximately 740 m². Different innovative straw bale construction types (load-bearing and prefabricated non-load-bearing modules) are being implemented together with a highly innovative energy-supply system based on the maximum use of thermal energy for thermal energy services. Only renewable resources and alternative energies are used to generate thermal and electrical energy. This includes the use of solar energy for space heating, hot water and household appliances such as the dishwasher and washing machine, as well as a cooking place for the community area operated with thermal oil as a heat transfer medium at a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels provide thermal and electric energy for the living units according to seasonal demand. The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials such as straw bales. The building components will be produced collaboratively by local SMEs organised in an efficient way. The whole building process and its results are monitored and prepared for knowledge transfer and dissemination, including trial living in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then serve as templates for a further modular extension of the settlement in a second phase.

Keywords: energy-efficiency, green architecture, renewable resources, sustainable building

Procedia PDF Downloads 150
729 A Study of Emotional Intelligence and Adjustment of Senior Secondary School Students in District Karnal, Haryana, India

Authors: Rooma Rani

Abstract:

Education is important for improving the physical and mental well-being of school students. It is used to express inner potential, acquire knowledge, develop skills, and shape habits, attitudes, values and beliefs, while providing people with the strength and resilience to face changing situations and allowing them to develop the capacities that enable an individual to deal with the surrounding environment. Education has a significant effect on the behavior of individuals, which helps us in the new situations of everyday life. Educating the child means directing the child's capacities, attitudes, interests, urges, and needs into the most desirable channels. In the 21st century, emotional intelligence is considered more important than general intelligence for a person's success; success depends on several intelligences and on the control of emotions too. Emotional intelligence, like general intelligence, is the product of one's heredity and its interaction with environmental forces. Of the methods evolved in modern research, and keeping in view the nature and purpose of the study, the descriptive survey method was preferred. This method is an important one in educational research because it describes the current state of the phenomenon under study; the term descriptive survey is generally used for research that reports the conditions or practices of the present time. In the present study, a systematic random sampling method was used to select a representative sample: 50 students were selected from 2 schools, of whom 25 were boys and 25 were girls. The study found a) a significant difference in the level of adjustment between male and female students; b) a non-significant difference in the level of emotional intelligence between male and female students; c) a non-significant relationship between adjustment and emotional intelligence among male students; and d) a significant relationship between adjustment and emotional intelligence among female students. The results indicated that students who obtain high scores on emotional intelligence tests also show a high level of adjustment. Measures should be adopted to improve and sustain students' emotional intelligence throughout their studies. Adolescent students are prone to many problems: physical, social and psychological. They need a congenial home atmosphere so that they grow into full-fledged citizens of our country. Understanding this helps in the development of personality, which leads to better learning situations and better thinking capacities and, in turn, enhances adjustment and achievement along with a better perception of self.
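The two statistical questions of the study can be illustrated with a minimal sketch (synthetic scores, not the study's data): an independent-samples t-test for the gender difference in adjustment, and a Pearson correlation between emotional intelligence and adjustment:

```python
# A minimal sketch (synthetic data, 25 boys and 25 girls as in the sample):
# t-test for the gender difference in adjustment and the EI-adjustment correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ei_boys = rng.normal(100, 12, 25)
adj_boys = 0.5 * ei_boys + rng.normal(0, 10, 25)    # adjustment scores, boys
ei_girls = rng.normal(101, 12, 25)
adj_girls = 0.5 * ei_girls + rng.normal(0, 10, 25)  # adjustment scores, girls

t, p = stats.ttest_ind(adj_boys, adj_girls)         # gender difference in adjustment
print(f"t = {t:.2f}, p = {p:.3f}")

r, p = stats.pearsonr(ei_boys, adj_boys)            # EI vs adjustment, boys
print(f"r = {r:.2f}, p = {p:.3f}")
```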

Keywords: adjustment, education, emotional intelligence, students

Procedia PDF Downloads 132
728 The Effect of Nanocomposite on the Release of Imipenem on Bacteria Causing Infections with Implants

Authors: Mohammad Hossein Pazandeh, Monir Doudi, Sona Rostampour Yasouri

Abstract:

The prudent administration of antibiotics aims to avoid side effects and microbial resistance to antibiotics. Methods for the local administration of antibiotics are especially required for localized infections caused by bacterial colonization of medical devices or implant materials. Among the wide variety of materials used as drug delivery systems, bioactive glasses (BG) are widely used in regenerative medicine. This work comprised, first, the production of a bioactive glass/nickel oxide/tin dioxide nanocomposite using the sol-gel method; then, the study of the controlled release of imipenem from the double metal oxide/bioactive glass nanocomposite; and finally, the investigation of the antibacterial properties of the nanocomposite against a number of implant-related infectious agents. In this study, BG/SnO2 and BG/NiO single systems with different metal oxide contents, as well as BG/NiO/SnO2 nanocomposites, were synthesized by sol-gel as drug carriers for tetracycline and imipenem. These two antibiotics are widely used for osteomyelitis because of their favorable penetration and bactericidal effect on all the probable osteomyelitis pathogens. The antibacterial activity of the synthesized samples was evaluated against Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa as model bacteria using the disk diffusion method. Modification of the BG with metal oxides conferred antibacterial properties on the samples containing metal oxide, with the highest efficiency for the nanocomposite. The bioactivity of all samples was assessed by determining surface morphology, structural changes and compositional changes using scanning electron microscopy (SEM), FTIR and X-ray diffraction (XRD) spectroscopy, respectively, after soaking in simulated body fluid (SBF) for 28 days; hydroxyapatite formation was clearly observed as a measure of bioactivity. The BG nanocomposite sample was then loaded with the two antibiotics separately, and their release profiles were studied. The nanocomposite showed slow and continuous drug release over a period of 72 hours, which is desirable for a drug delivery system. The antibiotic-loaded nanocomposite retained its antibacterial properties, showing an inactivating effect against the bacteria under test. The modified bioactive glass, which forms hydroxyapatite, releases the drug in a controlled manner and acts against bacterial infections, can be introduced as a scaffold for bone implants for biomedical applications after clinical trials. Given the formation of biofilms by infectious bacteria adhering to the surfaces of implants, medical devices, etc., and the complications of traditional methods, solving the problems caused by these microorganisms in the technical and biomedical industries was one of the motivations for this research.
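As an illustration of how such a 72-hour release profile is typically characterised (illustrative data, not the study's measurements), the sketch below fits cumulative release to the Korsmeyer-Peppas model Q(t) = k·t^n:

```python
# A minimal sketch (synthetic release data): fitting a cumulative release
# profile to the Korsmeyer-Peppas model Q(t) = k * t**n.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 4, 8, 12, 24, 48, 72], dtype=float)     # hours
Q = np.array([8, 12, 18, 26, 31, 45, 62, 74], dtype=float)  # % released (illustrative)

model = lambda t, k, n: k * t**n
(k, n), _ = curve_fit(model, t, Q, p0=(5.0, 0.5))

# For a thin slab, n near 0.5 suggests Fickian diffusion; higher values
# suggest anomalous (non-Fickian) transport.
print(f"k = {k:.2f}, n = {n:.2f}")
```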

Keywords: antibacterial, bioglass, drug delivery system, sol-gel

Procedia PDF Downloads 62
727 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started developing comics-supported curricula. In the mid-1950s, this line of research was cut short by the work of psychiatrist Fredric Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in literacy education research as the focus shifts from traditional print literacy to multimodal and media literacies, and comics are now believed to be an effective resource for bridging the gap between these different types of literacy. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to establish the exact medium being examined, and the conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and the different comics conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education: each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines the use of comics in his computer science and technology classroom, laying out the theories he draws from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. Integrating comics into the classroom will not solve all issues related to literacy education; rather, comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphics novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 300
726 Arc Plasma Application for Solid Waste Processing

Authors: Vladimir Messerle, Alfred Mosse, Alexandr Ustimenko, Oleg Lavrichshev

Abstract:

Hygienic and sanitary studies of typical medical-biological waste carried out in Kazakhstan, Russia, Belarus and other countries show that its risk to the environment is much higher than that of most chemical wastes. For example, the toxicity of solid waste (SW) containing cytotoxic drugs and antibiotics is comparable to that of radioactive waste of high and medium activity. This report presents the results of a thermodynamic analysis of the thermal processing of SW and of experiments at the plasma unit developed for SW processing. Thermodynamic calculations showed that the maximum yield of synthesis gas from plasma gasification of SW in air and steam media is achieved at a temperature of 1600 K. Air plasma gasification of SW can produce a high-calorific synthesis gas with a concentration of 82.4% (CO – 31.7%, H2 – 50.7%), and steam plasma gasification a concentration of 94.5% (CO – 33.6%, H2 – 60.9%). The specific heat of combustion of the synthesis gas produced by air gasification amounts to 14267 kJ/kg, and by steam gasification to 19414 kJ/kg. At the optimal temperature (1600 K), the specific power consumption for air gasification of SW is 1.92 kWh/kg, and for steam gasification 2.44 kWh/kg. An experimental study was carried out in a plasma reactor, a device of periodic action. An arc plasma torch of 70 kW electric power is used for SW processing. The SW feed rate was 30 kg/h, and the flow of plasma-forming air was 12 kg/h. Under the influence of the air plasma flame, the weight-average temperature in the chamber reaches 1800 K. Gaseous products are taken out of the reactor into the flue gas cooling unit, while the condensed products accumulate in the slag formation zone. The cooled gaseous products enter the gas purification unit, after which they are supplied to the analyzer via the gas sampling system. The ventilation system maintains a negative pressure in the reactor of up to 10 mm of water column. Condensed products of SW processing are removed from the reactor after it is stopped. From the experiments on SW plasma gasification, the reactor operating conditions were determined, the exhaust gas was analyzed, and the residual carbon content in the slag was determined. Gas analysis showed the following composition of the gas at the exit of the gas purification unit (vol.%): CO – 26.5, H2 – 44.6, N2 – 28.9. The total concentration of the syngas was 71.1%, which agreed well with the thermodynamic calculations; the discrepancy between experiment and calculation in the yield of the target syngas did not exceed 16%. The specific power consumption for SW gasification in the plasma reactor, according to the experimental results, amounted to 2.25 kWh/kg of working substance. No harmful impurities were found in either the gaseous or the condensed products of SW plasma gasification. Comparison of experimental results and calculations showed good agreement. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.607.21.0118, project RFMEF160715X0118).
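As a back-of-the-envelope cross-check (standard handbook values, not figures from the paper), the heating value of the measured exit gas can be estimated from its composition:

```python
# A back-of-the-envelope check: heating value of the measured exit gas
# (CO 26.5%, H2 44.6%, N2 28.9% by volume) from standard handbook lower
# heating values and densities at normal conditions.
frac = {"CO": 0.265, "H2": 0.446, "N2": 0.289}          # volume fractions
lhv = {"CO": 12.63, "H2": 10.78, "N2": 0.0}             # MJ/Nm^3
rho = {"CO": 1.250, "H2": 0.0899, "N2": 1.251}          # kg/Nm^3

lhv_vol = sum(frac[g] * lhv[g] for g in frac)           # MJ/Nm^3 of mixture
rho_mix = sum(frac[g] * rho[g] for g in frac)           # kg/Nm^3 of mixture

print(f"LHV = {lhv_vol:.2f} MJ/Nm^3 = {lhv_vol / rho_mix:.1f} MJ/kg")
```

The resulting value per kilogram is lower than the 14267 kJ/kg computed for the thermodynamically optimal gas, as expected given the nitrogen dilution of the experimental product.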

Keywords: coal, efficiency, ignition, numerical modeling, plasma-fuel system, plasma generator

Procedia PDF Downloads 250
725 Implementation of a PDMS Microdevice for the Improved Purification of Circulating MicroRNAs

Authors: G. C. Santini, C. Potrich, L. Lunelli, L. Vanzetti, S. Marasso, M. Cocuzza, C. Pederzolli

Abstract:

The relevance of circulating miRNAs as non-invasive biomarkers for several pathologies is nowadays undoubtedly clear, as they have been found to have both diagnostic and prognostic value, adding fundamental information to patients' clinical picture. The availability of these data, however, relies on a time-consuming process spanning from sample collection and processing to data analysis. In light of this, strategies able to ease this procedure are in high demand, and considerable effort has been made in developing lab-on-a-chip (LOC) devices able to speed up and standardise the bench work. In this context, a very promising polydimethylsiloxane (PDMS)-based microdevice, which integrates the processing of the biological sample (i.e., purification of extracellular miRNAs) and reverse transcription, was previously developed in our lab. In this study, we aimed to improve the miRNA extraction performance of this microdevice by increasing the ability of its surface to adsorb extracellular miRNAs from biological samples. For this purpose, we focused on modulating two properties of the material: roughness and charge. PDMS surface roughness was modulated by casting with several templates (terminated with silicon oxide coated by a thin anti-adhesion aluminum layer), followed by a panel of curing conditions; atomic force microscopy (AFM) was employed to estimate changes at the nanometric scale. To modify the surface charge, we functionalized PDMS with different mixes of positively charged 3-aminopropyltrimethoxysilane (APTMS) and neutral poly(ethylene glycol) silane (PEG). The surface chemical composition was characterized by X-ray photoelectron spectroscopy (XPS), and the number of exposed primary amines was quantified with the reagent sulfosuccinimidyl-4-o-(4,4-dimethoxytrityl) butyrate (s-SDTB). As our final end point, the adsorption achieved under all these different conditions was assessed by fluorescence microscopy after incubating a synthetic fluorescently labeled miRNA. Our preliminary analysis identified casting on thermally grown silicon oxide, followed by a curing step at 85°C for 1 hour, as the most efficient technique for obtaining a PDMS surface roughness on the nanometric scale able to trap miRNA. In addition, functionalisation with 0.1% APTMS and 0.9% PEG was found to be a necessary step to significantly increase the amount of miRNA adsorbed on the surface and therefore available for further steps such as on-chip reverse transcription. These findings show a substantial improvement in the extraction efficiency of our PDMS microdevice, ultimately an important step forward in the development of an innovative, easy-to-use and integrated system for the direct purification of less abundant circulating miRNAs.
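As a small illustration of quantifying the fluorescence read-out (illustrative numbers only, not the study's measurements), adsorbed miRNA can be estimated from a linear calibration curve:

```python
# A minimal sketch (illustrative numbers): converting fluorescence readings
# into adsorbed miRNA via a linear calibration curve within its linear range.
import numpy as np

cal_pmol = np.array([0.0, 0.5, 1.0, 2.0, 4.0])      # standards, pmol
cal_fluo = np.array([40, 310, 590, 1150, 2260])     # a.u., assumed linear range

slope, intercept = np.polyfit(cal_pmol, cal_fluo, 1)

sample_fluo = np.array([820.0, 1460.0])             # hypothetical treated surfaces
adsorbed = (sample_fluo - intercept) / slope
print("adsorbed miRNA (pmol):", np.round(adsorbed, 2))
```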

Keywords: circulating miRNAs, diagnostics, Lab-on-a-chip, polydimethylsiloxane (PDMS)

Procedia PDF Downloads 318