Search results for: simple reaction time
1890 Multiple Intelligences as Basis for Differentiated Classroom Instruction in Technology Livelihood Education: An Impact Analysis
Authors: Sheila S. Silang
Abstract:
This research seeks to make an impact analysis of multiple intelligences as the basis for differentiated classroom instruction in TLE. It also addresses the felt need for TLE to be taught effectively, exhausting all possible means. This study examines the effect of giving differentiated instruction according to the ability of the students with respect to the following objectives: 1. enhancement of students' technological skills; 2. improvement of learning potential; 3. better linkage between school and community in soliciting different learning devices and materials for the learners' academic progress. General Luna, Quezon is composed of twenty-seven barangays and has only two public high schools. We are aware that the K-12 curriculum is focused on providing sufficient time for mastery of concepts and skills, developing lifelong learners, and preparing graduates for tertiary education, middle-level skills development, employment, and entrepreneurship. The challenge is, with TLE offering a vast area of specializations, how would multiple intelligences play their vital role as a basis for classroom instruction in meeting the requirements of the said curriculum? 1. To what extent do the respondent students manifest the following types of intelligences: Visual-Spatial, Body-Kinesthetic, Musical, Interpersonal, Intrapersonal, Verbal-Linguistic, Logical-Mathematical and Naturalistic? 2. What media should be used appropriate to the students' learning style: Visual, Printed Words, Sound, Motion, Color or Realia? 3. What is the impact of multiple intelligences as a basis for differentiated instruction in T.L.E. based on the following student abilities: learning characteristics, reading ability and performance? 4. To what extent do the intelligences of the students relate to their academic performance? The following were the findings derived from the study: in consideration of the vast areas of study of TLE, and the importance it plays in the school curriculum, coinciding with the expectation of turning students into technologically competent, contributing members of society, either in the field of technical/vocational expertise or entrepreneurial-based competencies, as well as the government's concern for it, we visualize TLE classroom teachers making use of multiple intelligences as the basis for differentiated classroom instruction in teaching the subject. Somehow, a multiple-intelligence profile, such as the Linguistic, Logical-Mathematical, Bodily-Kinesthetic, Interpersonal, Intrapersonal, and Spatial abilities that an individual student may or may not have, can be a basis for a TLE teacher's instructional method or design.
Keywords: education, multiple, differentiated classroom instruction, impact analysis
Procedia PDF Downloads 445
1889 The Relationship between Environmental Factors and Purchasing Decisions in the Residential Market in Sweden
Authors: Agnieszka Zalejska-Jonsson
Abstract:
The Swedish Green Building Council (SGBC) was established in 2009. Since then, over 1000 buildings have been certified, of which approximately 600 are newly produced and 340 are residential buildings. During that time, approximately 2000 apartment buildings have been built in Sweden. This means that over a five-year period 17% of residential buildings have been certified according to the environmental building scheme. The certification of a building is not a guarantee of environmental progress, but it gives us an indication of the extent of the progress. The overarching aim of this study is to investigate the factors behind the relatively slow evolution of the green residential housing market in Sweden. The intention is to examine stated willingness to pay (WTP) for green and low energy apartments, and to explore which factors have a significant effect on stated WTP among apartment owners. A green building was defined as a building certified according to the environmental scheme and a low energy building as a building designed and constructed with high energy efficiency goals. Data for this study were collected through a survey conducted among occupants of comparable apartment buildings: two green and one conventional. The total number of received responses was 429: green A (N=160), response rate 42%; green B (N=138), response rate 35%; and conventional (N=131), response rate 43%. The study applied a quasi-experimental method. Survey responses regarding factors affecting the purchase of an apartment, stated WTP and environmental literacy have been analysed using descriptive statistics, the Mann-Whitney (rank sum) test and logistic models. Comments received from respondents have been used for further interpretation of the results. Results indicate that environmental education has a significant effect on stated WTP. Occupants who declared higher WTP showed a higher level of environmental literacy and indicated that energy efficiency was one of the important factors that affected their decision to buy an apartment. Generally, the respondents were more likely to pay more for low energy buildings than for green buildings. This is to a great extent a consequence of rational customer behaviour and the difficulty of apprehending the meaning of green building certification. The analysis shows that people living in green buildings indicate higher WTP for both green and low energy buildings, the difference being statistically significant. It is concluded that growth in the green housing market in Sweden might be achieved if policymakers and developers engage in active education in the environmental labelling system. The demand for green buildings is more likely to increase when the difference between green and conventional buildings is easily understood and information is not only delivered by the estate agent, but is part of an environmental education programme.
Keywords: consumer, environmental education, housing market, stated WTP, Sweden
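A minimal sketch of the kind of analysis named above (Mann-Whitney test plus a logistic model on stated WTP), using synthetic data; the group sizes mirror the abstract, but the scores, predictors and coding are assumptions for illustration only.

```python
# Illustrative sketch, not the authors' code: compare stated WTP between green and
# conventional occupants, then relate a binary WTP outcome to assumed predictors.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
wtp_green = rng.integers(1, 5, size=160)          # hypothetical Likert WTP scores
wtp_conventional = rng.integers(1, 5, size=131)

u_stat, p_value = mannwhitneyu(wtp_green, wtp_conventional, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Hypothetical predictors: environmental-literacy score and building type.
X = np.column_stack([
    rng.integers(1, 5, size=291),                 # literacy score (assumed scale)
    np.r_[np.ones(160), np.zeros(131)],           # 1 = green-building occupant
])
y = (np.r_[wtp_green, wtp_conventional] >= 3).astype(int)   # 1 = willing to pay more

model = LogisticRegression().fit(X, y)
print("odds ratios:", np.exp(model.coef_))
```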
Procedia PDF Downloads 241
1888 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to integrate with technology. Never before have information and communication technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution aims to present an application of the Think-Pair-Share (TPS) learning method in an ICT third-age virtual classroom with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement and promote active learning and online social interaction. Research Question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online third-age ICT introductory course? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through online questionnaires. Two were administered, one at the end of the first activity and another at the end of the course. They consisted of five and three close-ended questions, respectively. The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and the field for free comments/suggestions. Results: Results show that groups perform better than individual students (with scores greater by an order of magnitude) and that most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Despite this, our experimentation has a number of limitations. First of all, the results highlight the need for more data to be able to perform a statistical analysis in order to determine the effectiveness of this methodology in terms of student engagement and learning outcomes as a future direction.
Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share
Procedia PDF Downloads 188
1887 Analysis of Waterjet Propulsion System for an Amphibious Vehicle
Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian
Abstract:
This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with the conventional waterjet design, the inlet duct is straight, allowing water entry parallel and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, it is an internal flow system. The major difference between the propeller and the waterjet occurs at the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately large relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion is carried out by adapting the axial flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. With the varying environmental conditions, as well as the necessity of high discharge and low head along with the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem of the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per the hydrodynamic pump design. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle and the steering device. The pump further comprises an impeller and a stator. Analytical and numerical approaches such as a RANSE solver have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with efficiency and power curves. The modeling of the impeller is performed using the rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. The appropriate boundary conditions are applied for the domain, and domain size and grid dependence studies are carried out.
Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion
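Since the abstract stresses that a waterjet is a volume-flow device rather than a propeller, the textbook momentum relation for gross jet thrust may help fix ideas; this is a standard relation, not a formula taken from the paper.

```latex
\[
  T \;=\; \rho\, Q \left( V_j - V_i \right), \qquad Q \;=\; A_n V_j ,
\]
% T: thrust, \rho: water density, Q: volume flow rate through the pump,
% V_j: jet velocity at the nozzle exit, V_i: inlet (vehicle) velocity,
% A_n: nozzle exit area -- thrust is therefore set jointly by flow rate and nozzle area.
```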
Procedia PDF Downloads 228
1886 Examining Employee Social Intrapreneurial Behaviour (ESIB) in Kuwait: Pilot Study
Authors: Ardita Malaj, Ahmad R. Alsaber, Bedour Alboloushi, Anwaar Alkandari
Abstract:
Organizations worldwide, particularly in Kuwait, are concerned with implementing a progressive workplace culture and fostering social innovation behaviours. The main aim of this research is to examine and establish a thorough comprehension of the relationship between an inventive organizational culture, employee intrapreneurial behaviour, authentic leadership, employee job satisfaction, and employee job commitment in the manufacturing sector of Kuwait, which is a developed economy. The literature review analyses the core concepts and their related areas by scrutinizing their definitions, dimensions, and importance to uncover any deficiencies in existing research. The examination of relevant research uncovered major gaps in understanding. This study examines the reliability and validity of a newly developed questionnaire designed to identify the appropriate applications for a large-scale investigation. A preliminary investigation was carried out with a sample of 36 respondents selected randomly from a pool of 223 samples. SPSS was utilized to calculate the percentages of the demographic characteristics of the participants, assess the credibility of the measurements, evaluate the internal consistency, validate all agreements, and determine Pearson's correlation. The study's results indicated that the majority of participants were male (66.7%), aged between 35 and 44 (38.9%), and possessed a bachelor's degree (58.3%). Approximately 94.4% of the participants were employed full-time. 72.2% of the participants were employed in the electrical, computer, and ICT sector, whilst 8.3% worked in the metal industry. Out of all the departments, the human resource department had the highest level of engagement, making up 13.9% of the total. Most participants (36.1%) possessed intermediate or advanced levels of experience, whilst 21% were classified as entry-level. Furthermore, 8.3% of individuals were categorized as first-level management, 22.2% as middle management, and 16.7% as executive or senior management. Around 19.4% of the participants had over a decade of professional experience. The Pearson's correlation coefficients for all 5 components ranged from 0.4009 to 0.7183. The results indicate that all elements of the questionnaire were effectively verified, with a Cronbach alpha factor predominantly exceeding 0.6, which is the criterion commonly accepted by researchers. Therefore, work on the larger scope of testing and analysis could continue.
Keywords: pilot study, ESIB, innovative organizational culture, Kuwait, validation
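A minimal sketch of the reliability and validity checks mentioned above (Cronbach's alpha per construct and Pearson correlations between construct scores), using synthetic Likert responses; the data layout, item counts and scale are assumptions, not the study's questionnaire.

```python
# Minimal sketch with assumed data: Cronbach's alpha and Pearson correlation
# of construct scores for a pilot questionnaire.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
construct_a = rng.integers(1, 6, size=(36, 5))   # 36 pilot respondents, 5 items (assumed)
construct_b = rng.integers(1, 6, size=(36, 5))

print("alpha(A) =", round(cronbach_alpha(construct_a), 3))
r, p = pearsonr(construct_a.mean(axis=1), construct_b.mean(axis=1))
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```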
Procedia PDF Downloads 32
1885 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation
Authors: Mohammad Abu-Shaira, Weishi Shi
Abstract:
Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension for online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM, the “Long Short-Term Memory Stream Cruise Control Method”, a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, assessing their performance across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation in model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression
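As a rough companion to the description above, here is a generic sketch of drift detection with a variable (mean plus k standard deviations) error threshold driving recalibration in an online regression loop. It is not the LSTM-SCCM implementation; the model methods predict, recalibrate and learn_one are assumed placeholder APIs.

```python
# Hedged sketch, not LSTM-SCCM: monitor a rolling prediction error, flag drift
# when it breaches an adaptive threshold, and trigger recalibration.
from collections import deque
import numpy as np

class DriftMonitor:
    def __init__(self, window=100, k=3.0):
        self.errors = deque(maxlen=window)   # rolling error window
        self.k = k                           # threshold sensitivity

    def update(self, error: float) -> bool:
        """Return True when the new error exceeds mean + k*std of recent errors."""
        drift = False
        if len(self.errors) >= 10:
            mu, sigma = np.mean(self.errors), np.std(self.errors)
            drift = error > mu + self.k * sigma
        self.errors.append(error)
        return drift

def online_loop(model, stream, monitor):
    for x, y in stream:                       # one example at a time
        y_hat = model.predict(x)              # assumed model API
        if monitor.update(abs(y - y_hat)):    # drift magnitude could scale the response
            model.recalibrate()               # short-term adjustment (assumed API)
        model.learn_one(x, y)                 # incremental update (assumed API)
```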
Procedia PDF Downloads 11
1884 Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective
Authors: Kalangi Rodrigo
Abstract:
Sri Lanka is known as an island with a diverse variety of prehistoric occupation across ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s. It is mainly concerned with the reconstruction, from available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. Sri Lanka has dealt with this subject, and considerable research has already been undertaken. The fossil and material record of Sri Lanka's Wet Zone tropical forests continues from c. 38,000–34,000 ybp. This early and persistent human fossil, technical, and cultural florescence, as well as a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and strength of early human tropical forest encounters. Excavations carried out in prehistoric caves in the low-country wet zone have shown that in the last 50,000 years the temperature change in the lowland rainforests has not exceeded 5°C. Taking Batadombalena alone, all seven layers have yielded an uninterrupted occurrence of Acavus sp. and Canarium zeylanicum, a plant that grows in the middle of the rainforest. Acavus, which is highly sensitive to rainforest ecosystems, has been well documented in many of the lowland caves, confirming that the wet-zone rainforest environment has remained intact at least for the last 50,000 years. If the dry and arid conditions in the upper hill regions had affected the wet zone, the Tufted Gray Langur (Semnopithecus priam) should also be found in the prehistoric caves of the wet zone under such a dry climate. However, the bone assemblages from the low-country wet zone do not include any fragments belonging to the Tufted Gray Langur, and prehistoric human consumption is instead represented by the purple-faced leaf monkey (Trachypithecus vetulus) and the Toque Macaque (Macaca sinica). The skeletal remains of Lyriocephalus scutatus, a full-time resident of rain forests, have also been recorded among the lowland caves. In zoological terms, however, these remains may be those of the Barking deer (Muntiacus muntjak), which is currently found in the wet zone. For further investigation, mtDNA testing of genetic diversity (bottleneck effect) and pollen studies from the lowland caves should determine whether the wet-zone climate has persisted over the last 50,000 years, or whether the dry weather that affected the mountainous region also invaded the wet zone.
Keywords: paleoecology, prehistory, zooarchaeology, reconstruction, palaeo-climate
Procedia PDF Downloads 140
1883 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique
Authors: Nishant Shrivastava, D. K. Sehgal
Abstract:
In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacement calculated at the nodes is sufficiently accurate, but the stresses calculated at the nodes are not. Therefore, the accuracy of stress computation in FEM models based on the displacement technique is obviously a matter of concern for the computational time in shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on the boundary of the element, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, and some points have also been located at the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points in the element. The main aim is therefore to evolve a generalized procedure for the determination of the optimal stress locations inside the element as well as at its boundaries, and to verify the same with results from numerical experimentation. The results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as at the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. Also, at the boundaries, the optimal stress locations are at the midpoints of the element boundary and at a distance of 0.577 from the origin in both directions. The above findings were verified through experimentation and authenticated. For numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained by using the same order of 25-noded quadratic Lagrangian elements, which are considered as the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as at the boundaries, and conclusions were drawn. After numerical verification it is noted that, in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated along the boundary line enclosed by the 0.577 midpoints are also very good, and the error found is very small. When sampling points move away from these points, the error in that zone increases rapidly. Thus, it is established that there are unique points at the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
Keywords: finite elements, Lagrangian, optimal stress location, serendipity
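For context (a standard finite element result, not a claim from the paper): the reported coordinate 0.577 coincides with 1/√3, the location of the 2x2 Gauss-Legendre quadrature points in the parent element, which are widely cited as the superconvergent (Barlow) stress-sampling points for quadratic elements.

```latex
\[
  \xi,\ \eta \;=\; \pm\frac{1}{\sqrt{3}} \;\approx\; \pm 0.577,
  \qquad
  \int_{-1}^{1} f(\xi)\, d\xi \;\approx\; f\!\left(-\tfrac{1}{\sqrt{3}}\right) + f\!\left(\tfrac{1}{\sqrt{3}}\right).
\]
```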
Procedia PDF Downloads 105
1882 A Professional Learning Model for Schools Based on School-University Research Partnering That Is Underpinned and Structured by a Micro-Credentialing Regime
Authors: David Lynch, Jake Madden
Abstract:
There exists a body of literature that reports on the many benefits of partnerships between universities and schools, especially in terms of teaching improvement and school reform. This is because such partnerships can build significant teaching capital by deepening and expanding the skillsets and mindsets needed to create the connections that support ongoing and embedded teacher professional development and career goals. At the same time, this literature is critical of such initiatives when the partnership outcomes are short-term or one-sided, misaligned to fundamental problems, and not expressly focused on building the desired teaching capabilities. In response to this situation, research conducted by Professor David Lynch and his TeachLab research team has begun to shed light on the strengths and limitations of school/university partnerships via the identification of key conceptual elements that appear to act as critical partnership success factors. These elements are theorised as an interplay between professional knowledge acquisition, readiness, talent management and organisational structure. However, knowledge of how these elements are established, and how they manifest within the school and its teaching workforce as an overall system, remains incomplete. Research designed to more clearly delineate these elements in relation to their impact on school/university partnerships is thus required. It is within this context that this paper reports on the development and testing of a professional learning (PL) model for schools and their teachers that incorporates school-university research partnering within a systematic, whole-of-school PL strategy that is underpinned and structured by a micro-credentialing (MC) regime. MC involves learning a narrowly focused certificate (a micro-credential) in a specific topic area (e.g., 'How to Differentiate Instruction for English as a Second Language Students'), embedded in the teacher's day-to-day teaching work. The use of MC is viewed as important to the efficacy and sustainability of teacher PL because it (1) provides an evidence-based framework for teacher learning, (2) has the ability to promote teacher social capital and (3) engenders lifelong learning by keeping professional skills current in a manner embedded in, and seamless with, daily work. The associated research is centred on a primary school in Australia (P-6) that acted as an arena to co-develop, test/investigate and report on outcomes for teacher PL that uses MC to support a whole-of-school partnership with a university.
Keywords: teaching improvement, teacher professional learning, talent management, education partnerships, school-university research
Procedia PDF Downloads 81
1881 Revealing the Sustainable Development Mechanism of Guilin Tourism Based on Driving Force/Pressure/State/Impact/Response Framework
Authors: Xiujing Chen, Thammananya Sakcharoen, Wilailuk Niyommaneerat
Abstract:
China's tourism industry is in a state of shock and recovery, as COVID-19 has brought great impact and challenges to the tourism industry. The theory of sustainable development originates from the contradiction between increasing awareness of environmental protection and the pursuit of economic interests. The sustainable development of tourism should consider social, economic, and environmental factors and develop tourism in a planned and targeted way, viewing the situation as a whole. Guilin is one of the popular tourist cities in China. However, several problems exist in Guilin tourism, such as the low quality of scenic spot construction and the low efficiency of tourism resource development. Because of this poor management, Guilin's tourism industry is facing problems such as crowding pressure from tourist supply and demand. According to data from 2009 to 2019, the degree of sustainable development of Guilin tourism has changed. This research aimed to evaluate the state of sustainable development of Guilin tourism using the DPSIR (driving force/pressure/state/impact/response) framework and to provide suggestions and recommendations for sustainable development in Guilin. An improved TOPSIS (technique for order preference by similarity to an ideal solution) model based on entropy weights is applied in the quantitative analysis to examine the mechanisms of sustainable tourism development in Guilin. The DPSIR framework organizes indicators into five sub-categories, under which twenty-eight indicators related to sustainable aspects of Guilin tourism are classified. The study analyzed and summarized the economic, social, and ecological effects generated by tourism development in Guilin from 2009-2019. The results show that the conversion of tourism development in Guilin into regional economic benefits is more efficient than its conversion into social benefits. Thus, tourism development is an important driving force of Guilin's economic growth. In addition, the study also analyzed the static weights of the 28 indicators of sustainable tourism development in Guilin and ranked them from largest to smallest. It was then found that the economic and social factors related to tourism revenue occupy the highest weight, which means that the economic and social development of Guilin can influence the sustainable development of Guilin tourism to a greater extent. Therefore, there is a two-way causal relationship between tourism development and economic growth in Guilin. At the same time, ecological development-related indicators also have relatively large weights, so ecological and environmental resources also have a great influence on the sustainable development of Guilin tourism.
Keywords: DPSIR framework, entropy weights analysis, sustainable development of tourism, TOPSIS analysis
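A minimal sketch of the entropy-weighted TOPSIS calculation named above, using synthetic numbers rather than the Guilin indicator data; the 11x28 matrix simply stands in for a 2009-2019 series of 28 indicators, and all values treated as benefit-type are an assumption for illustration.

```python
# Illustrative entropy-weighted TOPSIS on synthetic data (not the study's dataset).
import numpy as np

def entropy_weights(X):
    """X: alternatives (years) x indicators matrix of positive values."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1 - E                                  # degree of diversification
    return d / d.sum()

def topsis_closeness(X, weights, benefit):
    """benefit: boolean array, True where larger indicator values are better."""
    Z = X / np.linalg.norm(X, axis=0)          # vector-normalise each column
    V = Z * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # relative closeness per year

X = np.random.default_rng(2).uniform(1, 10, size=(11, 28))   # 2009-2019, 28 indicators
w = entropy_weights(X)
print(np.round(topsis_closeness(X, w, benefit=np.ones(28, dtype=bool)), 3))
```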
Procedia PDF Downloads 98
1880 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows
Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman
Abstract:
The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are amongst the detection methods and techniques that are easy to use. A micro analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro analyzer based on the concept of "Lab on a Chip" presents advantages over other, non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less dangerous way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no fewer than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than critical have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. Computational simulations were carried out using a finite element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from the entrance to the exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were found. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer
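As a rough illustration of how stretching can be quantified from tracked particles, the sketch below advects synthetic particle pairs through a placeholder 2D velocity field and reports the growth of their separation. It is not the paper's CFD model; the velocity field, time step and pair spacing are all assumptions for illustration.

```python
# Minimal sketch: quantify stretching as the growth of separation between
# initially close particle pairs advected through an assumed 2D flow field.
import numpy as np

def velocity(p, t):
    """Placeholder analytical field standing in for the simulated electrokinetic flow."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([np.sin(np.pi * x) * np.cos(np.pi * y + 0.3 * t),
                     -np.cos(np.pi * x) * np.sin(np.pi * y + 0.3 * t)], axis=-1)

def advect(p0, t_end=2.0, dt=1e-3):
    p, t = p0.copy(), 0.0
    while t < t_end:                 # simple explicit Euler particle tracking
        p += dt * velocity(p, t)
        t += dt
    return p

rng = np.random.default_rng(3)
seeds = rng.uniform(0.0, 1.0, size=(200, 2))
pairs0 = np.stack([seeds, seeds + 1e-4], axis=1)      # initially close pairs
pairs1 = advect(pairs0)
stretch = np.linalg.norm(pairs1[:, 1] - pairs1[:, 0], axis=1) / 1e-4
print("mean stretching factor:", round(float(stretch.mean()), 2))
```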
Procedia PDF Downloads 126
1879 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, obtaining an empty mold that is later filled with molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed with PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows more detailed and smaller pieces to be obtained than the FDM. Such small models are quite difficult and complex to melt using the lost-wax technique of Ceramic Shell casting. As alternatives, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of lost wax that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt the model and fire them. Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute all the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; in this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and resin, and first tests are being done to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
1878 Assessment of Drinking Water Contamination from the Water Source to the Consumer in Palapye Region, Botswana
Authors: Tshegofatso Galekgathege
Abstract:
Poor water quality is of great concern to human health as it can cause disease outbreaks. A standard practice today, in developed countries, is that people should be provided with safe, reliable drinking water, as safe drinking water is recognized as a basic human right and a cost-effective measure for reducing disease. Over 1.1 billion people worldwide lack access to a safe water supply, and as a result the majority are forced to use polluted surface water or groundwater. It is widely accepted that our water supply systems are susceptible to intentional or accidental contamination. Water quality degradation may occur anywhere in the path that water takes from the water source to the consumer. Chlorine is believed to be an effective tool for disinfecting water, but its concentration may decrease with time due to consumption by chemical reactions. This shows that we are at risk of being infected by waterborne diseases if chlorine in water falls below the required level of 0.2-1 mg/liter, which should be maintained in the water, and if contaminants enter the water distribution system. It is believed that the lack of adequate sanitation also contributes to the contamination of water globally. This study, therefore, assesses drinking water contamination from the source to the consumer by identifying the points vulnerable to contamination from the source to the consumer in the study area. To identify the points vulnerable to contamination, water was sampled monthly from boreholes, the water treatment plant, the water distribution system (WDS), service reservoirs and consumer taps in all twenty (20) villages of the Palapye region. Sampled water was then taken to the laboratory for testing and analysis of microbiological and chemical parameters. Water quality analyses were then compared with the Botswana drinking water quality standard (BOS32:2009) to check compliance. Major sources of water contamination identified during site visits were livestock, which were found drinking stagnant water from leaking pipes in 90 percent of the villages. The soil structure around the area was negatively affected by livestock movement, as was the vegetation in the area. In conclusion, the microbiological parameters of water in the study area do not comply with drinking water standards, and some microbiological results indicated that livestock not only contribute to land degradation but also affect the quality of water. Chlorine has been applied to the water over some years, but it is not effective enough; thus, preventative measures have to be developed to keep contaminants from reaching the water. Remember: prevention is better than cure.
Keywords: land degradation, leaking systems, livestock, water contamination
Procedia PDF Downloads 352
1877 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent
Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty
Abstract:
Industries are growing day by day to boost the economy of the country. The biggest problem for industries is wastewater treatment. Releasing this wastewater directly into rivers is harmful to human life and a threat to aquatic life. These industrial effluents contain many dissolved solids, organic/inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are typical industrial effluents, and they are more challenging to degrade in an eco-friendly way. Many advanced techniques, such as electrochemical treatment, oxidation processes, and valorization, have been applied to industrial wastewater treatment, but these are not cost-effective. Degradation of industrial effluent is complicated compared to commercially available pollutants (dyes) like methylene blue, methyl orange, rhodamine B, etc. TiO₂ is one of the most widely used photocatalysts, which can degrade organic compounds using solar light and the moisture available in the environment (organic compounds are converted to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because of its low cost, non-toxicity, high availability, and chemical and physical stability in the atmosphere. This study mainly focused on valorizing the mineralogical product TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of the commercially available Degussa P-25 TiO₂. It was found that this mineralogical TiO₂ has the best photocatalytic properties (particle shape - spherical, size - 30±5 nm, surface area - 98.19 m²/g, bandgap - 3.2 eV, phase - 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids), ICP-OES (inductively coupled plasma – optical emission spectroscopy), a CHNS (carbon, hydrogen, nitrogen, and sulfur) analyzer, and FT-IR (Fourier-transform infrared spectroscopy). It was observed that it contains high sulfur (S=11.37±0.15%), organic compounds (C=4±0.1%, H=70.25±0.1%, N=10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the industrial mineralogical product TiO₂. In this study, the industrial effluent pH value (2.5 to 10) and catalyst concentration (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with a TiO₂ loading of 150 mg. The FT-IR results and the CHNS analyzer confirmed that the sulfur and organic compounds were degraded.
Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent
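For reference, photocatalytic degradation efficiency of this kind is commonly computed from absorbance readings as (A0 - At)/A0 x 100; a small worked example with assumed absorbance values, not measurements from the study:

```python
# Worked example with assumed absorbance values (not data from the study).
A0, At = 0.50, 0.10                      # initial and post-irradiation absorbance
degradation = (A0 - At) / A0 * 100       # standard degradation-efficiency formula
print(f"degradation efficiency = {degradation:.0f}%")   # 80% in this hypothetical case
```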
Procedia PDF Downloads 117
1876 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from a high to a low dimensional space is described but not learned. Two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable over the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
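The sketch below is only a rough approximation of the idea on synthetic data: it reuses a previous embedding as the initialization of the next t-SNE run so that cluster positions stay roughly coherent between batches. It does not implement the paper's two-cost optimization, and the data and parameters are assumptions.

```python
# Rough illustration of embedding reuse with scikit-learn's t-SNE (not Index t-SNE).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
batch_1 = rng.normal(size=(500, 50))
batch_2 = batch_1 + 0.05 * rng.normal(size=(500, 50))   # same items, slight drift

emb_1 = TSNE(n_components=2, random_state=0).fit_transform(batch_1)

# Reuse emb_1 as the starting point for the next batch's embedding so the new
# layout stays close to the old one where the data has not changed much.
emb_2 = TSNE(n_components=2, init=emb_1, random_state=0).fit_transform(batch_2)

shift = np.linalg.norm(emb_2 - emb_1, axis=1).mean()
print("mean per-item displacement between successive embeddings:", round(float(shift), 2))
```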
Procedia PDF Downloads 144
1875 In silico Designing of Imidazo [4,5-b] Pyridine as a Probable Lead for Potent Decaprenyl Phosphoryl-β-D-Ribose 2′-Epimerase (DprE1) Inhibitors as Antitubercular Agents
Authors: Jineetkumar Gawad, Chandrakant Bonde
Abstract:
Tuberculosis (TB) is a major worldwide concern whose control has been exacerbated by HIV and the rise of multidrug-resistant (MDR-TB) and extensively drug-resistant (XDR-TB) strains of Mycobacterium tuberculosis. The interest in newer and faster-acting antitubercular drugs is greater than ever. Searching for potent compounds is both a need and a challenge for researchers. Here, we tried to design a lead for the inhibition of the decaprenylphosphoryl-β-D-ribose 2′-epimerase (DprE1) enzyme. Arabinose is an essential constituent of the mycobacterial cell wall. DprE1 is a flavoenzyme that converts decaprenylphosphoryl-D-ribose into decaprenylphosphoryl-2-keto-ribose, an intermediate in the biosynthetic pathway of arabinose. DprE2 then converts the keto-ribose into decaprenylphosphoryl-D-arabinose. We selected 23 compounds from the azaindole series for the computational study, and they were drawn using MarvinSketch. Ligands were prepared using the Maestro molecular modeling interface, Schrodinger, v10.5. Common pharmacophore hypotheses were developed by applying dataset thresholds to yield active and inactive sets of compounds; 326 hypotheses were developed. On the basis of the survival score, ADRRR (survival score: 5.453) was selected. The selected pharmacophore hypothesis was subjected to virtual screening, resulting in 1000 hits. Hits were prepared and docked with the protein 4KW5 (oxidoreductase inhibitor), which was downloaded in .pdb format from the RCSB Protein Data Bank. The protein was prepared using the protein preparation wizard: it was preprocessed, and the workspace was analyzed using the OPLS 2005 force field. The Glide grid was generated by picking a single atom in the molecule. Prepared ligands were docked with the prepared protein 4KW5 using Glide docking. After docking, the top five compounds were selected on the basis of the Glide score (5223, 5812, 0661, 0662, and 2945, with Glide docking scores of -8.928, -8.534, -8.412, -8.411, and -8.351, respectively). Interactions between ligand and protein were observed, specifically with HIS 132, LYS 418, TRY 230, and ASN 385. Pi-pi stacking was observed in a few compounds with the basic imidazo[4,5-b]pyridine ring. We had a basic azaindole ring in the parent compounds, but after Glide docking, we obtained compounds with imidazo[4,5-b]pyridine as the basic ring. That might be the new lead in the process of drug discovery.
Keywords: DprE1 inhibitors, in silico drug designing, imidazo [4, 5-b] pyridine, lead, tuberculosis
Procedia PDF Downloads 154
1874 Body, Experience, Sense, and Place: Past and Present Sensory Mappings of Istiklal Street in Istanbul
Authors: Asiye Nisa Kartal
Abstract:
An attempt to recognize the undiscovered bonds between the sensory experiences (intangible qualities) and the physical setting (tangible qualities) of Istiklal Street in Istanbul can be taken as the first inspiration for this study. The dramatic physical changes of Istiklal Street and their current impacts on its sensory attributes have directed this study to consider the role of the changing physical layout on sensory dimensions, which have a subtle but important role in the examination of urban places. Public places have always been subject to transformation, and in recent years the changing socio-cultural structure, economic and political movements, law and city regulations, and innovative transportation and communication activities have resulted in a controversial modification of Istanbul. As the culture, entertainment, tourism, and shopping focus of Istanbul, Istiklal Street has witnessed several stages of change in recent years. In this process, because of the projects being implemented, many buildings such as cinemas, theatres, and bookstores have been restored, moved, converted, closed or demolished, even though they have been significant elements of the qualitative value of this area. The multi-layered socio-cultural and architectural structure of Istiklal Street has thus been changing in a dramatic and controversial way. Importantly, while the physical setting of Istiklal Street has changed, the transformation has not only been spatial, socio-cultural and economic; unavoidably, the sensory dimensions of Istiklal Street, which have great importance among the intangible qualities of this area, have begun to lose their distinctive features. This has created the challenge of this research. As the main hypothesis, this study claims that the physical transformations have led to changes in the sensory character of Istiklal Street; therefore, the sensescape of Istiklal Street deserves to be recorded, decoded and promoted as expeditiously as possible to observe the sensory reflections of the physical transformations in this area. With the help of the method of 'sensewalking', an efficient research tool for generating knowledge on the sensory dimensions of an urban settlement, this study suggests a way of 'mapping' to understand how changes of the physical setting play a role in the sensory qualities of Istiklal Street that have been changed or lost over time. Basically, this research focuses on the sensory mapping of Istiklal Street from the 1990s until today, to picture, interpret and criticize the sensory mapping of Istiklal Street in the present and in the past. Through the sensory mapping of Istiklal Street, this study intends to increase awareness of the distinctive sensory qualities of places. It is worthwhile for further studies that consider the sensory dimensions of places, especially in the field of architecture.
Keywords: Istiklal street, sense, sensewalking, sensory mapping
Procedia PDF Downloads 177
1873 The Effect of Sea Buckthorn (Hippophae rhamnoides L.) Berries on Some Quality Characteristics of Cooked Pork Sausages
Authors: Anna M. Salejda, Urszula Tril, Grażyna Krasnowska
Abstract:
The aim of this study was to analyze selected quality characteristics of cooked pork sausages manufactured with the addition of sea buckthorn (Hippophae rhamnoides L.) berry preparations. The stuffings of the model sausages consisted of pork, backfat, water and additives such as curing salt and sodium isoascorbate. The functional additives used in the production process were two preparations obtained from dried sea buckthorn berries, in the form of a powder and a brew. The powder of dried berries was added in amounts of 1 and 3 g, while the water infusion replaced 50 and 100% of the ice water included in the meat product formula. Control samples were produced without functional additives. Experimental stuffings were heat treated in a water bath and stored for 4 weeks under cooled conditions (4±1ºC). Physical parameters (colour, texture profile) and technological parameters (acidity, weight losses and water activity) were estimated. The effect of sea buckthorn berry preparations on lipid oxidation during storage of the final products was determined by the TBARS method. The studies have shown that the addition of sea buckthorn preparations to meat-fatty batters significantly (P≤0.05) reduced the pH values of the sausage samples after thermal treatment. Moreover, the addition of berry powder caused significant differences (P≤0.05) in weight losses after the cooking process. Analysis of the texture profile results indicated that utilization of the infusion prepared from dried sea buckthorn berries increased the springiness, gumminess and chewiness of the final meat products. At the same time, the highest amount of sea buckthorn berry powder in the recipe caused a decrease in all measured texture parameters. Utilization of the experimental preparations significantly decreased (P≤0.05) the lightness (L* colour parameter) of the meat products. Simultaneously, the introduction of 1 and 3 grams of sea buckthorn berry powder to the meat-fatty batter increased the redness (a* parameter) of the samples under investigation. A higher content of substances reacting with thiobarbituric acid was observed in meat products produced without functional additives. It was observed that sea buckthorn berry powder added to meat-fatty batters gave higher protection against lipid oxidation in cooked sausages.
Keywords: sea buckthorn, meat products, texture, color parameters, lipid oxidation
Procedia PDF Downloads 296
1872 Effect of Manure Treatment on Furrow Erosion: A Case Study of Sagawika Irrigation Scheme in Kasungu, Malawi
Authors: Abel Mahowe
Abstract:
Furrow erosion is a major problem menacing the sustainability of irrigation in Malawi and polluting water bodies, resulting in the death of many aquatic animals. Many rivers in Malawi are drying up due to poor practices around these water bodies. Furrow erosion is one of the causes of sedimentation in these rivers; although its effect on the deterioration of these rivers is gradual and hence neglected, it has a long-term disastrous effect on water bodies. Many aquatic animals also suffer when these sediments are carried into the water bodies. An assessment of the effect of manure treatment on furrow erosion was carried out in the Sagawika irrigation scheme located in Kasungu District, in the northern part of Malawi. The soil on the field was clay loam and had just been tilled. The field had an average furrow slope of 0.2% and was divided into two blocks, A and B. Each block had 20 V-shaped furrows with a length of 10 m. Three different manures were used to construct these furrows by mixing them with the moderately moist soil, and 5 furrows from each block were constructed without manure. In each block, 5 furrows were made using a specific type of manure, and one set of five furrows in each block was made without manure treatment. The types of manure used were goat manure, pig manure, and manure from crop residues. The manure application rate was 5 kg/m. The furrows were constructed at a spacing of 0.6 m. Tomato was planted in the two blocks at a spacing of 0.15 m between rows and 0.15 m between planting stations. Irrigation water was led from the feeder canal into the irrigation furrows using siphons. The siphon discharge into each furrow was set at 1.86 L/s. The ¾ rule was used to determine the cut-off time for the irrigation cycles in order to reduce the run-off at the tail end. During each irrigation cycle, samples of the runoff water were collected at one-minute intervals and analyzed for total sediment concentration for use in estimating the total soil sediment loss, as illustrated in the sketch below. The results of the study show that a significant amount of soil is lost from soils with little organic matter, while there was a low level of erosion in furrows constructed with manure treatment within the blocks. In addition, the results show that manures differ in their ability to control erosion: pig manure proved to have a greater ability to bind the soil together than the other manures, since there was a reduction in the amount of sediment at the tail end of furrows constructed with this type of manure. The results show that manure contains organic matter which helps soil particles bind together, hence resisting the erosive force of water. The use of manure when constructing furrows in soil with little organic matter can greatly reduce erosion, hence also reducing the pollution of water bodies and improving conditions for aquatic animals.
Keywords: aquatic, erosion, furrow, soil
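A simple illustration of how the runoff samples translate into a soil-loss estimate: total loss is approximated as the sum of sediment concentration times runoff volume over the sampling intervals. The concentrations and runoff rate below are hypothetical numbers, not measurements from the study.

```python
# Worked example with hypothetical numbers (not study data): soil loss per furrow
# estimated from sediment concentrations sampled at one-minute intervals.
samples_g_per_L = [4.0, 3.2, 2.5, 2.1, 1.8]    # sediment concentration each minute
runoff_L_per_min = 20.0                        # assumed tail-end runoff rate
interval_min = 1.0                             # sampling interval

soil_loss_g = sum(c * runoff_L_per_min * interval_min for c in samples_g_per_L)
print(f"estimated soil loss: {soil_loss_g / 1000:.2f} kg per furrow")
```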
Procedia PDF Downloads 286
1871 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
In the past few years, the study of very high speed electrical drives has seen a resurgence of interest. An inventory of the number of scientific papers and patents dealing with the subject makes its relevance clear. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could otherwise tear off because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it has poor heat conduction. This results in very poor evacuation of the eddy current losses induced in the magnets by the time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat these phenomena with separate solutions (specific modulation strategies, active damping methods, ...). The purpose of this paper is to present a completely new active harmonic compensation algorithm based on an improvement of standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. Then a state of the art of available solutions is provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation study using simulations and a practical case on a high speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
Procedia PDF Downloads 395
1870 Best Practice for Post-Operative Surgical Site Infection Prevention
Authors: Scott Cavinder
Abstract:
Surgical site infections (SSI) are a known complication of any surgical procedure and are one of the most common nosocomial infections. Globally, it is estimated that 300 million surgical procedures take place annually, with an estimated 11 of every 100 surgical patients developing an infection within 30 days after surgery. The specific purpose of the project is to address the PICOT (Problem, Intervention, Comparison, Outcome, Time) question: In patients who have undergone cardiothoracic or vascular surgery (P), does implementation of a post-operative care bundle based on current EBP (I), as compared to current clinical agency practice standards (C), result in a decrease of SSI (O) over a 12-week period (T)? Synthesis of Supporting Evidence: A literature search of five databases, including citation chasing, was performed, which yielded fourteen pieces of evidence ranging from high to good quality. Four common themes were identified for the prevention of SSIs: use and removal of surgical dressings; use of topical antibiotics and antiseptics; implementation of evidence-based care bundles; and implementation of surveillance through auditing and feedback. The Iowa Model was selected as the framework to guide this project, as it is a multiphase change process which encourages clinicians to recognize opportunities for improvement in healthcare practice. Practice/Implementation: The process for this project will include recruiting post-surgical participants who have undergone cardiovascular or thoracic surgery prior to discharge at a Northwest Indiana hospital. The patients will receive education, verbal instruction, and return demonstration. The patients will be followed for 12 weeks, with wounds assessed utilizing the National Healthcare Safety Network/Centers for Disease Control (NHSN/CDC) assessment tool and compared to the SSI rate of 2021. Key stakeholders will include two cardiovascular surgeons, four physician assistants, two advanced practice nurses, a medical assistant, and patients. Method of Evaluation: Chi-square analysis will be utilized to establish statistical significance and similarities between the two groups. Main Results/Outcomes: The proposed outcome is the prevention of SSIs in the post-operative cardiothoracic and vascular patient. Implication/Recommendation(s): Implementation of standardized post-operative care bundles for the prevention of SSI in cardiovascular and thoracic surgical patients. Keywords: cardiovascular, evidence based practice, infection, post-operative, prevention, thoracic, surgery
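As an illustration of the planned chi-square evaluation, a minimal sketch comparing SSI counts in the 2021 baseline group with the care-bundle group; the counts below are invented placeholders, not project data.

```python
from scipy.stats import chi2_contingency

# Hypothetical sketch of the chi-square comparison: SSI counts in the 2021
# baseline group versus the post-bundle group. Counts are placeholders only.
#                 [SSI, no SSI]
baseline_2021 = [9, 141]
bundle_group = [3, 147]

chi2, p_value, dof, expected = chi2_contingency([baseline_2021, bundle_group])
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```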
Procedia PDF Downloads 83
1869 Impact of Pandemics on Cities and Societies
Authors: Deepak Jugran
Abstract:
Purpose: The purpose of this study is to identify how past pandemics shaped social evolution and cities. Methodology: A historical and comparative analysis of major pandemics in human history, covering their origin, transmission routes, biological responses and after-effects. The analysis considers the comprehensive pre- and post-pandemic scenarios and focuses selectively on the major issues and pandemics that have had the deepest and most lasting impact on society, using available secondary data. Results: Past pandemics shaped the behavior of human societies and their cities and made them more resilient biologically, intellectually and socially, endorsing Charles Darwin's theory of 'survival of the fittest'. Pandemics and infectious diseases are here to stay, and as a human society we need to strengthen our collective response and preparedness, besides evolving mechanisms for strict controls on inter-continental movements of people and especially of animals, which become carriers for these viruses. Conclusion: Pandemics have always resulted in great mortality, but they also improved overall individual human immunology and the collective social response; at the same time, they improved the public health systems of cities, health delivery systems, water and sewage distribution systems, institutionalized various welfare reforms, and strengthened the overall collective social response of societies. They made human beings more resilient biologically, intellectually, and socially, hence endorsing the 'AGIL' framework of Prof. Talcott Parsons. As humans, we need to strengthen our city-level response and preparedness, besides evolving mechanisms for strict controls on inter-continental movements of people, and especially of animals, which have always acted as carriers for these novel viruses. Pandemics over the years have acted like natural storms, mitigating prevailing social imbalances and laying the foundation for scientific discoveries. We understand that post-Covid-19, institutionalized city, state and national mechanisms will be strengthened, and the recommendations issued by various expert groups, which were ignored earlier, will now be implemented for reliable anticipation, better preparedness and minimization of the impact of pandemics. Our analysis does not intend to present chronological findings of pandemics, but rather focuses selectively on major pandemics in history, their causes, how they wiped out entire city populations, and how they influenced societies, their behavior, and social evolution. Keywords: pandemics, Covid-19, social evolution, cities
Procedia PDF Downloads 112
1868 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) can be a powerful teaching tool offering many advantages in the field of training and education, as it allows learners to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is involving and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a center specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed: an innovative training tool designed with the aim of educating the class, in total safety, on the techniques for using the machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering courses of the Polytechnic of Turin, has been designed to simulate in an absolutely realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. The process includes the creation of a stand-alone application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers, and the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, producing an overall evaluation report. The results have shown that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments. Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
1867 Factors Influencing Capital Structure: Evidence from the Oil and Gas Industry of Pakistan
Authors: Muhammad Tahir, Mushtaq Muhammad
Abstract:
Capital structure is one of the key decisions taken by financial managers. This study aims to investigate the factors influencing the capital structure decision in the oil and gas industry of Pakistan, using secondary data from the published annual reports of listed oil and gas companies of Pakistan. The study covers the period from 2008 to 2014. Capital structure can be affected by profitability, firm size, growth opportunities, dividend payout, liquidity, business risk, and ownership structure. A panel data technique with an ordinary least squares (OLS) regression model was used in Stata to estimate the impact of the set of explanatory variables on capital structure. The OLS regression results suggest that dividend payout, firm size and government ownership have the most significant impact on financial leverage. Dividend payout and government ownership are found to have a significant negative association with financial leverage, whereas firm size shows a positive relationship with financial leverage. Other variables having a significant link with financial leverage include growth opportunities, liquidity and business risk. The results reveal a significant positive association between growth opportunities and financial leverage, whereas liquidity and business risk are negatively correlated with financial leverage. Profitability and managerial ownership exhibit an insignificant relationship with financial leverage. This study contributes to the existing managerial finance literature with certain managerial implications. Academically, this research describes the factors affecting the capital structure decision of oil and gas companies in Pakistan and adds the latest empirical evidence to the existing financial literature in Pakistan. Researchers have studied capital structure in Pakistan in general and at the industry level in particular; nevertheless, the literature on this issue is still limited, and this study attempts to fill that gap in the academic literature. The study also has practical implications at both the firm level and the individual investor/lender level. The results can be useful for investors and lenders in making investment and lending decisions. Further, the results can help financial managers frame an optimal capital structure, keeping in consideration the factors that can affect the capital structure decision as revealed by this study. These results will help financial managers decide whether to issue stock or issue debt for future investment projects. Keywords: capital structure, multicollinearity, ordinary least square (OLS), panel data
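As a rough illustration of this kind of leverage regression (the study itself was run in Stata), a Python sketch using statsmodels; the CSV file and column names are hypothetical, and the variable set simply mirrors the determinants listed above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch (not the authors' Stata code): a pooled OLS regression of
# financial leverage on the determinants listed in the abstract.
data = pd.read_csv("oil_gas_panel_2008_2014.csv")  # hypothetical panel data set

model = smf.ols(
    "leverage ~ profitability + firm_size + growth + dividend_payout"
    " + liquidity + business_risk + govt_ownership + managerial_ownership",
    data=data,
)
results = model.fit()
print(results.summary())  # coefficients, t-statistics and p-values
```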
Procedia PDF Downloads 293
1866 Examining Terrorism through a Constructivist Framework: Case Study of the Islamic State
Authors: Shivani Yadav
Abstract:
The study of terrorism lends itself to the constructivist framework, as constructivism focuses on the importance of ideas and norms in shaping interests and identities. Constructivism is pertinent to understanding the phenomenon of a terrorist organization like the Islamic State (IS), which opportunistically utilizes radical ideas and norms to shape its 'politics of identity'. This 'identity', which is at the helm of the preferences and interests of actors, in turn shapes actions. The paper argues that an effective counter-terrorism policy must recognize the importance of ideas in order to counter the threat arising from acts of radicalism and terrorism. Traditional theories of international relations, with their emphasis on the state-centric security problematic, exhibit several limitations and problems in interpreting the phenomenon of terrorism. With the changing global order, these theories have failed to adapt to the changing dimensions of terrorism, especially 'newer' actors like the Islamic State (IS). The paper observes that IS distinguishes itself from other terrorist organizations in the way that it recruits and spreads its propaganda. Not only are its methods different, but its tools (like social media) are also new. Traditionally, too, force alone has rarely been sufficient to counter terrorism, and it seems especially unlikely to completely root out an organization like IS. The time is ripe to change the discourse around terrorism and counter-terrorism strategies. The counter-terrorism measures adopted by states, which primarily focus on mitigating threats to the national security of the state, are preoccupied with statist objectives of the continuance of state institutions and the maintenance of order. This limitation prevents these theories from addressing questions of justice and the 'human' aspects of ideas and identity. Such counter-terrorism strategies adopt a problem-solving approach that attempts to treat the symptoms without diagnosing the disease. Hence, these restrictive strategies fail to look beyond calculated retaliation against violent actions and do not address the underlying causes of discontent pertaining to why actors turn violent in the first place. What traditional theories also overlook is that overt acts of violence may have several causal factors behind them, some of which are rooted in the structural state system. Exploring these root causes through the constructivist framework helps to decipher the process of the 'construction of terror' and to move beyond the 'what' in theorization in order to describe 'why', 'how' and 'when' terrorism occurs. The study of terrorism would benefit greatly from a constructivist analysis in order to explore non-military options for countering the ideology propagated by the IS. Keywords: constructivism, counter terrorism, Islamic State, politics of identity
Procedia PDF Downloads 189
1865 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls, and this research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, with the aim of providing an effective insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection. Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina
Procedia PDF Downloads 92
1864 Transition from Linear to Circular Business Models with Service Design Methodology
Authors: Minna-Maari Harmaala, Hanna Harilainen
Abstract:
Estimates of the economic value of transitioning to circular economy models vary, but the transition has been estimated to represent $1 trillion worth of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and thus even abundant circular opportunities remain uncaptured although they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or a new service or service concept with emphasis and focus on the customer experience from the outset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services with better resonance to customer needs. A business model is a depiction of how the company creates, delivers, and captures value, i.e. how it organizes its business. The process of business model development and adjustment or modification is also called business model innovation, and innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers will be to adopt the new circular business models. In our research, we use robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio, but also to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferents, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated. Keywords: business model innovation, circular economy, circular economy business models, service design
Procedia PDF Downloads 135
1863 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection testing. The slug calorimeter is selected as the main heat flux sensor in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and the shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which helped to greatly decrease lateral heat transfer, and the slug with its hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. A simulation model of the slug calorimeter was also built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature-rise-rate data as early as possible leads to a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed using the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on the heat flux calibration together with the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been solved for. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability error of the improved slug calorimeter is less than 3%, the deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field. Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
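For reference, a minimal sketch of the classical slug-calorimeter relation q = (m·c/A)·dT/dt, fitting the early portion of the temperature rise as the abstract recommends; the slug properties and temperature readings are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch of the classical slug-calorimeter relation q = (m*c/A)*dT/dt,
# using a linear fit of the early temperature-rise data. All values below are
# illustrative assumptions (a small copper slug), not measurements from the paper.

def slug_heat_flux(times_s, temps_k, mass_kg, specific_heat, face_area_m2):
    """Estimate the absorbed heat flux (W/m^2) from the slug temperature rise."""
    dT_dt = np.polyfit(times_s, temps_k, 1)[0]  # slope of T(t), K/s
    return mass_kg * specific_heat * dT_dt / face_area_m2

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])            # s
T = np.array([300.0, 312.0, 324.5, 336.8, 349.0])  # K
q = slug_heat_flux(t, T, mass_kg=0.005, specific_heat=385.0, face_area_m2=7.9e-5)
print(f"estimated heat flux: {q / 1e6:.2f} MW/m^2")
```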
Procedia PDF Downloads 118
1862 A Comparative Study to Evaluate Changes in Intraocular Pressure with Thiopentone Sodium and Etomidate in Patients Undergoing Surgery for Traumatic Brain Injury
Authors: Vasudha Govil, Prashant Kumar, Ishwar Singh, Kiranpreet Kaur
Abstract:
Traumatic brain injury leads to elevated intracranial pressure. Intraocular pressure (IOP) may also be affected by intracranial pressure: increased venous pressure in the cavernous sinus is transmitted to the episcleral veins, resulting in an increase in IOP. All drugs used in anaesthesia induction can change IOP, and stimulation of the gag reflex by the endotracheal tube can also increase IOP; therefore, administering anaesthetic drugs that produce the smallest change in IOP is important, while cardiovascular depression must also be avoided. Thiopentone decreases IOP by 40%, whereas etomidate decreases IOP by 30-60% for up to 5 minutes. One hundred patients (aged 18-55 years) who underwent emergency craniotomy for TBI were selected for the study. Patients were randomly assigned to two groups of 50 patients each according to the drug used for induction: group T was given thiopentone sodium (5 mg kg-1) and group E was given etomidate (0.3 mg kg-1). Pre-anaesthesia intraocular pressure (IOP) was measured using a Schiotz tonometer. Induction of anaesthesia was achieved with etomidate (0.3 mg kg-1) or thiopentone (5 mg kg-1) along with fentanyl (2 mcg kg-1). Intravenous rocuronium (0.9 mg kg-1) was given to facilitate intubation. Intraocular pressure was measured 1 minute after administration of the induction agent and 5 minutes after intubation. Maintenance of anaesthesia was achieved with isoflurane in 50% nitrous oxide at a fresh gas flow of 5 litres. At the end of the surgery, the residual neuromuscular block was reversed and the patient was shifted to the ward/ICU. Patients in both groups were comparable in terms of demographic profile. There was no significant difference between the groups in the haemodynamic and respiratory variables prior to thiopentone or etomidate administration. Intraocular pressure in the thiopentone group in the left eye and right eye before induction was 14.97±3.94 mmHg and 14.72±3.75 mmHg, respectively, and in the etomidate group it was 15.28±3.69 mmHg and 15.54±4.46 mmHg, respectively. After induction, IOP decreased significantly in both eyes (p<0.001) in both groups. Five minutes after intubation, IOP was significantly lower than baseline in both eyes, but it was higher than the IOP recorded after induction with the drug. There was no statistically significant difference in IOP between the two groups at any point in time. Both drugs caused a significant decrease in IOP after induction and 5 minutes after endotracheal intubation. The mechanism by which intravenous induction agents decrease IOP is debatable: systemic hypotension after the induction of anaesthesia has been shown to cause a decrease in intra-ocular pressure, and a decrease in the tone of the extra-ocular muscles can also result in a decrease in intra-ocular pressure. We observed that it is appropriate to use etomidate as an induction agent when elevation of intra-ocular pressure is undesirable, owing to the cardiovascular stability it confers on patients. Keywords: etomidate, intraocular pressure, thiopentone, traumatic
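The abstract does not name the statistical test used for the between-group comparison. Purely as an illustration, an unpaired t-test computed from the reported baseline left-eye summary statistics (mean ± SD, n = 50 per group) reproduces the "no significant difference" finding; the choice of test is an assumption.

```python
from scipy.stats import ttest_ind_from_stats

# Hedged sketch: unpaired t-test from the reported baseline left-eye IOP
# summary statistics (mean +/- SD, n = 50 per group). The test choice is an
# assumption; only the summary values come from the abstract.
t_stat, p_value = ttest_ind_from_stats(
    mean1=14.97, std1=3.94, nobs1=50,  # thiopentone group, left eye
    mean2=15.28, std2=3.69, nobs2=50,  # etomidate group, left eye
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >> 0.05, i.e. not significant
```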
Procedia PDF Downloads 126
1861 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study
Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski
Abstract:
Transient ischaemic attacks (TIAs) are warning signs of future strokes: TIA patients are at increased risk of stroke and cardiovascular events after a first episode. The majority of studies on TIA have focused on the occurrence of these ancillary events after a TIA, while long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of TIA using anonymised electronic health records (EHRs). We conducted a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls each on age, sex and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model which included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI 2.91 - 3.18). The HRs for cases aged 61-70 years, 71-76 years and 77+ years were 1.98 (1.55 - 2.30), 1.79 (1.20 - 2.07) and 1.52 (1.15 - 1.97) compared to matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84 - 1.00), 0.90 (0.82 - 0.98) and 0.88 (0.80 - 0.96) at 5 years, 10 years and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets. Similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls. Our study highlights the excess long-term risk of death of TIA patients and cautions that TIA should not be treated as a benign condition. The study further recommends aspirin as the better option for secondary prevention in TIA patients, compared with the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease. Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model
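As a simplified illustration of this kind of survival analysis, a sketch using the lifelines library; the authors' time-varying Weibull-Cox model with scale and shape effects and a GP-practice frailty term is more elaborate than the standard Cox fit shown here, and the file and column names are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Simplified illustration only: a standard Cox proportional hazards fit of
# all-cause mortality on case/control status, age band and aspirin use.
# The file and column names are assumptions, and covariates are assumed to
# be numerically coded; this is not the authors' Weibull-Cox frailty model.
df = pd.read_csv("thin_tia_cohort.csv")  # hypothetical extract of THIN records

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "died", "is_case", "age_band", "on_aspirin"]],
    duration_col="follow_up_years",
    event_col="died",
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```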
Procedia PDF Downloads 149