Search results for: Fully Homomorphic Encryption Scheme
379 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education. Social distancing enforced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning, alongside learners' diversity, raises the question: does online learning, as it is currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely, personalization. Personality influences learners' satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process that meets students' satisfaction. The satisfaction level and personality of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits. Correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotic' cluster.
Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extrovert' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable; alternatively, they may be offered the TPLS they dislike least. The study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether or not they like online learning.
Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
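The clustering step the abstract describes can be illustrated with a minimal k-means sketch. The Big Five score vectors, the two-cluster setup, and all numbers below are invented for illustration; they are not the study's data, which used k = 4 on 108 students.

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means: returns centroids and the cluster index of each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (Euclidean distance)
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to its cluster mean (keep it if empty)
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return centroids, labels

# Hypothetical Big Five profiles (O, C, E, A, N), each trait scored 1-5
students = [
    (3.1, 2.8, 3.0, 3.2, 4.6),  # high neuroticism
    (3.0, 2.9, 3.1, 3.0, 4.4),  # high neuroticism
    (4.5, 3.0, 4.4, 3.1, 1.8),  # open-extrovert profile
    (4.4, 3.2, 4.6, 3.0, 1.9),  # open-extrovert profile
]
centroids, labels = kmeans(students, k=2)
```

With clearly separated profiles like these, the two high-neuroticism vectors end up in one cluster and the two open-extrovert vectors in the other, mirroring the kind of partition the study then correlates with TPLS satisfaction.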
Procedia PDF Downloads 105
378 Bilingual Books in British Sign Language and English: The Development of E-Book
Authors: Katherine O'Grady-Bray
Abstract:
For some deaf children, reading books can be a challenge. Frank Barnes School (FBS) provides guided reading time with Teachers of the Deaf, in which they read books with deaf children using a bilingual approach. The vocabulary and context of the story are explained to deaf children in BSL so that they develop skills bridging the English and BSL languages. However, the success of this practice is only achieved if the person is fluent in both languages. FBS piloted a scheme to convert an Oxford Reading Tree (ORT) book into an e-book that can be read using tablets. Deaf readers at FBS have access to both languages (BSL and English) during lessons and outside the classroom. The pupils receive guided reading sessions with a Teacher of the Deaf every morning; these one-to-one sessions give pupils the opportunity to learn how to bridge both languages, e.g., how to translate English to BSL and vice versa. Generally, due to our pupils' lack of access to incidental learning, gaining new information about the world around them is limited. This highlights the importance of quality time to scaffold their language development. In some cases, there is a shortfall of parental support at home due to poor communication skills or an unawareness of how to interact with deaf children. Some families have a limited knowledge of sign language or simply do not have the required learning environment and strategies needed for language development with deaf children. As the majority of our pupils' preferred language is BSL, we use it to teach reading and writing English. If this is not mirrored at home, there is limited opportunity for joint reading sessions. Development of the e-book required planning and technical development. The overall production took time, as video footage needed to be shot and then edited individually for each page. There were various technical considerations, such as choosing an appropriate background colour so as not to draw attention away from the signer.
Appointing a signer with the required high level of BSL was essential. The language and pace of the sign language were important considerations, as they had to match the age and reading level of the book. When translating English text to BSL, careful consideration was given to the nonlinear nature of BSL and the differences in language structure and syntax. The e-book was produced using Apple's 'iBooks Author' software, which allowed video footage of the signer to be embedded on pages opposite the text and illustration. This enabled BSL translation of the content of the text and the inferences of the story. An interpreter was used to directly 'voice over' the signer rather than the actual text. The aim behind the structure and layout of the e-book is to allow parents to 'read' with their deaf child, which helps to develop both languages. From observations, the use of e-books has given pupils confidence and motivation with their reading, developing skills bridging both the BSL and English languages and enabling more effective reading time with parents.
Keywords: bilingual book, e-book, BSL and English, bilingual e-book
Procedia PDF Downloads 170
377 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near these fault lines can be categorized as near-field earthquakes. In contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to the public's safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect of near-field earthquakes on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, which distinguishes it from other earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
Keywords: near-field, pulse, pushover, time-history
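The linear time history analysis mentioned above can be illustrated for a single-degree-of-freedom system using the classical Newmark average-acceleration method. The mass, stiffness, damping ratio, and ground motion below are illustrative assumptions, not the study's SAP2000 prototype model.

```python
import math

def newmark_sdof(m, c, k, ground_acc, dt, beta=0.25, gamma=0.5):
    """Newmark average-acceleration integration of m*u'' + c*u' + k*u = -m*ag(t).
    Returns displacement, velocity, and acceleration histories."""
    n = len(ground_acc)
    u, v, a = [0.0] * n, [0.0] * n, [0.0] * n
    p = [-m * ag for ag in ground_acc]          # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m     # initial equilibrium
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    for i in range(n - 1):
        # Incremental effective load (standard Newmark formulation)
        dp = (p[i + 1] - p[i]
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma / beta * v[i] + dt * (gamma / (2 * beta) - 1) * a[i]))
        du = dp / k_eff
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1 - gamma / (2 * beta)) * a[i])
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + dv
        a[i + 1] = (p[i + 1] - c * v[i + 1] - k * u[i + 1]) / m  # re-equilibrate
    return u, v, a

# Illustrative SDOF: m = 1000 kg, k = 4e5 N/m (T ≈ 0.31 s), 5% damping
m, k = 1000.0, 4.0e5
c = 2 * 0.05 * math.sqrt(k * m)
dt = 0.01
ag = [-1.0] * 2000  # constant 1 m/s^2 base acceleration held for 20 s
u, v, a = newmark_sdof(m, c, k, ag, dt)
# The damped response should settle at the static value -m*ag/k = 0.0025 m
```

With a pulse-like record in place of the constant base acceleration, the same routine would show the sharp early displacement peak that distinguishes near-field from far-field response.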
Procedia PDF Downloads 147
376 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with a positive benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal stability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element model of the lumbar spine (L1–S1), containing the vertebral bodies, discs, and ligaments, was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained, and a 10 Nm pure moment was applied on the top surface of the L1 vertebral body to drive the lumbar spine through flexion and extension. The experimental results showed that, in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomies, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. In L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings: no significant differences were found at L4–L5 in either flexion or extension in each group, and only 0.02 and 0.04 degrees of variation were observed during flexion and extension, respectively, between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit stability similar to that of unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
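The group comparisons above rest on one-way ANOVA of ROM values. A minimal sketch of the F-statistic computation follows; the ROM samples are made up to loosely echo the reported flexion means and are not the actual porcine data.

```python
def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA: between-group vs within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (mg - grand_mean) ** 2 for g, mg in zip(groups, means))
    ss_within = sum((x - mg) ** 2 for g, mg in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical L3-L4 flexion ROM samples (degrees): intact / unilateral / bilateral
intact = [1.35, 1.12, 1.58]
unilateral = [1.34, 0.67, 2.01]
bilateral = [1.66, 1.59, 1.73]
f_flexion = one_way_anova_f(intact, unilateral, bilateral)
```

An F value well below the critical value (here it even falls below 1) is consistent with the reported lack of statistical significance among the three flexion groups.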
Procedia PDF Downloads 403
375 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern for the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 site, harbor, etc.) or global (Particularly Sensitive Sea Area) scales; seasonal (regulation over a period of time) or permanent; and partial (focused on some maritime activities) or complete (all maritime activities).
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean-Planner helps decision-makers choose the most effective measure to apply to maintain or restore the biodiversity and the functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
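To give a rough sense of why ship noise matters over such distances, the received level under simple spherical spreading plus absorption can be sketched as follows. This is a crude textbook approximation, not the Quonops propagation model, and the source level and absorption coefficient are hypothetical illustrative values.

```python
import math

def received_level_db(source_level_db, distance_m, absorption_db_per_km=0.04):
    """Received sound level from a point source: spherical spreading (20*log10 r)
    plus a linear absorption term. A textbook approximation only."""
    spreading_loss = 20 * math.log10(distance_m)
    absorption_loss = absorption_db_per_km * distance_m / 1000.0
    return source_level_db - spreading_loss - absorption_loss

# Hypothetical cargo ship: broadband source level 180 dB re 1 uPa at 1 m
rl_1km = received_level_db(180.0, 1_000.0)    # about 120 dB at 1 km
rl_10km = received_level_db(180.0, 10_000.0)  # about 100 dB at 10 km
```

Even tens of kilometers away, the received level remains far above typical ambient levels at low frequencies, which is why measures such as speed limits and exclusion areas can meaningfully reshape the noise footprint.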
Procedia PDF Downloads 123
374 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics
Authors: Restituto I. Rodelas
Abstract:
Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pretest will be administered. The same test will be given as a posttest after the last topic is covered. In order to establish the equality of the groups' initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in any of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test.
The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the result of the pretest scores ANOVA is not significant. If the result of the score differences ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education be the mediating variable (MV), and score difference be the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: significant correlation of IV to DV, significant correlation of IV to MV, significant relationship of MV to DV when both IV and MV are predictors in a regression model, and the absolute value of the coefficient of IV as sole predictor is larger than that when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.
Keywords: outcome-based teaching, blended learning, face-to-face, student-centered
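The mediation requirements listed above can be checked with ordinary least squares: the IV coefficient should shrink when the MV is added as a second predictor. The sketch below uses synthetic data constructed so that the mediator carries the whole effect; none of the numbers come from the study, and variable names are illustrative only.

```python
import random

def ols(y, *xs):
    """Least-squares coefficients [b0, b1, ...] for y = b0 + b1*x1 + ...,
    solved via the normal equations with Gaussian elimination."""
    n = len(y)
    cols = [[1.0] * n] + [list(x) for x in xs]   # design matrix columns (intercept first)
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for col in range(k):                          # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][c] - f * A[col][c] for c in range(k)]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):                # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Synthetic data: IV -> MV -> DV (full mediation by construction)
rng = random.Random(1)
iv = [rng.choice([0.0, 1.0]) for _ in range(200)]   # blended learning: no/yes
mv = [x + rng.gauss(0, 0.3) for x in iv]            # outcomes-based engagement
dv = [m + rng.gauss(0, 0.3) for m in mv]            # posttest-pretest difference
c_total = ols(dv, iv)[1]        # IV coefficient alone (total effect)
c_direct = ols(dv, iv, mv)[1]   # IV coefficient controlling for MV (direct effect)
```

Because the effect flows entirely through the mediator here, the direct coefficient collapses toward zero while the total coefficient stays near one, which is exactly the shrinkage pattern the fourth requirement looks for.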
Procedia PDF Downloads 291
373 A Text in Movement in the Totonac Flyers’ Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal aims to express concerns about the connection between mind, body, society, and environment in the Flyers' dance, a well-known rotatory dance in Mexico, in creating meanings and making the apprehension of the world possible. The interaction among brain, mind, body, and environment, and the intersubjective relations among them, create and recreate the world through social interaction. The purpose of this methodology, based on embodied cognition theory and named 'A Performance-Embodied Theory', is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment by Totonac people while the dance is being performed. The analysis started by questioning how anthropologists can interpret the way Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of their imagination and collective imagery is understood in the context of public-facing rituals such as the Flyers' dance. The problem is that, most of the time, researchers interpret elements separately and not as a complex ritual dancing whole, which is the original contribution of this study. This theory, which accepts that people are body-mind agents, aims to interpret the dance as a whole, where the different elements are joined into an integral interpretation. To understand incorporation, data were collected during prolonged periods of fieldwork, with participant observation and linguistic and extralinguistic data analysis. Laban's notation for the description and analysis of gestures and movements in space was used first, but the study later went beyond this method, which remains linear and compositional. Performance in a ritual is the actualization of one potential complex of meanings or cognitive domains among many others in a culture: one potential dimension becomes probable and then real because of the activation of specific meanings in a context.
One can only think what language permits thinking, and the lexicon that is used depends on the individual culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected. Only in this way can the world be understood. It can be recognized that just as languages geometrize the physical world through the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar or a text in movement which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of the encyclopedic knowledge that people have about the world. Gestures are not given by the performer but emerge from intentional perception, in which gestures are 'understood' by the audio-spectator in an inter-corporeal way. The impact of this study concerns the possibility not only of disseminating knowledge effectively but also of generating a balance between different parts of the world where knowledge is shared, rather than being received by academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academies can together take part in activating and sharing this knowledge with the world.
Keywords: dance, flyers, performance, embodied, cognition
Procedia PDF Downloads 59
372 Dengue Prevention and Control in Kaohsiung City
Authors: Chiu-Wen Chang, I-Yun Chang, Wei-Ting Chen, Hui-Ping Ho, Ruei-Hun Chang, Joh-Jong Huang
Abstract:
Kaohsiung City is located in the tropical region where Aedes aegypti and Aedes albopictus are distributed; once the virus invades, it can easily trigger a local epidemic. Besides, Kaohsiung City has a world-class airport and harbor, with close and frequent trade and tourism links to many countries, especially the Southeast Asian countries that also suffer from dengue. Therefore, Kaohsiung City faces the difficult challenge of dengue every year. The objective of this study was to enhance dengue clinical care, border management, and vector surveillance in Kaohsiung City by establishing larger-scale, innovative, and more coordinated dengue prevention and control strategies in 2016, including: (1) Integrated medical programs: facilitated 657 contract medical institutions, widely set up NS1 rapid tests in clinics, enhanced the triage and referral system, and implemented daily monitoring of dengue cases. (2) Border quarantine: comprehensive NS1 screening for foreign workers and fishermen upon immigration, hospitalization and isolation for suspected cases, and health education for high-risk groups (foreign students and other tourists). (3) Mosquito control: widespread use of Gravitraps to monitor mosquito density in the environment, and use of NS1 rapid screening tests to detect dengue virus in the community. (4) Health education: creation of a dengue app for people to immediately check the risk map and nearby medical resources, routine health education in all districts to strengthen the public's dengue knowledge, and a neighborhood cleaning awards program. The results showed that after the new integrated dengue prevention and control strategies were fully implemented in Kaohsiung City, the number of confirmed cases in 2016 declined to 342, the majority being a continuation of the 2015 epidemic; in fact, only two cases were confirmed after the summer of 2016. Moreover, the dengue mortality rate successfully decreased to 0% in 2016.
In addition, according to the reporting rates from medical institutions in 2014 and 2016, the rate from medical centers dropped from 27.07% to 19.45%, and the rate from regional hospitals decreased from 36.55% to 29.79%; meanwhile, the reporting rate of district hospitals increased from 11.88% to 15.87%, and that of general practice clinics increased from 24.51% to 34.89%. This shows that strengthened medical management reduced the medical centers' notification ratio and improved the notification ratio of general clinics, achieving a great effect in dengue clinical management and dengue control.
Keywords: dengue control, integrated control strategies, clinical management, NS1
Procedia PDF Downloads 271
371 A Doctrinal Research and Review of Hashtag Trademarks
Authors: Hetvi Trivedi
Abstract:
Technological escalation cannot be negated. The same is true for the benefits of technology. However, such escalation has interfered with the traditional theories of protection under Intellectual Property Rights. Of the many trends that have disrupted the old-school understanding of Intellectual Property Rights, one is hashtags. What began modestly in the year 2007 has now earned a remarkable status, and coupled with the unprecedented rise of social media, the hashtag culture has witnessed tremendous growth. A tiny symbol on the keypad of phones or computers is now a major trend, which also serves companies as a critical investment measure in establishing their brand in the market. Due to this, one section of Intellectual Property Rights, trademarks, is undergoing an enormous transformation, with hashtags like #icebucket, #tbt, or #smilewithacoke getting trademark protection. So, as the traditional theories of IP take on the modern trends, it is necessary to understand the change and challenge at a theoretical and proportional level and, where need be, question the change. Traditionally, Intellectual Property Rights serve the societal need for intellectual productions that ensure holistic development as well as cultural, economic, social, and technological progress. In a two-pronged effort at ensuring continuity of creativity, IPRs recognize the investment of the individual efforts that go into creation by offering protection. Commonly placed under two major theories, Utilitarian and Natural, IPRs aim to accord protection and recognition to an individual's creation or invention, which serves as an incentive for further creations or inventions, thus fully protecting the creative, inventive, or commercial labour invested in them. In return, the creator, by lending the public access to the creation, reaps various benefits. In this way, Intellectual Property Rights form a 'social contract' between the author and society.
IPRs are similarly attached to a social function, whereby individual rights must be weighed against competing rights and, to the farthest limit possible, both sets of rights must be treated in a balanced manner. To put it differently, both the society and the creator must be put on an equal footing, with neither party's rights subservient to the other's. A close look, through doctrinal research, at the recent trend of trademark protection makes the social function of IPRs seem to be moving far from this basic philosophy. Thus, where technology interferes with the philosophies of law, it is important to check and allow such growth only in moderation, for neither is superior to the other. Human expansionist nature may seek to count and protect as Intellectual Property everything under the sky that can be tweaked slightly, like a common parlance word transformed into a hashtag; however, IP, in order to survive on its philosophies, needs to strike a balance. A unanimous global decision on the judicious use of IPR recognition and protection is the need of the hour.
Keywords: hashtag trademarks, intellectual property, social function, technology
Procedia PDF Downloads 132
370 Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular, their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulatory Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that will support the efficient operation of the branch. Methodology: In order to highlight natural monopoly market tendencies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase their market share. According to a productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are quite a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies. The rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significance of the role of the Antimonopoly Service in the efficient management of natural monopolies.
The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the consumed electrical power, need revision. The effects of electricity price discrimination are important, segmentation across seasons in particular. Consumers use more electricity in winter than in summer, which is associated with extra capacities and maintenance costs. If the price of electricity in winter were higher than in summer, electricity consumption in winter would decrease. Consumers would start to consume electricity more economically, which would allow extra capacities to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not toward reducing but toward increasing the gains of consumers and producers. Overall, the optimal management of the given fields will allow the well-being throughout the country to be improved. In the article, conclusions are made, and recommendations are developed to deliver effective policies and regulations for natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
Procedia PDF Downloads 87
369 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this type of building were reported in the more recent off-shore-type Tohoku Earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not been done quickly, for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents could feel a physical change through the reinforcement, which covers the existing exterior walls with steel plates. Also, this exterior reinforcement can be installed with only outdoor work, thereby reducing inconvenience for residents, since they would not be required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish. 
In this method, the entire exterior surface would function as shear walls, and thus the pull-out force induced by seismic lateral load would be significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, reinforcing details of anchors to the foundations would be less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified both for the frames and the connections. It is confirmed that the performance is high enough for bracing typical wooden houses. Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
Procedia PDF Downloads 226
368 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle
Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha
Abstract:
An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded hull boat is deployed, which is rugged, tough, quick to deploy and fast-moving. It is suitable for environmental monitoring, and it is designed for easy maintenance. A 2HP electric outboard marine motor is used, which is powered by a lithium-ion battery and can also be charged from a solar charger. All connections are completely waterproof to IP67 ratings. At full throttle, the marine motor is capable of up to 7 kmph. The motor is integrated with an open source controller based on a Cortex M4F for adjusting the direction of the motor. This UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One of the channels of a 2.4GHz radio link 8-channel transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system has been fitted to the electric outboard marine motor to determine the range and GPS position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open source processor, facilitating real-time capture of thermal images, and the results are stored on a micro SD card, the data storage device for the system. The thermal camera is interfaced to the open source processor through the SPI protocol. These thermal images are used for finding oil spills and for locating people who are drowning in low visibility during the night. A real time clock (RTC) module is attached to the battery to provide the date and time of the thermal images captured. For the live video feed, a 900MHz long range video transmitter and receiver is set up, by which, with a higher power output, a range of 40 miles has been achieved. 
A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH level, temperature, water level and pressure (absolute). The maximum pressure it can withstand is 160 psi, at depths of up to 100m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle. Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe
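The abstract does not specify the data format of the onboard GPS; most hobby-grade marine GPS modules stream NMEA 0183 sentences over a serial link. As an illustrative sketch, not the authors' implementation, the following parses the position fields of a $GPGGA sentence; the example sentence and its coordinates are hypothetical.

```python
def parse_gpgga(sentence):
    """Parse latitude/longitude from an NMEA 0183 $GPGGA sentence.

    Returns (lat, lon) in signed decimal degrees, or None if the
    sentence is not a GGA fix. Checksum validation is omitted for brevity.
    """
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GGA") or fields[2] == "":
        return None

    def dm_to_deg(value, hemi):
        # NMEA encodes angles as ddmm.mmmm (lat) / dddmm.mmmm (lon):
        # whole degrees concatenated with decimal minutes.
        head, minutes = value.split(".")
        deg = int(head[:-2])
        mins = float(head[-2:] + "." + minutes)
        sign = -1.0 if hemi in ("S", "W") else 1.0
        return sign * (deg + mins / 60.0)

    lat = dm_to_deg(fields[2], fields[3])
    lon = dm_to_deg(fields[4], fields[5])
    return lat, lon

# Hypothetical fix (coordinates chosen only for illustration).
lat, lon = parse_gpgga("$GPGGA,123519,1252.100,N,07449.800,E,1,08,0.9,12.4,M,,M,,*47")
print(round(lat, 4), round(lon, 4))
```

In a deployed system, a loop of this kind would read sentences from the serial port and log each thermal frame against the parsed position and the RTC timestamp.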
Procedia PDF Downloads 242
367 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatches between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the necessary skills for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills amongst learners, teachers, who often lack familiarity and thorough insight into the emerging 21st-century skills, are constrained by the need to comprehend the characteristics of 21st-century skills learning and the requisite to implement the tenets of 21st-century skills teaching. With the endeavor to espouse 21st-century skills learning and teaching, a United States-based national coalition called the Partnership for 21st Century Skills (P21) has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and the assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-method study sought to discover and describe what classroom assessments were used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these assessment tools were congruent with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases. 
In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously and to establish how assessment tools and practices were consistent with the P21 framework with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results revealed assessment tools and practices that were inconsistently used by teachers, if used at all. Findings showed that there were inconsistencies in implementing authentic assessments, a scarcity of rubric use to critically assess 21st-century skills in both language and literature subjects, incongruencies in using portfolio and self-reflective assessments, an exclusion of intercultural aspects in assessing the four Cs, and a lack of integration of collaboration in formative and summative assessments. As a recommendation, a harmonized assessment scheme for P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments to P21 standards for the furtherance of the institution’s thrust to effectively integrate 21st-century skills assessment standards into its curricula. Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 194
366 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what is known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. 
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. A petaFLOPS is the ability to perform one quadrillion (10^15) floating-point operations per second, a number a human cannot even fathom. To put it in more perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the entire actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed, it can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we would be getting and reading our book. Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107
365 Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils
Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso
Abstract:
Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. That is why several projects have been emerging for the remediation of such polluted places. Phytomanagement strategies deliver good performance alongside substantial side benefits. In this work, a glasshouse assay was set up with trace-element-polluted soils from an old Cu mine (NW Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project (SOE1/P5/E0189)). The objective was to evaluate improvements induced by the following phytoremediation-related treatments: three increasingly complex amendments, alone or together with plant growth (Populus nigra L. alone and together with Trifolium repens L.), and three different rhizosphere bioinocula (plant growth promoting bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree height was measured. Physical-chemical analyses were carried out to determine pH, effective cation exchange capacity, carbon and nitrogen contents, bioavailable phosphorus (Olsen bicarbonate method), pseudo-total element content (microwave acid digested fraction), EDTA-extractable metals (complexed fraction), and NH4NO3-extractable metals (easily bioavailable fraction). On plant material, nitrogen content and acid-digestion elements were determined. Amendment use, plant growth, and bioinoculation were demonstrated to improve soil fertility and/or plant health within the time span of this study. In particular, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Increases in organic matter and pH were related to decreases in the bioavailability of the polluting metals. Plants grew better both with the most complex amendment and with the middle one, with few differences due to bioinoculation. 
With the least complex amendment (compost alone), the beneficial effects of the bioinoculants were more observable, although the plants did not thrive. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for the phytomanagement of these kinds of soils affected by mining activity. These findings should now be tested on a larger scale. Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements
Procedia PDF Downloads 67
364 A Disappearing Radiolucency of the Mandible Caused by Inadvertent Trauma Following IMF Screw Placement
Authors: Anna Ghosh, Dominic Shields, Ceri McIntosh, Stephen Crank
Abstract:
A 29-year-old male was referred to the maxillofacial unit by his general dental practitioner via a routine pathway regarding a large periapical lesion on the LR4 with root resorption. The patient was asymptomatic, the LR4 vital and unrestored, and this was an incidental finding at a routine check-up. The patient's past medical history was unremarkable. Examination revealed no extra- or intra-oral pathology and non-mobile teeth. No focal neurology was detected. An orthopantogram demonstrated a well-defined unilocular corticated radiolucency associated with the LR4. The root appeared shortened, with the radiolucency between the root and a radio-opacity possibly representing the displaced apical tip of the tooth. It was recommended that the referring general practitioner should proceed with orthograde root canal therapy, after which exploration, enucleation, and retrograde root filling of the LR4 would be carried out by the maxillofacial unit. The patient was reviewed six months later where, due to the COVID-19 pandemic, he had been unable to access general dental services for the root canal treatment. He was still entirely asymptomatic. A one-year review was planned in the hope that this would allow time for the orthograde root canal therapy to be completed. At this review, the orthograde root canal therapy had still not been completed. Interestingly, a repeat orthopantogram revealed good bony infill and a significant reduction in the size of the lesion. Due to the ongoing delays with primary care dental therapy, the patient was subsequently internally referred to the restorative dentistry department for care. The patient was seen again by oral and maxillo-facial surgery in mid-2022, where he still reported this tooth as asymptomatic with no focal neurology. 
The patient's history was fully reviewed, and it was noted that 15 years previously the patient had undergone open reduction and internal fixation of a left angle of mandible fracture. Temporary IMF involving IMF screws and fixation wires was employed to maintain occlusion during plating and was subsequently removed post-operatively. It is proposed that the radiolucency was a result of the IMF screw placement penetrating the LR4 root, resulting in resorption of the tooth root and development of a radiolucency. This case highlights the importance of careful selection of screw size and physical placement site for IMF screws, as there can be permanent damage to a patient’s dentition. Keywords: facial trauma, inter-maxillary fixation, mandibular radiolucency, oral and maxillo-facial surgery
Procedia PDF Downloads 137
363 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater effort in data handling. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work. Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
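After segmentation, each crystal must be delimited and measured. The authors' object delimitation algorithm is not published with the abstract; as a minimal sketch of the underlying idea, the following labels connected components in a binary segmentation mask with a flood fill and reports the pixel area of each. A real pipeline would typically use a library such as scikit-image and convert pixel counts to physical units using the SEM scale bar.

```python
from collections import deque

def label_components(mask):
    """Label 4-connected foreground components in a binary mask.

    mask: list of lists of 0/1. Returns a dict {label: area_in_pixels}.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    areas = {}
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                # Breadth-first flood fill from this seed pixel.
                queue = deque([(r, c)])
                labels[r][c] = next_label
                area = 0
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                areas[next_label] = area
    return areas

# Two separate "crystals" in a toy 4x6 mask.
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1],
]
print(label_components(mask))  # {1: 4, 2: 5}
```

The per-crystal areas collected this way are exactly what is needed to build the frequency distributions by size that the methodology produces.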
Procedia PDF Downloads 160
362 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan
Authors: Narjis Kahtoon
Abstract:
The issue of wage discrimination on the basis of gender, and the social problems that accompany it, has been a significant research problem for several decades. Although many have explored reasons for the persistence of inequality in the wages of men and women, none has successfully explained away the entire differential. Gender wage discrimination and the social problems of working women are global issues: despite differences in the political, economic, and social make-up of countries all over the world, gender wage discrimination and social constraints are present. The aim of the research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the gender remuneration gap in the readymade garment sector of Pakistan. Population growth rate is a significant indicator used to explain the change in population and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium and large sized firms, with an estimated 30 percent of the textile-garment workforce being female. The readymade garment industry is a labor-intensive industry that relies on the skills of individual workers and provides the highest value addition in the textile sector. In the garment sector, female workers are concentrated in poorly paid, labor-intensive down-stream production (readymade garments, linen, towels, etc.), while male workers dominate capital-intensive (ginning, spinning and weaving) processes. Gender wage discrimination and social constraints are a reality in the Pakistani labor market. This research allows us not only to properly detect the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research evaluates this measure for three main clusters: Lahore, Karachi, and Faisalabad. 
These data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan. These sources of information provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earning equation and estimates it separately by gender; the research also applies the cultural dimensions developed by Hofstede (2001) to profile a country’s cultural status and compares those cultural dimensions to the wage inequalities. The readymade garment sector of Pakistan is an important sector, since its products are in huge demand at home and abroad. This research will have a major influence on the measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan. Keywords: gender wage differentials, decomposition, garment, cultural
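The standard Mincerian earning equation referred to above, estimated separately for each gender, takes the following textbook form (the abstract does not spell out the covariates used in this study, so those shown here are the conventional ones), and the keyword "decomposition" suggests the customary Oaxaca-Blinder split of the mean log-wage gap:

```latex
% Mincerian earnings equation, estimated separately for gender g:
%   w = wage, S = years of schooling, X = potential experience
\ln w_{ig} = \beta_{0g} + \beta_{1g} S_{ig} + \beta_{2g} X_{ig}
           + \beta_{3g} X_{ig}^{2} + \varepsilon_{ig}

% Oaxaca-Blinder decomposition of the mean log-wage gap into an
% "explained" endowment component and an "unexplained" coefficient
% component (the latter often attributed to discrimination):
\overline{\ln w}_{m} - \overline{\ln w}_{f}
  = (\bar{Z}_{m} - \bar{Z}_{f})^{\top}\hat{\beta}_{m}
  + \bar{Z}_{f}^{\top}(\hat{\beta}_{m} - \hat{\beta}_{f})
```

Here Z collects the regressors (schooling, experience, and its square); the first term measures the gap due to different average endowments, the second the gap due to different returns to those endowments.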
Procedia PDF Downloads 210
361 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture. 
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast of France) where the fault is assumed to have a low strain rate. Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
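The slip-rate-to-activity-rate conversion that both approaches rely on can be illustrated with a simple moment-balancing sketch (illustrative values, not the paper's case study): the fault accumulates seismic moment at the rate μ·A·ṡ, and dividing by the moment of a characteristic event, via the Hanks-Kanamori relation M₀ = 10^(1.5·Mw + 9.05) N·m, gives that event's annual rate.

```python
def characteristic_rate(mu_pa, area_m2, slip_rate_m_per_yr, mw):
    """Annual rate of a characteristic earthquake that balances the
    fault's moment accumulation (all slip assumed released seismically)."""
    moment_rate = mu_pa * area_m2 * slip_rate_m_per_yr   # N*m per year
    m0 = 10 ** (1.5 * mw + 9.05)                         # Hanks-Kanamori moment, N*m
    return moment_rate / m0

# Hypothetical slow fault: 20 km x 12 km rupture plane, 1 mm/yr slip
# rate, rigidity 30 GPa, characteristic magnitude Mw 6.5.
rate = characteristic_rate(3.0e10, 20e3 * 12e3, 1.0e-3, 6.5)
print(f"annual rate = {rate:.2e}, recurrence ~ {1.0 / rate:.0f} yr")
```

Tools such as SHERIFS generalize this budget idea by spreading the same moment rate over many single-fault and fault-to-fault ruptures instead of one characteristic event.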
Procedia PDF Downloads 68
360 The Report of Co-Construction into a Trans-National Education Teaching Team
Authors: Juliette MacDonald, Jun Li, Wenji Xiang, Mingwei Zhao
Abstract:
Shanghai International College of Fashion and Innovation (SCF) was created as a result of a collaborative partnership agreement between the University of Edinburgh and Donghua University. The College provides two programmes, Fashion Innovation and Fashion Interior Design, and the overarching curriculum is intended to develop innovation and creativity within an international learning, teaching, knowledge exchange and research context. The research problem presented here focuses on the multi-national, multi-cultural faculty in the team, the challenges arising from difficulties in communication, and the associated limitations of management frameworks. The teaching faculty at SCF are drawn from China, Finland, Korea, Singapore and the UK, with input from Flying Faculty in Fashion and Interior Design from Edinburgh College of Art (ECA) for 5 weeks each semester. Rather than fully replicating the administrative and pedagogical style of one or other of the institutions within this joint partnership, the aim from the outset was to create a third way which acknowledges the quality assurance requirements of both Donghua and Edinburgh and the academic and technical needs of the students, and provides relevant development and support for all the SCF-based staff and Flying Academics. It has been well acknowledged by those involved in teaching across cultures that there is often a culture shock associated with transnational education, but that the experience of being involved in the delivery of a curriculum at a joint institution can also be very rewarding for staff and students. 
It became clear at SCF that if a third way were to be achieved which encourages innovative approaches to fashion education whilst balancing the expectations of Chinese and western concepts of education and the aims of the two institutions, then it would be necessary to construct a framework which develops close working relationships for the entire teaching team: not only between academics and students but also between technicians and administrators at ECA and SCF. The attempts at co-construction and integration are built on the sharing of cultural and educational experiences and knowledge, as well as the provision of opportunities for reflection on the pedagogical purpose of the curriculum and its delivery. Methods for evaluating the effectiveness of these aims include a series of surveys and interviews and analysis of data drawn from teaching projects delivered to the students, along with graduate successes from the last five years, since SCF first opened its doors. This paper will provide examples of best practice developed by SCF which have helped guide the faculty and embed common core values and aims of co-construction regulation and management, whilst building a pro-active TNE (Trans-National Education) team which enhances the learning experience for staff and students alike. Keywords: cultural co-construction, educational team management, multi-cultural challenges, TNE integration for teaching teams
Procedia PDF Downloads 121
359 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis
Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli
Abstract:
This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to monitor outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates had previously been identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using BioNumerics version 7.6 software with the Dice coefficient at 2% band tolerance and 2% optimization. For each serotype, the PFGE data were compared according to the geographical origin and the year of isolation. The Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All the isolates yielded appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged across the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in north-western Italy and allowed the creation of a database to detect outbreaks at an early stage. The results therefore confirm that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis still represents one of the most suitable approaches for characterizing strains, in particular for laboratories to which NGS techniques are not available.
Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE
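The band-based comparison underlying the cluster analysis can be sketched as follows. This is a minimal illustration of a Dice band-matching similarity with a 2% band tolerance, not the BioNumerics workflow used in the study, and the band sizes below are hypothetical.

```python
# Illustrative sketch of the Dice band-based similarity used to compare
# PFGE patterns. Band positions (fragment sizes in kb) are hypothetical;
# two bands match when they fall within the relative tolerance window.

def dice_similarity(bands_a, bands_b, tolerance=0.02):
    """Dice coefficient for two PFGE band lists (fragment sizes in kb)."""
    matched = 0
    used = set()
    for a in bands_a:
        for i, b in enumerate(bands_b):
            # Greedy matching: pair each band of A with the first unused
            # band of B within the 2% relative tolerance.
            if i not in used and abs(a - b) <= tolerance * max(a, b):
                matched += 1
                used.add(i)
                break
    # Dice: 2 * shared bands / total number of bands in both patterns
    return 2 * matched / (len(bands_a) + len(bands_b))

# Hypothetical patterns for two isolates: 5 of 6 bands match
isolate_1 = [48.5, 90.2, 145.0, 310.0, 680.0, 1050.0]
isolate_2 = [48.0, 91.0, 146.5, 315.0, 700.0, 1050.0]
print(round(dice_similarity(isolate_1, isolate_2), 2))  # → 0.83
```

A similarity of 0.83 for this pair would sit exactly at the ≥ 83% clustering threshold reported above.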
Procedia PDF Downloads 108
358 Preliminary Result on the Impact of Anthropogenic Noise on Understory Bird Population in Primary Forest of Gaya Island
Authors: Emily A. Gilbert, Jephte Sompud, Andy R. Mojiol, Cynthia B. Sompud, Alim Biun
Abstract:
Gaya Island in Sabah is known for its wildlife and marine biodiversity, and it has established itself as one of the top destinations for tourists from all around the world. Tourism activities on Gaya Island have contributed to Sabah’s economic revenue through the high number of tourists visiting the island. However, they have also increased the anthropogenic noise derived from tourism activities, which may greatly interfere with animals such as understory birds that rely on acoustic signals as a tool for communication. Many studies in other regions reveal that anthropogenic noise decreases the species richness of avian communities. In Malaysia, however, published research on the impact of anthropogenic noise on understory birds is still scarce. This study was conducted to fill that gap: it aims to investigate the impact of anthropogenic noise on the understory bird population. Three sites within the primary forest of Gaya Island were chosen to sample the level of anthropogenic noise in relation to the understory bird population. Noise mapping was used to measure the anthropogenic noise level and to identify zones with a high anthropogenic noise level (> 60 dB) and zones with a low anthropogenic noise level (< 60 dB), based on the standard noise-level threshold. Mist netting and ring banding were the methods used for this study, chosen because they can determine the diversity of the understory bird population in Gaya Island. The preliminary study was conducted from 15th to 26th April and from 5th to 10th May 2015, with 2 mist nets set up in each of the zones within the selected sites. The data were analyzed using descriptive analysis, presence/absence analysis, diversity indices and a diversity t-test, with the PAST software used to analyze the obtained data.
The results from this study present a total of 60 individuals, consisting of 12 species from 7 families of understory birds, recorded at the three sites in Gaya Island. The Shannon-Wiener index shows that the diversity of species in the high and low anthropogenic noise zones was 1.573 and 2.009, respectively. However, the statistical analysis shows no significant difference between these zones. Nevertheless, the presence/absence analysis shows that species richness in the low anthropogenic noise zone was higher than in the high anthropogenic noise zone. This result thus indicates an impact of anthropogenic noise on the population diversity of understory birds. An in-depth study with a larger sample size at the selected sites is still urgently needed to fully understand the impact of anthropogenic noise on the understory bird population, so that it can then be incorporated into wildlife management for a sustainable environment in Gaya Island.
Keywords: anthropogenic noise, biodiversity, Gaya Island, understory bird
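The diversity indices above follow the Shannon-Wiener formulation H' = −Σ pᵢ ln pᵢ over the species proportions. A minimal sketch, with the caveat that the per-species capture counts below are hypothetical (the abstract reports only the resulting indices):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical mist-net capture counts per species in one zone
low_noise_zone = [9, 7, 6, 5, 4, 3, 2, 2, 1, 1]
print(round(shannon_wiener(low_noise_zone), 3))
```

The index grows both with the number of species and with the evenness of individuals across them, which is why the low-noise zone's richer community yields the higher value.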
Procedia PDF Downloads 365
357 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades
Authors: Mojtaba Eshraghi Arani
Abstract:
Freight forwarders, known as the architects of transportation, play a vital role in supply chain management. The package of various services they provide has made their legal nature very controversial, so that they might be qualified on one occasion as principal or carrier and, on another, as agent of the shipper, as the case may be. They can even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. The courts in all countries have long had trouble distinguishing the “forwarder as agent” from the “forwarder as principal” (as is evident in the prominent case of “Vastfame Camera Ltd v Birkart Globistics Ltd And Others”, 2005, Hong Kong). In the case of a claim against a forwarder, it is far from clear which particular parameter a judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, and these differ completely from one country to another, as is the case in France in comparison with Germany and England. The unpredictability of the courts’ decisions in this regard has given freight forwarders the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently making the cargo interests incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. Long ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the “Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods” (hereinafter the “UNIDROIT draft convention”).
The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or as carrier, but it was never transformed into a convention and was eventually consigned to oblivion. Today, nearly six decades later, the need for such a convention is readily apparent. One might reason, however, that the same obstacles, in particular the resistance of the forwarders' association, FIATA, still exist, and thus that it is not logical to revive a forgotten draft convention after such a long period of time. This article argues that the main reason for resisting the UNIDROIT draft convention in the past was the then-pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. The latter convention, however, failed to enter into force in due time, and there has been no new accession since 1996; consequently, the UNIDROIT draft convention should be revived forcefully and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of the collected data has been used in this manuscript. The sources of the data are international conventions and case law.
Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention
Procedia PDF Downloads 75
356 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis
Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh
Abstract:
Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical community in both the diagnosis and the prognosis of problems of the human body, such as injuries, cardiovascular dysfunction, atherosclerotic plaque, etc. Any system that helps properly diagnose such problems or assists prognosis would be a boon to doctors and to medical society in general. Recently, a lot of work has been focused in this direction, including, but not limited to, various finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternative solutions to these problems and, in most cases, have also reduced the trauma on the patients. Some work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternative methods of prognosis and further solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of the functional, diagnostic and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow remains a challenging task.
Computational modeling of the blood flow and of the mechanical interactions that strongly affect the flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, have been modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values as well as other parameters, such as the effect of an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated against in vivo measurement data from the existing literature.
Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries
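Two of the parameters studied above, the Reynolds number and the wall shear stress, have simple closed forms in the idealized laminar (Poiseuille) limit. The sketch below is a back-of-the-envelope estimate, not the ANSYS model of the project; the vessel size, velocity and blood properties are assumed textbook values.

```python
import math

# Assumed blood properties (Newtonian approximation)
RHO = 1060.0   # density, kg/m^3
MU = 3.5e-3    # dynamic viscosity, Pa*s

def reynolds_number(velocity, diameter, rho=RHO, mu=MU):
    """Re = rho * v * D / mu for pipe flow."""
    return rho * velocity * diameter / mu

def poiseuille_wss(flow_rate, radius, mu=MU):
    """Wall shear stress tau_w = 4*mu*Q / (pi * R^3) for laminar pipe flow."""
    return 4.0 * mu * flow_rate / (math.pi * radius ** 3)

# Assumed values for a coronary-artery-sized vessel
v = 0.2                           # mean velocity, m/s
d = 4e-3                          # diameter, m
q = v * math.pi * (d / 2) ** 2    # volumetric flow rate, m^3/s

print(f"Re  = {reynolds_number(v, d):.0f}")       # well below transition
print(f"WSS = {poiseuille_wss(q, d / 2):.2f} Pa")
```

A stenosis narrows the lumen, raising the local velocity and hence both the local Reynolds number and the WSS, which is why these quantities are tracked across the blockage percentages.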
Procedia PDF Downloads 408
355 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
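In symbols, the regularized minimization described above can be sketched as follows; the notation (λ for the regularizer weight, n for the Laplacian order, α for the step size) is our assumption, since the abstract does not fix it:

```latex
J(u_0) \;=\; \frac{1}{2}\int_\Omega \bigl|\,u_1(u_0) - v_1\,\bigr|^2 \,\mathrm{d}x
\;+\; \frac{\lambda}{2}\int_\Omega \bigl|\,\nabla^{2n} u_0\,\bigr|^2 \,\mathrm{d}x,
\qquad
u_0^{(k+1)} \;=\; u_0^{(k)} \;-\; \alpha\,\frac{\delta J}{\delta u_0}\bigg|_{u_0^{(k)}},
```

where the functional derivative δJ/δu0 is obtained by integrating the adjoint NSE backwards from t = 1 to t = 0, and the regularizer penalizes the small scales of u0 that the local minima would otherwise contaminate with noise.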
Procedia PDF Downloads 224
354 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a viscous flow at the high supersonic Mach number of 4, close to the leading edge of the plate and considering a no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. The unstructured meshes are generated by the in-house software ‘Modeler’, developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the resolution of the Navier-Stokes equations based on the Finite Volume Method with the SIMPLE pressure-correction scheme for all-speed flows. The in-house C++ code is based on the two-dimensional Navier-Stokes equations for non-steady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is treated as a calorically perfect gas, with a constant Prandtl number and Sutherland's law for the viscosity.
Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles, as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack scheme within the Finite Difference Method, and the results will be compared.
Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave
Procedia PDF Downloads 493
353 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world’s energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements of the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for the complete decarbonization of the energy sector by 2050, in accordance with the federal climate protection plan. The Renewable Energy Sources Act 2017 stipulates, for the expansion of energy production from renewable sources in Germany, that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050 the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is a prediction of the energy requirement by 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increased energy efficiency and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which in the past were significant drivers of the increase in energy demand. The potential for demand reduction and efficiency increases (on the demand side) was also investigated. In particular, current and future technological developments in the energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand requires a higher electricity share of almost 1,138 TWh/a (of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity produced must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), the electricity requirement for the very ambitious scenario would thereby increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
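The scenario figures above can be cross-checked with a few lines of arithmetic. Note that the storage model, half of the electricity passing through storage at roughly 50% round-trip efficiency, is our reading of the abstract's numbers, not a stated formula.

```python
# Cross-check of the ambitious-scenario figures (all values in TWh per
# year, taken from the text; the storage model is an assumed interpretation).
electricity = 818    # direct electricity demand
ambient_heat = 229   # ambient heat for electric heat pumps
non_electric = 315   # approx. non-electric energy (non-electrifiable processes)

final_demand = electricity + ambient_heat + non_electric
print(final_demand)  # → 1362 (total final energy demand, TWh/a)

storage_fraction = 0.5       # share of electricity routed through storage
round_trip_efficiency = 0.5  # i.e. about 50% conversion/storage losses
stored = storage_fraction * electricity
extra_generation = stored * (1 / round_trip_efficiency - 1)
print(electricity + extra_generation)  # → 1227.0 (required electricity, TWh/a)
```

Under this reading, delivering the stored 409 TWh/a at 50% efficiency demands 409 TWh/a of extra generation, which reproduces the 1,227 TWh/a figure.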
Procedia PDF Downloads 134
352 Effect of Minimalist Footwear on Running Economy Following Exercise-Induced Fatigue
Authors: Jason Blair, Adeboye Adebayo, Mohamed Saad, Jeannette M. Byrne, Fabien A. Basset
Abstract:
Running economy is a key physiological parameter of an individual’s running efficacy and a valid tool for predicting performance outcomes. Of the many factors known to influence running economy (RE), footwear certainly plays a role, owing to characteristics that vary substantially from model to model. Although minimalist footwear is believed to enhance RE, and thereby endurance performance, conclusive research reports are scarce; indeed, debate remains as to which footwear characteristics most alter RE. The purposes of this study were therefore two-fold: (a) to determine whether wearing minimalist shoes results in better RE compared to shod running, and to identify relationships with kinematic and muscle activation patterns; (b) to determine whether changes in RE with minimalist shoes are still evident following a fatiguing bout of exercise. Well-trained male distance runners (n=10; 29.0 ± 7.5 yrs; 71.0 ± 4.8 kg; 176.3 ± 6.5 cm) first took part in a maximal O₂ uptake determination test (VO₂ₘₐₓ = 61.6 ± 7.3 ml min⁻¹ kg⁻¹) 7 days prior to the experimental sessions. Second, in a fully randomized fashion, an RE test consisting of three 8-min treadmill runs was performed in the shod and minimalist footwear conditions, prior to and following exercise-induced fatigue (EIF). The minimalist and shod conditions were tested with a minimum 7-day wash-out period between conditions. The RE bouts, interspaced by 2-min rest periods, were run at 2.79, 3.33, and 3.89 m s⁻¹ with a 1% grade. EIF consisted of 7 times 1000 m at 94-97% VO₂ₘₐₓ, interspaced with 3-min recoveries. Cardiorespiratory, electromyography (EMG), kinematics, rating of perceived exertion (RPE) and blood lactate measures were taken throughout the experimental sessions. A significant main speed effect on RE (p=0.001) and stride frequency (SF) (p=0.001) was observed.
The pairwise comparisons showed that running at 2.79 m s⁻¹ was less economical than running at 3.33 and 3.89 m s⁻¹ (3.56 ± 0.38, 3.41 ± 0.45, 3.40 ± 0.45 ml O₂ kg⁻¹ km⁻¹, respectively) and that SF increased as a function of speed (79 ± 5, 82 ± 5, 84 ± 5 strides min⁻¹). Further, EMG analyses revealed that root mean square EMG increased significantly as a function of speed for all muscles (biceps femoris, gluteus maximus, gastrocnemius, tibialis anterior, vastus lateralis). During EIF, the statistical analysis revealed a significant main effect of time on lactate production (from 2.7 ± 5.7 to 11.2 ± 6.2 mmol L⁻¹), RPE scores (from 7.6 ± 4.0 to 18.4 ± 2.7) and peak HR (from 171 ± 30 to 181 ± 20 bpm), except for the recovery period. Surprisingly, a significant main footwear effect was observed on running speed during the intervals (p=0.041): participants ran faster in minimalist shoes than shod (3:24 ± 0:44 min [95%CI: 3:14-3:34] vs. 3:30 ± 0:47 min [95%CI: 3:19-3:41]). Although EIF altered lactate production and RPE scores, no other effect was noticeable on RE, EMG, or SF pre- and post-EIF, apart from the expected speed effect. The significant footwear effect on running speed during EIF was unforeseen but could be due to differences in shoe mass and/or heel-toe drop. We also cannot discard an effect of speed on foot-strike pattern and, therefore, on running performance.
Keywords: exercise-induced fatigue, interval training, minimalist footwear, running economy
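For reference, a common way of expressing RE as an oxygen cost of transport is to normalize steady-state VO₂ by speed. The sketch below uses a hypothetical VO₂ value and the conventional ml O₂ kg⁻¹ km⁻¹ scale; it illustrates the normalization only, and is not the computation behind the specific values reported above.

```python
def running_economy(vo2_ml_kg_min, speed_m_s):
    """Oxygen cost of transport in ml O2 per kg per km.

    RE = steady-state VO2 divided by running speed, with speed
    converted from m/s to km/min.
    """
    speed_km_min = speed_m_s * 60.0 / 1000.0
    return vo2_ml_kg_min / speed_km_min

# Hypothetical steady-state VO2 at the slowest treadmill speed used above
print(round(running_economy(35.0, 2.79), 1))  # → 209.1
```

Dividing by speed is what makes RE comparable across the three treadmill velocities: a runner who needs less oxygen to cover the same distance is the more economical one.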
Procedia PDF Downloads 248
351 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage and sale of food products, as well as without the management of end-to-end traceability and the exchange of information. The aim of this research is the development of an integrated system of quality management and safety assurance based on the principles of HACCP, traceability and the system approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of manufacturing cooked sausage products. A methodology for implementing the integrated system based on the principles of HACCP, traceability and the system approach during the manufacture of cooked sausage products, effectively providing for the defined properties of the finished product, has been developed. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the quality management and safety assurance system based on the HACCP principles have been developed and substantiated. The paper reveals regularities in the influence of the application of the HACCP principles, traceability and the system approach on the quality and safety parameters of the finished product. Regularities in the identification of critical control points have also been determined.
The algorithm of functioning of the integrated quality management and safety assurance system has also been described, and key requirements have been defined for the development of software allowing the prediction of the properties of the finished product, the timely correction of the technological process and the traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system based on the HACCP principles, with elements of end-to-end traceability and the system approach, has been developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the quality management and safety assurance system have also been developed, along with a set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants. The research demonstrates the effectiveness of continuous monitoring of the manufacturing process through control at the identified critical control points, and the optimal number of critical control points for the manufacture of cooked sausage products has been substantiated. The main results of the research were appraised during 2013-2014 at seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
Procedia PDF Downloads 248
350 Control of Asthma in Children with Asthma during the Containment Period following the Covid-19 Pandemic
Authors: Meryam Labyad, Karima Fakiri, Widad Lahmini, Ghizlane Draiss, Mohamed Bouskraoui, Nadia Ouzennou
Abstract:
Background: Asthma is the most common chronic disease in children, affecting nearly 235 million people worldwide (WHO). In Morocco, asthma is much more common in children than in adults; the prevalence rate in children between 13 and 14 years of age is 20%. This pathology is marked by high morbidity and a significant impact on the quality of life and development of children, which requires a rigorous management strategy in order to achieve clinical control and reduce any risk to the patient. A search for aggravating factors is mandatory if a child has difficulty maintaining good asthma control. The objective of the present study is to describe asthma control during this confinement period in children aged 4 to 11 years followed in a pediatric pulmonology consultation. For children whose asthma is not controlled, a search for associations with contributing factors and with adherence to treatment is also among the objectives of the study. Knowing the level of asthma control and the influencing factors is a therapeutic priority in order to reduce hospitalizations and emergency care use. Objective: To assess asthma control and determine the factors influencing control levels in children with asthma during the confinement following the COVID-19 pandemic. Method: Prospective cross-sectional study, by questionnaire and structured interview, of 66 asthmatic children followed in the pediatric pulmonology consultation at the CHU MED VI of Marrakech from 13/06/2020 to 13/07/2020; asthma control was assessed by the Childhood Asthma Control Test (C-ACT). Results: 66 children and their parents were included (mean age 7.5 years); asthma was associated with allergic rhinitis (13.5% of cases), conjunctivitis (9% of cases), eczema (12% of cases), and the occurrence of infection (10.5% of cases).
The confinement period was marked by a decrease in the number of asthma attacks, reflected in a decrease in the number of emergency room visits (7.5%) by these asthmatic children. Asthma was well controlled in 71% of the children, and this control was significantly associated with good adherence to treatment (p<0.001), with the absence of infection (p<0.001) and with the absence of conjunctivitis (p=0.002) or rhinitis (p<0.001). This improvement in asthma control during confinement can be explained by the measures taken in the Kingdom to prevent the spread of COVID-19 (school closures, reduction in industrial activity, fewer means of transport, etc.), leading to a decrease in children's exposure to triggers, which accounts for the decrease in the number of children having had an infection, allergic rhinitis or conjunctivitis during this period. In addition, the close monitoring by parents resulted in better therapeutic adherence (42.4% were fully adherent). Confinement was positively perceived by 68% of the parents, and this perception was significantly associated with the level of asthma control (p<0.001). Conclusion: Maintaining good control can be achieved through improved therapeutic adherence and the avoidance of triggers, both of which were achieved during the containment period following the COVID-19 pandemic.
Keywords: asthma, control, COVID-19, children
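For reference, C-ACT scoring can be sketched as below. The item structure (four child items scored 0-3 and three parent items scored 0-5, for a maximum of 27) and the common cut-off of 20 or more for well-controlled asthma are standard for the instrument; the example responses are hypothetical.

```python
def c_act_score(child_items, parent_items):
    """Total C-ACT score: 4 child items (0-3 each) + 3 parent items (0-5 each)."""
    assert len(child_items) == 4 and all(0 <= s <= 3 for s in child_items)
    assert len(parent_items) == 3 and all(0 <= s <= 5 for s in parent_items)
    return sum(child_items) + sum(parent_items)

def is_controlled(score):
    # A total of 20 or more (out of a maximum of 27) is commonly taken
    # to indicate well-controlled asthma.
    return score >= 20

# Hypothetical responses for one child
score = c_act_score([3, 2, 3, 2], [4, 4, 5])
print(score, is_controlled(score))  # → 23 True
```

Classifying each child against this cut-off is what yields the 71% "well controlled" proportion reported above.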
Procedia PDF Downloads 186