Search results for: fully homomorphic encryption
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1900


280 The Effects of SCMs on the Mechanical Properties and Durability of Fibre Cement Plates

Authors: Ceren Ince, Berkay Zafer Erdem, Shahram Derogar, Nabi Yuzer

Abstract:

Fibre cement plates, often used in construction, are generally made using quartz as an inert material, cement as a binder and cellulose as a fibre. This paper first investigates the mechanical properties and durability of fibre cement plates when quartz is partly or fully replaced with diatomite. Diatomite not only has a lower density than quartz but also high pozzolanic activity. The main objective of this paper is to investigate the effects of supplementary cementing materials (SCMs) on the short and long term mechanical properties and durability characteristics of fibre cement plates prepared using diatomite. Supplementary cementing materials such as ground granulated blast furnace slag (GGBS) and fly ash (FA) are used in this study. 10, 20, 30 and 40% of GGBS and FA are used as partial replacement materials for cement. Short and long term mechanical properties such as compressive and flexural strengths, as well as capillary absorption, sorptivity characteristics and mass, were investigated. Consistency and setting time at each replacement level of SCMs were also recorded. The effects of using supplementary cementing materials on the carbonation and sulphate resistance of fibre cement plates were then examined experimentally. The results show, first of all, that the use of diatomite as a full or partial replacement for quartz resulted in a systematic decrease in the total mass of the fibre cement plates. The reduction in mass was largely due to the lower density and finer particle size of diatomite compared to quartz. The use of diatomite not only reduced the mass of these plates but also increased the compressive strength significantly as a result of its high pozzolanic activity. Replacement with both GGBS and FA resulted in a systematic decrease in short term compressive strength with increasing replacement levels. This was expected, as the total heat of hydration is much lower in GGBS and FA than in cement.
Long term results, however, indicated that the compressive strength of fibre cement plates prepared using both GGBS and FA increases with time, and hence the compressive strength of plates prepared using SCMs is either equivalent to or greater than that of plates prepared using cement alone. Durability characteristics of fibre cement plates prepared using SCMs were enhanced significantly. Measurements of capillary absorption and sorptivity characteristics also indicated that the plates prepared using SCMs have much lower permeability than plates prepared with cement alone. Much higher resistance to carbonation and sulphate attack was observed in plates prepared using SCMs. The results presented in this paper show that the use of SCMs not only supports the production of more sustainable construction materials but also enhances the mechanical properties and durability characteristics of fibre cement plates.

Keywords: diatomite, fibre, strength, supplementary cementing material

Procedia PDF Downloads 305
279 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in the average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge on monitoring tools that might effectively detect a fatigue response and the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability and readiness. Some commonly used methods to assess fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion, and examined the relationship between internal game load and perceived wellbeing with adductor muscle strength. Methods: Eight professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0° hip flexion with the lead investigator using hand-held dynamometry.
Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from total ice time data. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice session, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
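The internal load measure described above, session RPE derived from total ice time, is typically computed with Foster's session-RPE method: the product of the game RPE and the playing duration in minutes. A minimal sketch with hypothetical numbers (not study data):

```python
# Session RPE (Foster's method): internal load = RPE x duration (minutes).
# The RPE and ice-time values below are hypothetical, not study data.
def session_rpe(rpe: float, ice_time_min: float) -> float:
    """Internal game load in arbitrary units (AU)."""
    return rpe * ice_time_min

load = session_rpe(rpe=7, ice_time_min=18.5)
print(load)  # 129.5 AU
```

Using ice time rather than total game duration matters in hockey, where a player's shifts sum to a fraction of the 60-minute game.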

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 117
278 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses

Authors: Orit Baruth, Anat Cohen

Abstract:

Online courses have become common in many learning programs and various learning environments, particularly in higher education. Social distancing forced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as it is currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big-five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. Satisfaction level and personality identification of 108 students who participated in a fully online learning course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits. Correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotics' cluster.
Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extroverts' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality is a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable; alternatively, they may be offered the TPLS they dislike least. Study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether or not they like online learning.
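The clustering step described above can be illustrated with a minimal k-means sketch on synthetic Big-Five trait scores. All values here are hypothetical; the study itself clustered questionnaire data from 108 students, with k = 4:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids (keep the old centroid if a cluster empties)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Synthetic 5-trait score vectors for two hypothetical learner groups,
# separated mainly on the neuroticism dimension.
rng = np.random.default_rng(1)
calm = rng.normal(0.2, 0.05, size=(20, 5))
anxious = rng.normal(0.8, 0.05, size=(20, 5))
X = np.vstack([calm, anxious])
centroids, labels = kmeans(X, k=2)
```

With well-separated trait profiles, the two synthetic groups fall into distinct clusters; real questionnaire data is noisier, which is why the study pairs clustering with a separate correlation analysis against TPLS satisfaction.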

Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions

Procedia PDF Downloads 80
277 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes, which have taken place across the world, have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs, and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. An earthquake that occurs near a fault line can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect of near-field earthquakes on the response of structures. To fully examine this topic, a structure was designed following current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. Upon doing this, the prototype structural model, created using SAP2000, was subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field (NF) ground motions.
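A linear time history analysis of the kind described above amounts to integrating the equation of motion under a ground acceleration record. As an illustrative sketch (not the SAP2000 model), the displacement response of a single-degree-of-freedom oscillator to a hypothetical one-cycle sine pulse, a crude stand-in for a near-field pulse, can be computed with the explicit central difference method:

```python
import numpy as np

def sdof_response(ag, dt, f_n=1.0, zeta=0.05):
    """Displacement history of a linear SDOF oscillator (unit mass) under
    ground acceleration ag, using the explicit central difference method."""
    wn = 2.0 * np.pi * f_n               # natural circular frequency
    k, c = wn ** 2, 2.0 * zeta * wn      # stiffness, damping (unit mass)
    a = 1.0 / dt**2 + c / (2.0 * dt)
    b = k - 2.0 / dt**2
    d = 1.0 / dt**2 - c / (2.0 * dt)
    u = np.zeros(len(ag))
    for i in range(1, len(ag) - 1):
        p = -ag[i]                       # effective force for unit mass
        u[i + 1] = (p - b * u[i] - d * u[i - 1]) / a
    return u

# Hypothetical one-cycle sine pulse of period T_p, loosely mimicking the
# high-energy pulse of a near-field record; not a real ground motion.
dt = 0.005
t = np.arange(0.0, 10.0, dt)
T_p = 1.0
ag = np.where(t < T_p, 0.5 * 9.81 * np.sin(2 * np.pi * t / T_p), 0.0)
u = sdof_response(ag, dt, f_n=1.0, zeta=0.05)
peak_drift = np.abs(u).max()
```

Repeating this for oscillators of different natural periods and taking the peak of each response is exactly how a response spectrum, such as the NF design spectrum the study aims at, is assembled. Note the central difference scheme is conditionally stable (dt must be well below the natural period divided by pi).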

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 116
276 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability treated with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experimental and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion, and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained in this study. A 10 Nm pure moment was also applied on the top surface of the L1 vertebral body to drive the lumbar spine through the different motions, flexion and extension. The experimental results showed that, in flexion, the ROMs (± standard deviation) of L3–L4 were 1.35 ± 0.23, 1.34 ± 0.67, and 1.66 ± 0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35 ± 0.29, 4.06 ± 0.87, and 4.2 ± 0.32 degrees, respectively.
No statistical significance was observed among these three groups (P > 0.05). In extension, the ROMs of L3–L4 were 0.89 ± 0.16, 1.69 ± 0.08, and 1.73 ± 0.13 degrees, respectively. At L4–L5, the ROMs were 1.4 ± 0.12, 2.44 ± 0.26, and 2.5 ± 0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings. No significant differences were found at L4–L5 in either flexion or extension for any group. Only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present experimental and finite element results reveal no significant differences between unilateral and bilateral laminotomies during flexion and extension in short-term follow-up. From a biomechanical point of view, bilateral laminotomies appear to exhibit stability similar to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrates this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
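The significance comparisons above rest on simple group statistics. A minimal one-way ANOVA F statistic, of the kind used to compare ROMs across the intact, unilateral and bilateral conditions, can be sketched as follows; the worked example uses hand-checkable numbers, not the study's raw ROM data:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Tiny worked example: groups [0, 1] and [1, 2] give SSB = 1, SSW = 1,
# df = (1, 2), hence F = (1/1) / (1/2) = 2.
f_stat = one_way_anova_f([0, 1], [1, 2])
```

The F statistic is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p-value reported as P > 0.05 in the text.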

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 379
275 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pretest will be administered. The same test will be given as a posttest after the last topic is covered. In order to establish equality of the groups' initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in any of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test.
The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the result of the pretest scores ANOVA is not significant. If the result of the score differences ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: significant correlation of IV with DV, significant correlation of IV with MV, significant relationship of MV to DV when both IV and MV are predictors in a regression model, and an absolute value of the coefficient of IV as sole predictor that is larger than when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.
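The mediation check described above (compare the IV's regression coefficient alone versus alongside the MV) can be sketched on synthetic data. All the numbers below are illustrative, not the study's measurements:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients; first entry is the intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic illustration: IV (blended learning exposure) influences DV
# (posttest-pretest gain) partly through MV (outcomes-based engagement).
rng = np.random.default_rng(0)
n = 200
iv = rng.normal(size=n)
mv = 0.8 * iv + rng.normal(scale=0.5, size=n)              # IV -> MV path
dv = 0.3 * iv + 0.6 * mv + rng.normal(scale=0.5, size=n)   # both -> DV

c_total = ols(iv.reshape(-1, 1), dv)[1]      # IV alone: total effect
a_path = ols(iv.reshape(-1, 1), mv)[1]       # IV -> MV
beta = ols(np.column_stack([iv, mv]), dv)    # IV and MV together
c_direct, b_path = beta[1], beta[2]
# Mediation is indicated when a_path and b_path are nonzero and
# |c_total| > |c_direct|: the IV coefficient shrinks once MV is added.
```

This mirrors the classical Baron and Kenny procedure; in the actual study each coefficient would additionally be tested for statistical significance rather than inspected numerically.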

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 271
274 Dengue Prevention and Control in Kaohsiung City

Authors: Chiu-Wen Chang, I-Yun Chang, Wei-Ting Chen, Hui-Ping Ho, Ruei-Hun Chang, Joh-Jong Huang

Abstract:

Kaohsiung City is located in the tropical region where Aedes aegypti and Aedes albopictus are distributed; once the virus invades, it can easily trigger a local epidemic. In addition, Kaohsiung City has a world-class airport and harbor, and its trade and tourism links with other countries are close and frequent, especially with the Southeast Asian countries that also suffer from dengue. Therefore, Kaohsiung City faces the difficult challenge of dengue every year. The objective of this study was to enhance dengue clinical care, border management and vector surveillance in Kaohsiung City by establishing larger-scale, innovative and more coordinated dengue prevention and control strategies in 2016, including (1) integrated medical programs: facilitating 657 contract medical institutions, widely setting up NS1 rapid tests in clinics, enhancing the triage and referral system, and daily monitoring management of dengue cases; (2) border quarantine: comprehensive NS1 screening for foreign workers and fishery workers upon immigration, hospitalization and isolation for suspected cases, and health education for high-risk groups (foreign students, other tourists); (3) mosquito control: widespread use of Gravitraps to monitor mosquito density in the environment, and use of the NS1 rapid screening test to detect dengue virus in the community; (4) health education: creation of a dengue app for people to immediately inquire about the risk map and nearby medical resources, routine health education in all districts to strengthen the public's dengue knowledge, and a neighborhood cleaning awards program. The results showed that after the newly integrated dengue prevention and control strategies were fully implemented in Kaohsiung City, the number of confirmed cases in 2016 declined to 342, the majority of which were a continuation of the 2015 epidemic; in fact, only two cases were confirmed after the summer of 2016. Moreover, the dengue mortality rate successfully decreased to 0% in 2016.
Furthermore, comparing the reporting rates from medical institutions in 2014 and 2016, the rate dropped from 27.07% to 19.45% for medical centers and from 36.55% to 29.79% for regional hospitals; however, the reporting rate of district hospitals increased from 11.88% to 15.87%, and that of general practice clinics increased from 24.51% to 34.89%. Clearly, under strengthened medical management, the notification ratio shifted from medical centers towards general clinics, achieving a great effect in dengue clinical management and dengue control.

Keywords: dengue control, integrated control strategies, clinical management, NS1

Procedia PDF Downloads 240
273 A Doctrinal Research and Review of Hashtag Trademarks

Authors: Hetvi Trivedi

Abstract:

Technological escalation cannot be negated. The same is true for the benefits of technology. However, such escalation has interfered with the traditional theories of protection under Intellectual Property Rights. Of the many trends that have disrupted the old-school understanding of Intellectual Property Rights, one is hashtags. What began modestly in the year 2007 has now earned a remarkable status, and coupled with the unprecedented rise of social media, the hashtag culture has witnessed monstrous growth. A tiny symbol on the keypad of phones or computers is now a major trend which also serves companies as a critical investment measure in establishing their brand in the market. Due to this, one section of Intellectual Property Rights, trademarks, is undergoing a humongous transformation, with hashtags like #icebucket, #tbt or #smilewithacoke getting trademark protection. So, as the traditional theories of IP take on the modern trends, it is necessary to understand the change and challenge at a theoretical and proportional level and, where need be, question the change. Traditionally, Intellectual Property Rights serve the societal need for intellectual productions that ensure its holistic development as well as cultural, economic, social and technological progress. In a two-pronged effort at ensuring continuity of creativity, IPRs recognize the investment of individual efforts that go into creation by way of offering protection. Commonly placed under two major theories, Utilitarian and Natural, IPRs aim to accord protection and recognition to an individual's creation or invention, which serves as an incentive for further creations or inventions, thus fully protecting the creative, inventive or commercial labour invested in the same. In return, the creator, by lending the public access to the creation, reaps various benefits. In this way, Intellectual Property Rights form a 'social contract' between the author and society.
IPRs are similarly attached to a social function, whereby individual rights must be weighed against competing rights and, to the farthest limit possible, both sets of rights must be treated in a balanced manner. To put it differently, both the society and the creator must be put on an equal footing, with neither party's rights subservient to the other's. A close look, through doctrinal research, at the recent trend of trademark protection makes the social function of IPRs seem to be moving far from this basic philosophy. Thus, where technology interferes with the philosophies of law, it is important to check and allow such growth only in moderation, for neither is superior to the other. The human expansionist nature may want everything under the sky that can be tweaked slightly to be counted and protected as Intellectual Property, like a common parlance word transformed into a hashtag; however, IP, in order to survive on its philosophies, needs to strike a balance. A unanimous global decision on the judicious use of IPR recognition and protection is the need of the hour.

Keywords: hashtag trademarks, intellectual property, social function, technology

Procedia PDF Downloads 104
272 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle

Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha

Abstract:

An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded hull boat is deployed, which is rugged, tough, quick to deploy and fast moving. It is suitable for environmental monitoring, and it is designed for easy maintenance. A 2 HP electric outboard marine motor is used, which is powered by a lithium-ion battery and can also be charged from a solar charger. All connections are completely waterproof to IP67 ratings. At full throttle, the marine motor is capable of speeds up to 7 km/h. The motor is integrated with an open source Cortex-M4F based controller for adjusting the direction of the motor. This UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One channel of an 8-channel 2.4 GHz radio link transmitter is used for toggling between the different modes of the UMSV. An on-board GPS system has been fitted to the electric outboard marine motor to determine range and GPS position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open source processor, facilitating real-time capture of thermal images; the results are stored on a micro SD card, which serves as the data storage device for the system. The thermal camera is interfaced to the open source processor through the SPI protocol. These thermal images are used for finding oil spills and for locating people who are drowning in low visibility during the night. A real-time clock (RTC) module is attached to the battery to provide the date and time of the thermal images captured. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, achieving a range of up to 40 miles at higher power output.
A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH level, temperature, water level and pressure (absolute). It can withstand a maximum pressure of 160 psi, at depths up to 100 m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle.
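The mode toggling described above, one RC channel selecting among manual, semi-autonomous and fully automated operation, is commonly implemented by banding the channel's pulse width. A hypothetical sketch follows; the thresholds, band order and channel assignment are assumptions for illustration, not the project's actual firmware:

```python
# Map an RC channel's PWM pulse width (microseconds) to an operating mode.
# Thresholds are hypothetical; real firmware would calibrate per radio.
MANUAL, SEMI_AUTONOMOUS, FULLY_AUTOMATED = "manual", "semi", "auto"

def pwm_to_mode(pulse_us: int) -> str:
    """Three-position-switch style banding of a 1000-2000 us PWM signal."""
    if pulse_us < 1300:
        return MANUAL
    if pulse_us < 1700:
        return SEMI_AUTONOMOUS
    return FULLY_AUTOMATED

print(pwm_to_mode(1100))  # manual
print(pwm_to_mode(1500))  # semi
print(pwm_to_mode(1900))  # auto
```

In practice a little hysteresis around each threshold prevents the vehicle from chattering between modes when the stick or switch sits near a band edge.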

Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe

Procedia PDF Downloads 213
271 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of around 50 petaFLOPS, to accomplish that. In other words, the ability to perform 50 quadrillion (5 × 10¹⁶) floating-point operations per second, a number a human cannot even fathom. To put it in perspective, GOD is calculating while the computer is going through those calculations each second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists that fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we will be receiving and reading our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 87
270 A Disappearing Radiolucency of the Mandible Caused by Inadvertent Trauma Following IMF Screw Placement

Authors: Anna Ghosh, Dominic Shields, Ceri McIntosh, Stephen Crank

Abstract:

A 29-year-old male was referred to the maxillofacial unit by his general dental practitioner via a routine pathway regarding a large periapical lesion on the LR4 with root resorption. The patient was asymptomatic, the LR4 vital and unrestored, and this was an incidental finding at a routine check-up. The patient's past medical history was unremarkable. Examination revealed no extra- or intra-oral pathology and non-mobile teeth. No focal neurology was detected. An orthopantomogram demonstrated a well-defined, unilocular, corticated radiolucency associated with the LR4. The root appeared shortened, with the radiolucency lying between the root and a radio-opacity possibly representing the displaced apical tip of the tooth. It was recommended that the referring general practitioner proceed with orthograde root canal therapy, after which exploration, enucleation, and retrograde root filling of the LR4 would be carried out by the maxillofacial unit. The patient was reviewed six months later; due to the COVID-19 pandemic, he had been unable to access general dental services for the root canal treatment. He was still entirely asymptomatic. A one-year review was planned in the hope that this would allow time for the orthograde root canal therapy to be completed. At this review, the orthograde root canal therapy had still not been completed. Interestingly, a repeat orthopantomogram revealed a significant reduction in the size of the lesion, with good bony infill. Due to the ongoing delays with primary care dental therapy, the patient was subsequently referred internally to the restorative dentistry department for care. The patient was seen again by oral and maxillofacial surgery in mid-2022, where he still reported the tooth as asymptomatic with no focal neurology.
The patient's history was fully reviewed, and it was noted that 15 years previously the patient had undergone open reduction and internal fixation of a left angle of mandible fracture. Temporary IMF involving IMF screws and fixation wires was employed to maintain occlusion during plating and subsequently removed post-operatively. It is proposed that the radiolucency was a result of an IMF screw penetrating the LR4 root during placement, causing resorption of the tooth root and development of the radiolucency. This case highlights the importance of careful selection of screw size and of the physical site and placement of IMF screws, as there can be permanent damage to a patient's dentition.

Keywords: facial trauma, inter-maxillary fixation, mandibular radiolucency, oral and maxillo-facial surgery

Procedia PDF Downloads 96
269 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area size and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds across significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the developed method is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
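As a rough illustration of the post-segmentation measurement step described above, the sketch below labels connected components in a binary mask and converts pixel counts into areas. It is a minimal stand-in, using a hypothetical mask and pixel scale, not the authors' U-net pipeline or their object delimitation algorithm.

```python
# Illustrative sketch (not the authors' pipeline): once a segmentation has
# produced a binary mask, non-overlapping crystals can be separated with
# connected-component labeling and measured by pixel counts.

def label_components(mask):
    """4-connected component labeling; returns a label grid and component count."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                current += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and labels[y][x] == 0:
                        labels[y][x] = current
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, current

def crystal_areas(mask, nm_per_pixel=1.0):
    """Area of each detected crystal, converted with a (hypothetical) pixel scale."""
    labels, n = label_components(mask)
    counts = [0] * n
    for row in labels:
        for v in row:
            if v:
                counts[v - 1] += 1
    return [a * nm_per_pixel ** 2 for a in counts]

# Hypothetical 3x5 mask standing in for a segmented SEM image: two 4-pixel crystals
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
areas = crystal_areas(mask, nm_per_pixel=2.0)
```

From such per-crystal areas (and, analogously, perimeters), the frequency distributions described in the abstract follow directly.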

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
268 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan

Authors: Narjis Kahtoon

Abstract:

The issue of wage discrimination on the basis of gender, together with the social problems faced by working women, has been a significant research problem for several decades. Although many studies have explored reasons for the persistence of inequality in the wages of males and females, none has successfully explained away the entire differential. Gender-based wage discrimination and the social problems of working women are global issues: despite differences in the political, economic and social make-up of countries all over the world, gender wage discrimination and social constraints are present. The aim of the research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the male-female remuneration gap in the readymade garment sector of Pakistan. Population growth rate is a significant indicator used to explain population change and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium and large sized firms, with an estimated 30 percent of the textile-garment workforce being female. The readymade garment industry is labor intensive, relies on the skills of individual workers, and provides the highest value addition in the textile sector. In the garment sector, female workers are concentrated in poorly paid, labor-intensive down-stream production (readymade garments, linen, towels, etc.), while male workers dominate capital-intensive processes (ginning, spinning and weaving). Gender wage discrimination and social constraints are a reality in the Pakistani labor market. This research allows us not only to properly measure the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research evaluates these measures for the three main clusters: Lahore, Karachi, and Faisalabad.
These data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan, and they provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earnings equation and estimates it separately by gender; the research will also apply the cultural dimensions developed by Hofstede (2001) to profile a country's cultural status and compare those cultural dimensions to the wage inequalities. The readymade garment sector is one of Pakistan's important sectors, since its products have huge demand at home and abroad. This research will have a major influence on the measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan.
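The 'decomposition' named in the keywords is commonly the Oaxaca-Blinder split of a Mincerian log-wage gap into an explained (endowments) part and an unexplained (coefficients) part. A minimal sketch, with entirely hypothetical coefficients and group means rather than estimates from the garment-sector data:

```python
# Two-fold Oaxaca-Blinder decomposition of a mean log-wage gap, the standard
# companion to Mincerian earnings regressions estimated separately by gender.
# All numbers below are hypothetical illustrations, not fitted estimates.

def oaxaca_blinder(x_m, x_f, beta_m, beta_f):
    """Split the mean log-wage gap into an 'explained' part (differences in
    endowments) and an 'unexplained' part (differences in returns), using
    the male coefficients as the reference wage structure."""
    gap = sum(b * x for b, x in zip(beta_m, x_m)) - sum(b * x for b, x in zip(beta_f, x_f))
    explained = sum(bm * (xm - xf) for bm, xm, xf in zip(beta_m, x_m, x_f))
    unexplained = sum((bm - bf) * xf for bm, bf, xf in zip(beta_m, beta_f, x_f))
    return gap, explained, unexplained

# Regressors: intercept, years of schooling, years of experience (hypothetical)
x_male = [1.0, 8.0, 10.0]
x_female = [1.0, 6.0, 9.0]
beta_male = [1.20, 0.060, 0.020]
beta_female = [1.10, 0.055, 0.015]

gap, explained, unexplained = oaxaca_blinder(x_male, x_female, beta_male, beta_female)
```

By construction the two components sum to the total gap, which is what makes the split useful for attributing how much of the differential reflects endowments versus discrimination.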

Keywords: gender wage differentials, decomposition, garment, cultural

Procedia PDF Downloads 183
267 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
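As a hedged sketch of the slip-rate-to-activity-rate conversion discussed above (not the SHERIFS algorithm itself), the snippet below balances the seismic moment rate accumulated on a fault, mu * A * slip_rate, against the moment of characteristic events given by the Hanks-Kanamori relation; all fault parameters are hypothetical.

```python
# Moment-balance sketch: the moment accumulation rate on a fault is released
# by earthquakes of a chosen characteristic magnitude. This illustrates the
# simplest form of the slip-rate budget idea; real models (e.g. SHERIFS)
# distribute the budget over many single- and multi-fault ruptures.

def moment_rate(mu, area_m2, slip_rate_m_per_yr):
    """Seismic moment accumulation rate in N*m per year (mu = rigidity, Pa)."""
    return mu * area_m2 * slip_rate_m_per_yr

def event_moment(mw):
    """Scalar seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.05)

def annual_rate(mu, area_m2, slip_rate_m_per_yr, mw):
    """Annual rate of characteristic Mw events balancing the slip-rate budget."""
    return moment_rate(mu, area_m2, slip_rate_m_per_yr) / event_moment(mw)

# Hypothetical low-strain fault: rigidity 30 GPa, 40 km x 15 km plane,
# 0.1 mm/yr slip rate, characteristic magnitude Mw 6.5
rate = annual_rate(3.0e10, 40e3 * 15e3, 1.0e-4, 6.5)
recurrence_years = 1.0 / rate
```

For such slow faults the implied recurrence interval is of the order of millennia, which is exactly why hazard estimates in low strain rate regions are so sensitive to the slip-rate and segmentation assumptions.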

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 34
266 The Report of Co-Construction into a Trans-National Education Teaching Team

Authors: Juliette MacDonald, Jun Li, Wenji Xiang, Mingwei Zhao

Abstract:

Shanghai International College of Fashion and Innovation (SCF) was created as a result of a collaborative partnership agreement between the University of Edinburgh and Donghua University. The College provides two programmes, Fashion Innovation and Fashion Interior Design, and the overarching curriculum is intended to develop innovation and creativity within an international learning, teaching, knowledge exchange and research context. The research problem presented here focuses on the multi-national, multi-cultural faculty in the team, the challenges arising from difficulties in communication, and the associated limitations of management frameworks. The teaching faculty at SCF are drawn from China, Finland, Korea, Singapore and the UK, with input from Flying Faculty from Fashion and Interior Design, Edinburgh College of Art (ECA), for 5 weeks each semester. Rather than fully replicating the administrative and pedagogical style of one or other of the institutions within this joint partnership, the aim from the outset was to create a third way which acknowledges the quality assurance requirements of both Donghua and Edinburgh and the academic and technical needs of the students, and which provides relevant development and support for all the SCF-based staff and Flying Academics. It has been well acknowledged by those involved in teaching across cultures that there is often a culture shock associated with transnational education, but that the experience of being involved in the delivery of a curriculum at a joint institution can also be very rewarding for staff and students.
It became clear at SCF that if a third way was to be achieved which encourages innovative approaches to fashion education whilst balancing the expectations of Chinese and Western concepts of education and the aims of the two institutions, then it would be necessary to construct a framework which develops close working relationships across the entire teaching team: not only between academics and students but also between technicians and administrators at ECA and SCF. The attempts at co-construction and integration are built on the sharing of cultural and educational experiences and knowledge, as well as the provision of opportunities for reflection on the pedagogical purpose of the curriculum and its delivery. Methods for evaluating the effectiveness of these aims include a series of surveys and interviews and the analysis of data drawn from teaching projects delivered to the students, along with graduate successes over the last five years, since SCF first opened its doors. This paper will provide examples of best practice developed by SCF which have helped guide the faculty and embed the common core values and aims of co-construction regulations and management, whilst building a pro-active TNE (Trans-National Education) team which enhances the learning experience for staff and students alike.

Keywords: cultural co-construction, educational team management, multi-cultural challenges, TNE integration for teaching teams

Procedia PDF Downloads 100
265 Preliminary Result on the Impact of Anthropogenic Noise on Understory Bird Population in Primary Forest of Gaya Island

Authors: Emily A. Gilbert, Jephte Sompud, Andy R. Mojiol, Cynthia B. Sompud, Alim Biun

Abstract:

Gaya Island in Sabah is known for its wildlife and marine biodiversity, and it has marked itself as one of the top destinations for tourists from all around the world. Gaya Island tourism activities have contributed to Sabah's economic revenue through the high number of tourists visiting the island. However, this has led to increased anthropogenic noise derived from tourism activities, which may greatly interfere with animals, such as understory birds, that rely on acoustic signals as a tool for communication. Many studies in other parts of the region reveal that anthropogenic noise decreases the species richness of avian communities. In Malaysia, however, published research regarding the impact of anthropogenic noise on understory birds is still very scarce, and this study was conducted to fill that gap. This study aims to investigate the impact of anthropogenic noise on the understory bird population. Three sites within the primary forest of Gaya Island were chosen to sample the level of anthropogenic noise in relation to the understory bird population. A noise mapping method was used to measure the anthropogenic noise level and to identify zones with high (> 60 dB) and low (< 60 dB) anthropogenic noise levels, based on the standard noise threshold. The methods used in this study were mist netting and ring banding, chosen because they can determine the diversity of the understory bird population in Gaya Island. The preliminary study was conducted from 15th to 26th April and 5th to 10th May 2015, during which two mist nets were set up in each of the zones within the selected sites. The data were analyzed using descriptive analysis, presence-absence analysis, diversity indices and a diversity t-test, with the PAST software used to analyze the obtained data.
The results from this study present a total of 60 individuals, consisting of 12 species from 7 families of understory birds, recorded at the three sites in Gaya Island. The Shannon-Wiener index shows that the diversity of species in the high and low anthropogenic noise zones was 1.573 and 2.009, respectively. However, the statistical analysis shows that there was no significant difference between these zones. Nevertheless, the presence-absence analysis shows that species richness in the low anthropogenic noise zone was higher than in the high anthropogenic noise zone. Thus, this result indicates that there is an impact of anthropogenic noise on the population diversity of understory birds. There is still an urgent need to conduct an in-depth study with an increased sample size at the selected sites in order to fully understand the impact of anthropogenic noise on the understory bird population, so that it can then be incorporated into wildlife management for a sustainable environment in Gaya Island.
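The Shannon-Wiener index reported above is H' = -sum(p_i * ln p_i) over the species proportions in each zone. A minimal sketch with hypothetical capture counts (not the actual Gaya Island data):

```python
# Shannon-Wiener diversity index from per-species capture counts.
# The two count lists below are hypothetical illustrations of a more even,
# species-rich zone versus a dominated, species-poor zone.
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over species proportions; zero counts are skipped."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

low_noise_zone = [6, 5, 4, 4, 3, 2, 2, 1]   # more species, more evenly captured
high_noise_zone = [9, 6, 3, 1]              # fewer species, stronger dominance

h_low = shannon_wiener(low_noise_zone)
h_high = shannon_wiener(high_noise_zone)
```

Higher evenness and richness both raise H', which is why the low-noise zone in the study scores higher (2.009 vs. 1.573) even before the difference reaches statistical significance.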

Keywords: anthropogenic noise, biodiversity, Gaya Island, understory bird

Procedia PDF Downloads 338
264 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades

Authors: Mojtaba Eshraghi Arani

Abstract:

Freight forwarders, who are known as the architects of transportation, play a vital role in supply chain management. The package of various services which they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified on one occasion as principal or carrier and, on other occasions, as agent of the shipper, as the case may be. They could even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. The courts in all countries have long had trouble distinguishing the 'forwarder as agent' from the 'forwarder as principal' (as is outstanding in the prominent case of Vastfame Camera Ltd v Birkart Globistics Ltd and Others, 2005, Hong Kong). It is not fully known, in the case of a claim against the forwarder, which particular parameter would be used by the judge among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from those of other countries, as is the case in France in comparison with Germany and England. The unpredictability of the courts' decisions in this regard has provided freight forwarders with the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently making the cargo interests incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. A long time ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the 'Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods' (hereinafter the 'UNIDROIT draft convention').
The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it failed to become a convention and was eventually consigned to oblivion. Today, nearly six decades from that era, the necessity of such a convention can be felt clearly. However, one might reason that the same obstacles, in particular the resistance of the forwarders' association, FIATA, still exist, and thus that it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the then-pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to enter into force in due time, with no new accessions since 1996, as a result of which the UNIDROIT draft convention must be revived strongly and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of the collected data has been used in this manuscript. The sources of the data are the analysis of international conventions and case law.

Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention

Procedia PDF Downloads 54
263 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis

Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh

Abstract:

Computational bio-mechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and prognosis of human body related issues such as injuries, cardio-vascular dysfunction, atherosclerotic plaque, etc. Any system that would help properly diagnose such problems or assist prognosis would be a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, including but not limited to various finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternate solutions for such problems and, in most cases, have also reduced the trauma on the patients. Some work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardio-vascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with the pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow remains a challenging task.
The computational modeling of the blood flow and of the mechanical interactions that strongly affect the flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, were modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values and of other parameters, such as an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
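Two of the parameters studied above, Reynolds number and wall shear stress, can be estimated analytically for an idealized artery before running a full CFD model. The sketch below uses the Hagen-Poiseuille laminar profile and typical textbook blood properties; it is a crude sanity check under strong simplifying assumptions (steady, Newtonian, rigid-walled flow), not the project's ANSYS setup.

```python
# Back-of-envelope hemodynamics: Reynolds number and Poiseuille wall shear
# stress for an idealized straight artery. Real arterial flow is pulsatile
# and non-Newtonian; these formulas only bound the expected CFD results.

def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

def poiseuille_wss(viscosity, mean_velocity, radius):
    """Wall shear stress tau_w = 4 * mu * v_mean / R (Pa) for fully
    developed laminar pipe flow."""
    return 4.0 * viscosity * mean_velocity / radius

# Typical textbook values: blood rho ~ 1060 kg/m^3, mu ~ 3.5e-3 Pa*s;
# a medium artery with D ~ 4 mm and mean velocity ~ 0.3 m/s
re_artery = reynolds_number(1060.0, 0.3, 4e-3, 3.5e-3)
wss = poiseuille_wss(3.5e-3, 0.3, 2e-3)  # Pa
```

The resulting Re of a few hundred (laminar) and WSS of a few pascals sit in the physiological range usually quoted for healthy arteries, which is the kind of baseline against which stenosed (blocked) cases are compared.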

Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries

Procedia PDF Downloads 379
262 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and also worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that they cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050 the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewables is to predict the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increased energy efficiency and demand reduction are set very ambitiously. To build a basis for comparison, the second scenario provides results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which in the past were significant drivers of the increase in energy demand. The potential for energy demand reduction and efficiency increases (on the demand side) was also investigated. In particular, current and future technological developments in the energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in different sectors such as households, commercial, industry and transport are taken into account, supporting the idea that for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps and approximately 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand will require a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity generated must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this would mean that the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
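The arithmetic behind the ambitious-scenario figures can be reproduced directly: the three demand components sum to 1,362 TWh/a, and routing half of the 818 TWh/a of electricity through storage with roughly 50% round-trip losses raises the generation requirement to 1,227 TWh/a. A minimal sketch restating the paper's own numbers:

```python
# Reproducing the ambitious-scenario figures quoted in the abstract.
# Interpretation of the storage step is a plausible reading of the text:
# each TWh delivered from storage needs 1 / (1 - loss) TWh of generation.

electricity = 818.0    # TWh/a
ambient_heat = 229.0   # TWh/a, for electric heat pumps
non_electric = 315.0   # TWh/a, raw materials for non-electrifiable processes

final_demand = electricity + ambient_heat + non_electric  # 1,362 TWh/a

stored_share = 0.5      # half of the electricity passes through storage
round_trip_loss = 0.5   # ~50% conversion and storage losses

required_electricity = (electricity * (1 - stored_share)
                        + electricity * stored_share / (1 - round_trip_loss))
```

With these assumptions the directly used half stays at 409 TWh/a while the stored half doubles to 818 TWh/a, giving the 1,227 TWh/a quoted above.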

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 113
261 Effect of Minimalist Footwear on Running Economy Following Exercise-Induced Fatigue

Authors: Jason Blair, Adeboye Adebayo, Mohamed Saad, Jeannette M. Byrne, Fabien A. Basset

Abstract:

Running economy is a key physiological parameter of an individual's running efficacy and a valid tool for predicting performance outcomes. Of the many factors known to influence running economy (RE), footwear certainly plays a role, owing to characteristics that vary substantially from model to model. Although minimalist footwear is believed to enhance RE and thereby endurance performance, conclusive research reports are scarce, and debates remain as to which footwear characteristics most alter RE. The purposes of this study were therefore two-fold: (a) to determine whether wearing minimalist shoes results in better RE compared to shod running, and to identify relationships with kinematic and muscle activation patterns; (b) to determine whether changes in RE with minimalist shoes are still evident following a fatiguing bout of exercise. Well-trained male distance runners (n=10; 29.0 ± 7.5 yrs; 71.0 ± 4.8 kg; 176.3 ± 6.5 cm) first took part in a maximal O₂ uptake determination test (VO₂ₘₐₓ = 61.6 ± 7.3 ml min⁻¹ kg⁻¹) 7 days prior to the experimental sessions. Second, in a fully randomized fashion, an RE test consisting of three 8-min treadmill runs in shod and minimalist footwear was performed prior to and following exercise-induced fatigue (EIF). The minimalist and shod conditions were tested with a minimum 7-day wash-out period between conditions. The RE bouts, interspaced by 2-min rest periods, were run at 2.79, 3.33, and 3.89 m s⁻¹ with a 1% grade. EIF consisted of 7 × 1000 m at 94-97% VO₂ₘₐₓ, interspaced with 3-min recoveries. Cardiorespiratory, electromyography (EMG), kinematics, rating of perceived exertion (RPE) and blood lactate measurements were taken throughout the experimental sessions. A significant main effect of speed on RE (p=0.001) and stride frequency (SF) (p=0.001) was observed.
The pairwise comparisons showed that running at 2.79 m s⁻¹ was less economical compared to 3.33 and 3.89 m s⁻¹ (3.56 ± 0.38, 3.41 ± 0.45, 3.40 ± 0.45 ml O₂ kg⁻¹ km⁻¹, respectively) and that SF increased as a function of speed (79 ± 5, 82 ± 5, 84 ± 5 strides min⁻¹). Further, EMG analyses revealed that root mean square EMG significantly increased as a function of speed for all muscles (biceps femoris, gluteus maximus, gastrocnemius, tibialis anterior, vastus lateralis). During EIF, the statistical analysis revealed a significant main effect of time on lactate production (from 2.7 ± 5.7 to 11.2 ± 6.2 mmol L⁻¹), RPE scores (from 7.6 ± 4.0 to 18.4 ± 2.7) and peak HR (from 171 ± 30 to 181 ± 20 bpm), except for the recovery period. Surprisingly, a significant main effect of footwear on running speed was observed during the intervals (p=0.041): participants ran faster in minimalist shoes than shod (3:24 ± 0:44 min [95%CI: 3:14-3:34] vs. 3:30 ± 0:47 min [95%CI: 3:19-3:41]). Although EIF altered lactate production and RPE scores, no other effect was noticeable on RE, EMG, or SF pre- and post-EIF, except for the expected speed effect. The significant footwear effect on running speed during EIF was unforeseen but could be due to differences in shoe mass and/or heel-toe drop. We also cannot rule out an effect of speed on foot-strike pattern and, therefore, on running performance.
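Running economy is conventionally derived from gas-exchange data as steady-state oxygen uptake divided by running speed, giving an oxygen cost per unit distance. The sketch below uses a hypothetical VO₂ value and expresses the result per kilometre; note that normalization conventions, and hence the magnitude of reported RE values, differ between studies.

```python
# Conventional running-economy calculation: oxygen cost of transport.
# The VO2 value is hypothetical, not taken from the experiment above.

def running_economy(vo2_ml_kg_min, speed_m_s):
    """Oxygen cost of transport in ml O2 per kg per km."""
    speed_km_min = speed_m_s * 60.0 / 1000.0
    return vo2_ml_kg_min / speed_km_min

# A runner consuming 40 ml/kg/min of O2 at 3.33 m/s (~12 km/h)
re_ml_kg_km = running_economy(40.0, 3.33)
```

Per-kilometre values of around 200 ml O₂ kg⁻¹ km⁻¹ are typical in the literature for this formulation; a lower value at the same speed indicates a more economical runner.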

Keywords: exercise-induced fatigue, interval training, minimalist footwear, running economy

Procedia PDF Downloads 216
260 Control of Asthma in Children during the Containment Period following the COVID-19 Pandemic

Authors: Meryam Labyad, Karima Fakiri, Widad Lahmini, Ghizlane Draiss, Mohamed Bouskraoui, Nadia Ouzennou

Abstract:

Background: Asthma is the most common chronic disease in children, affecting nearly 235 million people worldwide (WHO). In Morocco, asthma is much more common in children than in adults; the prevalence rate in children between 13 and 14 years of age is 20% [1]. This pathology is marked by high morbidity and a significant impact on the quality of life and development of children [2]. It requires a rigorous management strategy in order to achieve clinical control and reduce any risk to the patient [3]. A search for aggravating factors is mandatory if a child has difficulty maintaining good asthma control. The objective of the present study is to describe asthma control during this confinement period in children aged 4 to 11 years followed in a pneumo-paediatric consultation. For children whose asthma is not controlled, a search for associations with promoting factors and adherence to treatment is also among the objectives of the study. Knowing the level of asthma control and its influencing factors is a therapeutic priority in order to reduce hospitalizations and emergency care use. Objective: To assess asthma control and determine the factors influencing control levels in children with asthma during confinement following the COVID-19 pandemic. Method: Prospective cross-sectional study by questionnaire and structured interview among 66 asthmatic children followed in the paediatric pneumology consultation at the CHU MED VI of Marrakech from 13/06/2020 to 13/07/2020; asthma control was assessed by the Childhood Asthma Control Test (C-ACT). Results: 66 children and their parents were included (mean age 7.5 years); asthma was associated with allergic rhinitis (13.5% of cases), conjunctivitis (9% of cases), eczema (12% of cases) and occurrence of infection (10.5% of cases).
The period of confinement was marked by a decrease in the number of asthma attacks, reflected in a decrease in the number of emergency room visits (7.5%) by these asthmatic children. Asthma was well controlled in 71% of the children, and this control was significantly associated with good adherence to treatment (p<0.001), absence of infection (p<0.001) and absence of conjunctivitis (p=0.002) or rhinitis (p<0.001). This improvement in asthma control during confinement can be explained by the measures taken in the Kingdom to prevent the spread of COVID-19 (school closures, reduction in industrial activity, fewer means of transport, etc.), leading to a decrease in children's exposure to triggers, which explains the decrease in the number of children having had an infection, allergic rhinitis or conjunctivitis during this period. In addition, close monitoring by parents resulted in better therapeutic adherence (42.4% were fully observant). Confinement was positively perceived by 68% of the parents; this perception is significantly associated with the level of asthma control (p<0.001). Conclusion: Maintaining good control can be achieved through improved therapeutic adherence and avoidance of triggers, both of which were achieved during the containment period following the COVID-19 pandemic.
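The C-ACT used above is a scored instrument: four child-rated items (0-3 each) and three caregiver-rated items (0-5 each) sum to a total of 0-27, with a commonly used cutoff of 20 or more indicating controlled asthma. A minimal sketch of that scoring, with invented illustrative item values:

```python
# Hedged sketch of Childhood Asthma Control Test (C-ACT) scoring.
# Item values below are illustrative placeholders, not questionnaire content.
def cact_total(child_items, parent_items):
    """Sum 4 child-rated items (0-3 each) and 3 parent-rated items (0-5 each)."""
    assert len(child_items) == 4 and len(parent_items) == 3
    return sum(child_items) + sum(parent_items)

def is_controlled(score: int) -> bool:
    """Commonly used cutoff: a score of 20 or more suggests controlled asthma."""
    return score >= 20

score = cact_total([3, 3, 2, 3], [5, 4, 4])  # 11 + 13 = 24
```

A score of 19 or less would flag the child for the study's search for promoting factors and adherence problems.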

Keywords: asthma, control, COVID-19, children

Procedia PDF Downloads 161
259 Developing Confidence of Visual Literacy through Using MIRO during Online Learning

Authors: Rachel S. E. Lim, Winnie L. C. Tan

Abstract:

Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study, therefore, investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb's Experiential Learning Cycle, the two lecturers developed students' engagement with visual literacy concepts through different activities that facilitated concrete experiences, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO using its infinite canvas, smart frameworks, a robust set of widgets (i.e., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students' perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) measured against their application of visual literacy in the creation of their motion design project.
While students' learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students' open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of these conditions set in place, the prerequisites then lead to the next level of personal action: self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and compensate for the potential pitfalls of student isolation, communication, and engagement during online learning. How lecturers could use MIRO to orientate students for learning in visual literacy and studio-based projects in future is also discussed.
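The regression structure described above (engagement predicting confidence, confidence predicting performance) can be sketched with ordinary least squares. The data below are synthetic and the effect sizes invented; only the variable names mirror the survey constructs:

```python
import numpy as np

# Hedged sketch of the reported chain E -> C -> PS using synthetic data
# for 30 students, matching the study's sample size but not its values.
rng = np.random.default_rng(0)
n = 30
E = rng.normal(3.5, 0.5, n)            # engagement ratings (invented scale)
C = 0.8 * E + rng.normal(0, 0.2, n)    # confidence driven by engagement
PS = 10 * C + rng.normal(0, 1, n)      # performance driven by confidence

def ols_slope(x, y):
    """Fit y = a + b*x by least squares and return the slope b."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

b_EC = ols_slope(E, C)     # slope of C on E, expected near the true 0.8
b_CPS = ols_slope(C, PS)   # slope of PS on C, expected near the true 10
```

In the mediation reading of the abstract, a near-zero direct LS→C slope alongside significant LS→E and E→C slopes would reproduce the reported pattern.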

Keywords: design education, graphic communication, online learning, visual literacy

Procedia PDF Downloads 91
258 Reproductive Behavior of the Red Sea Immigrant Lagocephalus sceleratus (Gmelin, 1789) from the Mediterranean Coast, Egypt

Authors: Mahmoud M. S. Farrag, Alaa A. K. Elhaweet, El-Sayed Kh. A. Akel, Mohsen A. Moustafa

Abstract:

The present work aimed to study the reproductive strategy of the common lessepsian puffer fish Lagocephalus sceleratus (Gmelin, 1789) from the Egyptian Mediterranean waters. It is a well-known migratory species that plays an important role in fisheries and in the ecology of the aquatic ecosystem. The specimens were collected monthly from the landing centers along the Egyptian Mediterranean coast during 2012. Seven maturity stages were recorded: (I) thread-like stage, (II) immature stage (virgin stage), (III) maturing stage (developing virgin and recovering spent), (IV) nearly ripe stage, (V) fully ripe stage, (VI) spawning stage, and (VII) spent stage. Regarding sex ratio, males outnumbered females, representing 52.44% of the total fish, with a sex ratio of 1:0.91. The fish length corresponding to 50% maturation was 38.5 cm for males and 41 cm for females; the corresponding ages at first maturity were 2.14 and 2.27 years for males and females, respectively. The ova diameter ranged from 0.02 mm to 0.85 mm, the mature ova ranged from 0.16 mm to 0.85 mm, and ova diameters showed a progressive increase from April towards September. A single peak of mature and ripe eggs in the ovaries was observed during the spawning period. The relationship between gutted weight and absolute fecundity indicated that fecundity increased as the fish grew in weight. The absolute fecundity ranged from 260,288 to 2,372,931 for fish weighing from 698 to 3285 g, with an average of 1,449,522 ± 720,975. The relative fecundity ranged from 373 to 722 for the same weight range, with an average of 776 ± 231. The spawning season of L. sceleratus was investigated from the gonado-somatic index, the monthly distribution of maturity stages along the year, and the sequence of ova diameters for mature stages; it exhibited a relatively prolonged spawning season, extending from April for both sexes and ending in August for males and in September for females.
The fish releases its ripe ova in one batch during the spawning season. Histologically, the ovarian cycle of L. sceleratus was classified into six stages and the testicular cycle into five stages. Histological characteristics of the gonads of L. sceleratus during the year of study confirmed the previous results on the distribution of maturity stages, gonado-somatic index and ova diameter, indicating that this fish species has a prolonged spawning season from April to September. This species is considered a total (uni-batch) spawner with group-synchronous development, as the gonad contained one to two developmental stages at a time.

Keywords: Lagocephalus sceleratus, reproductive biology, oogenesis, histology

Procedia PDF Downloads 277
257 The Concept of Dharma under Hindu, Buddhist and Sikh Religions: A Comparative Analysis

Authors: Venkateswarlu Kappara

Abstract:

The term ‘Dharma’ is complex and ubiquitous and has no equivalent word in English; initially it was applied to the Aryans. In the Rig Veda, it appears in a number of places with different meanings. The word Dharma comes from the root ‘dhr’ (Dhri-Dharayatetiiti Dharmaha). The principles of Dharma are all-pervading. The closest synonym for Dharma in English is ‘righteousness.’ In the holy book the Mahabharata, it is mentioned that Dharma destroys those who destroy it and protects those who protect it. Also, Dharma might be shadowed now and then by evil forces, but in the end Dharma always triumphs; this line embodies the eternal victory of good over evil. In the Mahabharata, Lord Krishna says Dharma upholds both this-worldly and other-worldly affairs. The Rig Veda says, ‘O Indra! Lead us on the path of Rta, on the right path over all evils.’ For Buddhists, Dharma most often means the body of teachings expounded by the Buddha. The Dharma is one of the Three Jewels (Tri Ratnas) of Buddhism under which the followers take refuge: the ‘Buddha,’ meaning the mind’s perfection or enlightenment; the Dharma, meaning the teachings and the methods of the Buddha; and the Sangha, meaning those awakened people who provide guidance and support to followers. The Buddha denies a separate permanent ‘I’ and accepts suffering (Dukkha), change/impermanence (Anicca) and not-self (Anatta). Dharma in the Buddhist scriptures has a variety of meanings, including ‘phenomenon,’ ‘nature’ and ‘characteristic.’ For Sikhs, the word ‘Dharma’ means the ‘path of righteousness.’ The Sikh scriptures attempt to expound Dharma; the main holy scripture of the Sikh religion is the Guru Granth Sahib. The faithful are fully bound to do whatever Dharma asks of them. Such is the name of the Immaculate Lord; only one who has faith comes to know such a state of mind. The righteous judge of Dharma, by the Hukam of God’s Command, sits and administers true justice.
From Dharma flow wealth and pleasure. The study indicates that in the Sikh religion, Dharma is the path of righteousness; in Buddhism, the mind’s perfection of enlightenment; and in Hinduism, non-violence, purity, truth, control of the senses, and not coveting the property of others. The comparative study implies that all three religions dealt with Dharma for the welfare of mankind. The methodology adopted is theoretical, analytical and comparative. The present study indicates how far Indian philosophical systems have influenced present circumstances and how far the present system is incompatible with ancient philosophical systems. A tentative generalization would be that the present system, which is mostly influenced by British governance, may not totally reflect the ancient norms. However, the mental make-up continues to be influenced by ancient philosophical systems.

Keywords: Dharma, Dukkha (suffering), Rakshati, righteousness

Procedia PDF Downloads 145
256 Knowledge Management in the Tourism Industry in Project Management Paradigm

Authors: Olga A. Burukina

Abstract:

Tourism is a complex socio-economic phenomenon, partly regulated by national tourism industries. The sustainable development of tourism in a region, country or tourist destination depends on a number of factors (political, economic, social, cultural, legal and technological), the understanding and correct interpretation of which is invariably anthropocentric. It follows that, for a tour operating company to function successfully, it is necessary to ensure its sustainable development. Sustainable tourism is defined as tourism that fully considers its current and future economic, social and environmental impacts, taking into account the needs of the industry, the environment and the host communities. For the business enterprise, sustainable development means adopting business strategies and activities that meet the needs of the enterprise and its stakeholders today while protecting, sustaining and enhancing the human and natural resources that will be needed in the future. In addition to a systemic approach to the analysis of tourist destinations, each tourism project can and should be considered as a system characterized by a very high degree of variability, since each particular case of its implementation differs from the previous and subsequent ones, sometimes radically. At the same time, it is important to understand that this variability is predominantly anthropogenic in nature (force majeure situations are considered separately below). Knowledge management is the process of creating, sharing, using and managing the knowledge and information of an organization; it refers to a multidisciplinary approach to achieving organisational objectives by making the best use of knowledge. Knowledge management is seen as a key systems component that allows an organisation to obtain, store, transfer and maintain information and knowledge from a long-term perspective.
The study aims, firstly, to identify (1) the dynamic changes in the Italian travel industry in the last 5 years before the COVID-19 pandemic, (2) the impact of the pandemic on the industry, which can be considered a force majeure circumstance, and (3) the efforts required to restore it; and secondly, to determine how project management tools can help improve knowledge management in tour operating companies so as to maintain their sustainability, diminish potential risks and restore their pre-pandemic performance level as soon as possible. The pilot research is based upon a systems approach and employed a pilot survey, semi-structured interviews, prior research analysis (literature review), comparative analysis, cross-case analysis, and modelling. The results obtained are very encouraging: PM tools can improve knowledge management in tour operating companies and secure the more sustainable development of the Italian tourism industry based on proper knowledge management and risk management.

Keywords: knowledge management, project management, sustainable development, tourism industry

Procedia PDF Downloads 134
255 (Re)Processing of Nd-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches

Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm

Abstract:

Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach the end-of-life NdFeB magnet and to electrodeposit Nd-Fe using a 1-ethyl-3-methylimidazolium dicyanamide ([EMIM][DCA]) ionic-liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations: the binding energies of metallic Nd (Nd⁰) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step towards efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept of recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new magnet production.
In the frame of physical reprocessing, we have successfully synthesized new magnets from hydrogen-processed (HDDR) recycled stock using the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity of Hc = 1060 kA/m, which was boosted to 1160 kA/m after the post-PECS thermal treatment. The Br and Hc were improved further: increased applied pressures of 100-150 MPa resulted in Br = 1.01 T. We showed that, with fine-tuning of the PECS and post-annealing, it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e. atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.

Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing

Procedia PDF Downloads 135
254 A Case Study on Quantitatively and Qualitatively Increasing Student Output by Using Available Word Processing Applications to Teach Reluctant Elementary School-Age Writers

Authors: Vivienne Cameron

Abstract:

Background: Between 2010 and 2017, teachers in a suburban public school district struggled to get students to consistently produce adequate writing samples as measured by the Pennsylvania state writing rubric for focus, content, organization, style, and conventions. A common thread in all of the data was the need to develop stamina in the student writers. Method: All of the teachers used the traditional writing process model (prewrite, draft, revise, edit, final copy) during writing instruction. One teacher taught the writing process using word processing and incentivized it with publication instead of the traditional pencil/paper/grading method. Students did not have instruction in typing/keyboarding. The teacher submitted the resulting student work to real-life contests, magazines, and publishers. Results: Students in the test group increased both the quantity and quality of their writing over a seven-month period as measured by the Pennsylvania state writing rubric. Reluctant writers, as well as students with autism spectrum disorder, benefited from this approach. This outcome was repeated consistently over a five-year period. Interpretation: Removing the burden of pencil and paper allowed students to participate in the writing process more fully. Writing with pencil and paper is physically tiring, and students are discouraged when they submit a draft and are instructed to use the Add, Remove, Move, Substitute (ARMS) method to revise their papers: each successive version becomes shorter. Allowing students to type their papers frees them to make changes quickly and easily. The result is longer writing pieces in shorter time frames, allowing the teacher to spend more time working on individual needs. With this additional time, the teacher can concentrate on teaching focus, content, organization, style, conventions, and audience. The teacher also has a larger body of work from which to draw for whole-group instruction, such as developing effective leads.
The teacher submitted the resulting student work to contests, magazines, and publishers. Although time-consuming, the submission process was an invaluable lesson about audience and tone. All students in the test sample had work accepted for publication and became highly motivated to succeed when their work was accepted. This motivation applied to special needs students, regular education students, and gifted students.

Keywords: elementary-age students, reluctant writers, teaching strategies, writing process

Procedia PDF Downloads 145
253 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation

Authors: Honey Bryant, Elvina Chu

Abstract:

Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual health care cost of $50 billion. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, despite its success in clinical research on stroke rehabilitation. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. Our novel hypotheses are: (a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; (b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; (c) beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained stroke rehabilitation treatment as normal. The study was split into two groups: the first receiving passive music listening (PML) and the second neurologic music therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes for 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety & Depression Rating Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure measurements and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels.
As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses: a decrease in sleepiness and anxiety is seen in the first cohort of PML. Feedback interviews have indicated that most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, we will seek larger external funding, which will allow for greater investigation of the use of NMT in stroke rehabilitation. As NMT is not covered under the Ontario Health Insurance Plan (OHIP), there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation.

Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience

Procedia PDF Downloads 81
252 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings

Authors: Abdulwakeel B. Raji

Abstract:

This paper introduces a concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain an understanding of the current learning practices in Nigerian universities and how these practices compare to the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; subsequently, software was developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and attempt to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time.
This technology also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules, replicating learning interactions. The technology covers familiar classroom scenarios such as answering users' questions, asking 'do you understand?' at regular intervals and responding to the answers, and taking advanced user queries to give feedback at other points. The software uses deep learning techniques to learn user interactions and patterns and subsequently enhance the user learning experience. System testing was carried out by undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
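The rule-matching idea described above can be sketched in a few lines. The patterns and replies below are invented for illustration; the actual system uses Dialogflow intents and definite clause grammars rather than plain regular expressions:

```python
import re

# Hedged sketch: match user inputs against defined rules to produce
# classroom-style responses, in the spirit of the teaching agent above.
RULES = [
    (re.compile(r"\bwhat is (a |an |the )?(?P<term>[\w ]+)\??", re.I),
     lambda m: f"Good question! Let's define {m.group('term').strip()}."),
    (re.compile(r"\b(i (don'?t|do not) understand|confused)\b", re.I),
     lambda m: "No problem, let's go over that again more slowly."),
    (re.compile(r"\b(yes|i understand|got it)\b", re.I),
     lambda m: "Great, let's move on to the next topic."),
]

def reply(user_input: str) -> str:
    """Return the first matching rule's response, or a fallback prompt."""
    for pattern, respond in RULES:
        match = pattern.search(user_input)
        if match:
            return respond(match)
    return "Could you rephrase that?"
```

In the real system, the fallback branch would hand the query to Dialogflow for intent classification instead of returning a canned prompt.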

Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence

Procedia PDF Downloads 112
251 Effect of Ageing of Laser-Treated Surfaces on Corrosion Resistance of Fusion-bonded Al Joints

Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig

Abstract:

Aluminium has been used in a wide range of industrial applications due to its numerous advantages, including excellent specific strength, thermal conductivity, corrosion resistance, workability and recyclability. The automotive industry is increasingly adopting multi-material designs, including aluminium, in structures and components to improve the mechanical usability and performance of individual components. A common method for assembling dissimilar materials is mechanical joining, but it requires multiple manufacturing steps, affects the mechanical properties of the base material and increases weight due to additional metal parts. Fusion bonding is being used in more and more industries as a way of avoiding these drawbacks. In fusion bonding, surface pre-treatment of the base material is essential to ensure the long-life durability of the joint. Laser surface treatment of aluminium has been shown to improve the durability of the joint by forming a passive oxide film and roughening the substrate surface. In fusion bonding, the polymer bonds directly to the metal instead of via an adhesive, but the sensitivity to interfacial contamination is higher due to the chemical activity and molecular size of the polymer. Laser-treated surfaces are expected to absorb impurities from the storage atmosphere over time, but the effect of such changes in the treated surface on the durability of fusion-bonded joints has not yet been fully investigated. In this paper, the effect of the ageing of laser-treated surfaces of aluminium alloys on the corrosion resistance of fusion-bonded joints is therefore investigated. AlMg3 sheet of 1.5 mm thickness was cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at a wavelength of 1060 nm, a maximum power of 70 W and a repetition rate of 55 kHz.
The aluminium surfaces were then stored in air for various periods of time, and their corrosion resistance was assessed by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). For the aluminium joints, induction heating was employed as the fusion bonding method, and single-lap shear specimens were prepared. The corrosion resistance of the joints was assessed by measuring the lap shear strength before and after neutral salt spray exposure. Cross-sectional observations by scanning electron microscopy (SEM) were also carried out to investigate changes in the microstructure of the bonded interface. Finally, the corrosion resistance of the surface and of the joint were compared, and the differences in the mechanisms of corrosion resistance enhancement between the two were discussed.

Keywords: laser surface treatment, pre-treatment, bonding, corrosion, durability, interface, automotive, aluminium alloys, joint, fusion bonding

Procedia PDF Downloads 50