Search results for: shared memory parallel programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4042

172 Courtyard Evolution in Contemporary Sustainable Living

Authors: Yiorgos Hadjichristou

Abstract:

The paper focuses on the strategic development deriving from the evolution of the traditional courtyard spatial organization towards a new, contemporary sustainable way of living. New sustainable approaches that engulf social issues, the notion of place, and the understanding of weather architecture, blended with bioclimatic behaviour, will be seen through a series of experimental case studies on the island of Cyprus, inspired by and originating from its traditional wisdom, ranging from the small scale of living to urban interventions. Weather and nature will be seen as co-authors of architecture alongside architects, as claimed by Jonathan Hill in his Weather Architecture discourse. Furthermore, following Pallasmaa's understanding, the building will be seen not as an end in itself, and the elements of an architectural experience as having a verb form rather than being nouns. This will further enhance the notion of merging the subject-human and the object-building, as discussed by Julio Bermudez. This will eventually enable a discussion of the building constructed according to the specifics of place and inhabitants, shaped by its physical and human topography, as referred to by Adam Sharr in relation to Heidegger's thinking. The specificities of the divided island, and dealing with sites in the vicinity of the dividing Green Line, will further trigger explorations of regeneration issues and social sustainability, offering unprecedented opportunities for innovative sustainable ways of living. The above premises lead us to develop innovative strategies for a profound sustainability, both technical and social, which fruitfully yields innovative living built environments responding to ever-changing environmental and social needs. As a starting point, a case study in Kaimakli, Nicosia (the refurbishment and extension of a traditional house) already engulfs the traditional/vernacular wisdom of bioclimatic architecture. It aims at capturing not only its direct and quite obvious bioclimatic features, but rather at evolving them by adjusting the whole house to a contemporary living environment. To achieve this, evolutions of traditional architectural elements and spatial conditions are integrated in a way that does not merely respond to certain weather conditions but integrates and blends the weather into the built environment. A series of innovations aiming at maximum flexibility is proposed. The house can be transformed into a winter enclosure, while for most of the year it turns into a 'camping' living environment. Parallel to experimental interventions in existing traditional units, we proceed to examine the implementation of the same methodology in designing new living units and complexes. Malleable courtyard organizations that attempt to blend traditional wisdom with contemporary needs for living, and the weather and nature with the built environment, will be tested in both horizontal and vertical developments. A new social identity of people, directly involved with and interacting with the weather and climatic conditions, will be seen as the result of balancing social with technological sustainability, and the immaterial with the material aspects of the built environment.

Keywords: building as a verb, contemporary living, traditional bioclimatic wisdom, weather architecture

Procedia PDF Downloads 410
171 Bio-Functionalized Silk Nanofibers for Peripheral Nerve Regeneration

Authors: Kayla Belanger, Pascale Vigneron, Guy Schlatter, Bernard Devauchelle, Christophe Egles

Abstract:

A severe injury to a peripheral nerve leads to its degeneration and the loss of sensory and motor function. To this day, there is still no more effective alternative to the autograft, which has long been considered the gold standard for nerve repair. In order to overcome the numerous drawbacks of the autograft, tissue-engineered biomaterials may be effective alternatives. Silk fibroin is a favorable biomaterial due to its many advantageous properties, such as its biocompatibility, its biodegradability, and its robust mechanical properties. In this study, bio-mimicking multi-channeled nerve guidance conduits made of aligned nanofibers, achieved by electrospinning, were functionalized with signaling biomolecules and tested in vitro and in vivo for nerve regeneration support. Silk fibroin (SF) extracted directly from silkworm cocoons was put into solution at a concentration of 10 wt%. Poly(ethylene oxide) (PEO) was added to the resulting SF solution to increase solution viscosity, and the following three electrospinning solutions were made: (1) SF/PEO solution, (2) SF/PEO solution with nerve growth factor and ciliary neurotrophic factor, and (3) SF/PEO solution with nerve growth factor and neurotrophin-3. Each of these solutions was electrospun into a multi-layer architecture to obtain mechanically optimized aligned nanofibrous mats. For in vitro studies, the aligned fibers were treated to induce β-sheet formation and thoroughly rinsed to eliminate the presence of PEO. Each material was tested using rat embryo neuron cultures to evaluate neurite extension and the interaction with bio-functionalized or non-functionalized aligned fibers. For in vivo studies, the mats were rolled into 5 mm long conduits with multiple micro-channels, then treated and thoroughly rinsed. The conduits were each subsequently implanted to bridge a severed rat sciatic nerve. The effectiveness of nerve repair over a period of 8 months was extensively evaluated by cross-referencing electrophysiological, histological, and movement analysis results to comprehensively assess the progression of nerve repair. In vitro results show a more favorable interaction between growing neurons and bio-functionalized silk fibers compared to pure silk fibers. Neurites were also seen to extend unidirectionally along the alignment of the nanofibers, which confirms the guidance effect of the electrospun material. The in vivo study produced positive results for the regeneration of the sciatic nerve over the length of the study, showing contrasts between the bio-functionalized and non-functionalized materials, along with comparisons to the experimental control. Nerve regeneration was evaluated not only by histological analysis but also by electrophysiological assessment and motion analysis of two separate natural movements. By studying these three components in parallel, the most comprehensive evaluation of nerve repair for the conduit designs can be made, which can therefore more accurately depict their overall effectiveness. This work was supported by La Région Picardie and FEDER.

Keywords: electrospinning, nerve guidance conduit, peripheral nerve regeneration, silk fibroin

Procedia PDF Downloads 231
170 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used increasingly in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, the stage in which they have been least applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation reduces the estimation time to about 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
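The per-taxel decomposition described above is what makes the algorithm amenable to FPGA parallelization: each taxel's force vector can be estimated independently. As a rough illustration only (the matrix shapes, calibration matrix K, and function below are invented assumptions, not the authors' model), a least-squares reconstruction of one force vector per taxel might look like:

```python
import numpy as np

# Hypothetical sketch: recover a force vector f = (fx, fy, fz) at each taxel
# from m normal-stress samples s, assuming a calibration matrix K with K @ f ~ s.
def reconstruct_forces(stress, K):
    """stress: (n_taxels, m) stress samples; K: (m, 3) assumed transfer matrix."""
    forces = np.empty((stress.shape[0], 3))
    for i, s in enumerate(stress):                 # independent per taxel, hence
        f, *_ = np.linalg.lstsq(K, s, rcond=None)  # trivially parallel on an FPGA
        forces[i] = f
    return forces

K = np.random.rand(9, 3)           # invented 9-sample-to-force calibration
stress = np.random.rand(100, 9)    # invented readings of a 10x10 array
print(reconstruct_forces(stress, K).shape)  # -> (100, 3)
```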

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 185
169 Influence of Protein Malnutrition and Different Stressful Conditions on Aluminum-Induced Neurotoxicity in Rats: Focus on the Possible Protection Using Epigallocatechin-3-Gallate

Authors: Azza A. Ali, Asmaa Abdelaty, Mona G. Khalil, Mona M. Kamal, Karema Abu-Elfotuh

Abstract:

Background: Aluminium (Al) is a neurotoxic environmental pollutant that can cause diseases such as dementia, Alzheimer's disease, and Parkinsonism. It is widely used in antacid drugs as well as in food additives and toothpaste. Stress has been linked to cognitive impairment; social isolation (SI) may exacerbate memory deficits, while protein malnutrition (PM) increases oxidative damage in the cortex, hippocampus, and cerebellum. The risk of cognitive decline may be lowered by maintaining social connections. Epigallocatechin-3-gallate (EGCG) is the most abundant catechin in green tea and has antioxidant, anti-inflammatory, and anti-atherogenic effects, as well as health-promoting effects in the CNS. Objective: To study the influence of different stressful conditions, namely social isolation and electric shock (EC), and of an inadequate nutritional condition (PM), on neurotoxicity induced by Al in rats, as well as to investigate the possible protective effect of EGCG under these stressful and PM conditions. Methods: Rats were divided into two major groups: a protected group, treated daily with EGCG (10 mg/kg, IP) during the three weeks of the experiment, and a non-treated group. The protected and non-protected groups each included five subgroups as follows: one normal control receiving saline, and four Al toxicity groups injected daily for three weeks with AlCl3 (70 mg/kg, IP). One of these served as the Al toxicity model; two groups were subjected to different stresses, either isolation as a mild stressful condition (SI-associated Al toxicity model) or electric shock as a high stressful condition (EC-associated Al toxicity model). The last group was maintained on a 10% casein diet (PM-associated Al toxicity model). Isolated rats were housed individually in cages covered with black plastic. Biochemical changes in the brain, including acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β), and oxidative parameters (MDA, SOD, TAC), were estimated for all groups. Histopathological changes in different brain regions were also evaluated. Results: Rats exposed to Al for three weeks showed brain neurotoxicity and neuronal degeneration. Both mild (SI) and high (EC) stressful conditions, as well as inadequate nutrition (PM), enhanced Al-induced neurotoxicity and brain neuronal degeneration; the enhancement induced by stress, especially under the high-stress condition (EC), was more pronounced than that of the inadequate nutritional condition (PM), as indicated by the significant increase in Aβ, ACHE, MDA, TNF-α, and IL-1β together with the significant decrease in SOD, TAC, and BDNF. On the other hand, EGCG showed more pronounced protection against the hazards of Al under both stressful conditions (SI and EC) than under PM. The protective effects of EGCG were indicated by the significant decrease in Aβ, ACHE, MDA, TNF-α, and IL-1β together with the increase in SOD, TAC, and BDNF, and were confirmed by brain histopathological examinations. Conclusion: Neurotoxicity and brain neuronal degeneration induced by Al were more severe with stress than with PM. EGCG can protect against Al-induced brain neuronal degeneration in all conditions. Consequently, administration of EGCG together with socialization as well as adequate protein nutrition is advised, especially on excessive Al exposure, to avoid severe neuronal toxicity.

Keywords: environmental pollution, aluminum, social isolation, protein malnutrition, neuronal degeneration, epigallocatechin-3-gallate, rats

Procedia PDF Downloads 377
168 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment for AIDS. However, it has poor aqueous solubility and wettability, presenting problems in gastrointestinal absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64, and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SDs with the polymers were obtained containing 10, 20, 50, and 80% of drug (w/w) by the solvent method. The best formulation among the SDs was selected by the in vitro dissolution test. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarization microscopy, Scanning Electron Microscopy (SEM), and infrared absorption spectrophotometry (IR). From the dissolution profiles of EFV, PM, and SD, the values of the area under the curve (AUC) were calculated. The data showed that the AUC of all PMs is greater than that of isolated EFV; this result derives from the hydrophilic properties of the polymers, which favor a decrease in surface tension between the drug and the dissolution medium and thus ensure an increase in the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (with only 10% drug). As the amount of drug increased, these results either decreased or were statistically similar. The AUC values of the SDs using the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of the drug melting process, suggesting that EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence in the PMs, but this was not observed in the films of the SDs, again suggesting the conversion of the drug from the crystalline to the amorphous state. In the electron micrographs of all PMs, independently of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in the electron micrographs of the SDs in the different ratios investigated, we observed particles with irregular size and morphology and an extensive change in the appearance of the polymer, making it impossible to differentiate the two components. The IR spectra of the PMs correspond to the overlapping of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all the SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying drug crystallization, reaching higher levels of supersaturation.
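For reference, the dissolution AUC used to rank the formulations is typically obtained by numerical integration of the percent-dissolved versus time profile; a minimal sketch with invented sample points (not the study's data):

```python
import numpy as np

# Illustrative only: AUC of a dissolution profile by the trapezoidal rule.
t = np.array([0, 5, 10, 15, 30, 45, 60])           # sampling times (min), invented
dissolved = np.array([0, 12, 25, 38, 61, 74, 82])  # % drug dissolved, invented

auc = np.trapz(dissolved, t)                       # area under the curve (%·min)
print(f"AUC = {auc:.0f} %·min")
```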

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 619
167 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC sources. According to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlling the burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the difficulty of interpreting the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements made on filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling. In the former, accuracy, and in the latter, sensitivity limit the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies, the absorption and scattering Ångström exponents (AAE and SAE), are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
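For readers unfamiliar with the notation, the Ångström exponents quantify the power-law wavelength dependency α(λ) ∝ λ^(−AAE); a minimal sketch with invented two-wavelength values (the instrument operates at more wavelengths):

```python
import numpy as np

# Illustrative only: absorption Angstrom exponent (AAE) from two wavelengths,
# assuming alpha(lambda) ~ lambda**(-AAE). The SAE is obtained the same way
# from scattering coefficients. All numbers below are invented.
lam = np.array([450e-9, 660e-9])     # wavelengths (m)
alpha = np.array([5.2e-5, 3.1e-5])   # absorption coefficients (1/m)

aae = -np.log(alpha[1] / alpha[0]) / np.log(lam[1] / lam[0])
print(f"AAE = {aae:.2f}")
```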

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 349
166 Reduction of Specific Energy Consumption in Microfiltration of Bacillus velezensis Broth by Air Sparging and Turbulence Promoter

Authors: Jovana Grahovac, Ivana Pajcin, Natasa Lukic, Jelena Dodic, Aleksandar Jokic

Abstract:

To obtain purified biomass to be used in plant pathogen biocontrol or as a soil biofertilizer, it is necessary to eliminate residual broth components at the end of the fermentation process. The main drawback of membrane separation techniques is permeate flux decline due to membrane fouling. Fouling mitigation measures increase the pressure drop along the membrane channel due to the increased resistance to flow of the feed suspension, thus increasing the hydraulic power consumption. At the same time, these measures lead to an increase in the permeate flux due to the reduced resistance of the filtration cake on the membrane surface. Because of these opposing effects, the energy efficiency of fouling mitigation measures is limited, and the justification of their application is provided by information on the reduction of specific energy consumption compared to a case without any such measures. In this study, the influence of a static mixer (Kenics) and air sparging (two-phase flow) on the reduction of specific energy consumption (ER) was investigated. Cultivation of Bacillus velezensis was carried out in a 3-L bioreactor (Biostat® Aplus) with a 2 L working volume, two parallel Rushton turbines, and no internal baffles, at 28 °C and 150 rpm with an aeration rate of 0.75 vvm for 96 h. The experiments were carried out in a conventional cross-flow microfiltration unit. During the experiments, permeate and retentate were recycled back to the broth vessel to simulate a continuous process. The single-channel ceramic membrane (TAMI Deutschland) used had a nominal pore size of 200 nm, a length of 250 mm, and an inner/external diameter of 6/10 mm. The useful membrane channel surface was 4.33×10⁻³ m². Air sparging was provided by pressurized air connected by a three-way valve to the feed tube through a simple T-connector without a diffusor. The different approaches to flux improvement are compared in terms of energy consumption. The reduction of specific energy consumption compared to microfiltration without fouling mitigation is around 49% and 63% for the two-phase flow and the static mixer, respectively. In the case of a combination of these two fouling mitigation methods, ER is 60%, i.e., slightly lower than with the turbulence promoter alone. The reason for this result is that the flux increase is driven mostly by the presence of the Kenics static mixer, while sparging increases the energy used during microfiltration. When the combined method is compared with the turbulence promoter alone, ER is negative (-7%), which can be explained by the increased power consumption for the air flow with only a moderate contribution to the flux increase. Another confirmation of this fact is obtained by comparing the energy consumption of the combined method with that of the two-phase flow alone: in this case, the energy reduction (ER) is 22%, which demonstrates that the turbulence promoter is more efficient than two-phase flow. The antimicrobial activity of Bacillus velezensis biomass against the phytopathogenic isolate Xanthomonas campestris was preserved under the different fouling reduction methods.
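The ER figures quoted above follow directly from comparing the specific energy of each run with the baseline run without fouling mitigation; a minimal sketch (units and numbers below are invented, chosen only to reproduce one of the reported percentages):

```python
# Illustrative only: percentage reduction of specific energy consumption (ER)
# relative to microfiltration without any fouling-mitigation measure.
def energy_reduction(e_baseline, e_method):
    """Specific energy in consistent units, e.g. kWh per m^3 of permeate."""
    return 100.0 * (e_baseline - e_method) / e_baseline

print(energy_reduction(1.00, 0.37))  # -> 63.0, cf. the static-mixer result
```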

Keywords: Bacillus velezensis, microfiltration, static mixer, two-phase flow

Procedia PDF Downloads 109
165 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, and tribal played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crime against, and victimization of, both individuals and groups. Every day a massive number of crimes are committed, and these frequent crimes have made the lives of common citizens restless. Crime is one of the major threats to society and civilization and can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia and Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient provided a 7-year archive of crime statistics, which was aggregated daily to produce a univariate dataset; in addition, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could help the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
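As a minimal sketch of the kind of GRU forecaster described (layer sizes, window length, and training data below are invented, not the study's configuration), using the Keras API:

```python
import numpy as np
import tensorflow as tf

WINDOW, HORIZON = 28, 7   # assumed: 4 weeks of history -> 7-day forecast

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),  # daily crime counts
    tf.keras.layers.GRU(32),                   # gated recurrent unit layer
    tf.keras.layers.Dense(HORIZON),            # one output per forecast day
])
model.compile(optimizer="adam", loss="mse")

# invented data standing in for sliding windows over the daily series
x = np.random.rand(100, WINDOW, 1)
y = np.random.rand(100, HORIZON)
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:1], verbose=0).shape)   # -> (1, 7)
```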

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 154
164 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world and the country's main hot sand desert, occupies nearly 385,000 km², about 9% of the area of the country, and harbours a flora of 682 species (63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that in the Sahara desert, which is very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in the climate change scenario, and the digitization process usually aims to improve access and preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, serving as an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being among the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling us to expand and speed up digitization. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality control elements, and finally place specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics (BI) and to present the effort of database development for the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives towards an effective biodiversity information facility. This will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
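The abstract does not specify which taxonomic diversity indices were computed; purely as a generic example, the widely used Shannon-Wiener index can be derived from specimen counts per taxon:

```python
import numpy as np

# Illustrative only: Shannon-Wiener diversity H' = -sum(p_i * ln p_i),
# shown here as one common diversity index; counts are invented.
def shannon_index(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

print(shannon_index([120, 80, 40, 10]))  # specimens per family (invented)
```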

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 213
163 Pedagogical Opportunities of Physics Education Technology Interactive Simulations for Secondary Science Education in Bangladesh

Authors: Mohosina Jabin Toma, Gerald Tembrevilla, Marina Milner-Bolotin

Abstract:

Science education in Bangladesh is losing its appeal at an alarming rate due to the lack of science laboratory equipment, excessive teacher-student ratios, and outdated teaching strategies. Research-based educational technologies aim to address some of the problems faced by teachers who, like many Bangladeshi teachers, have limited access to laboratory resources. The Physics Education Technology (PhET) research team has been developing science and mathematics interactive simulations to help students develop deeper conceptual understanding. Still, PhET simulations are rarely used in Bangladesh. The purpose of this study is to explore Bangladeshi teachers' challenges in learning to implement PhET-enhanced pedagogies and to examine teachers' views on PhET's pedagogical opportunities in secondary science education. Since it is a new technology for Bangladesh, seven workshops on PhET were conducted in Dhaka for 129 in-service and pre-service teachers in the winter of 2023, prior to data collection. This study followed an explanatory mixed-methods approach that included pre- and post-workshop surveys and five semi-structured interviews. Teachers participated in the workshops voluntarily and shared their experiences at the end. Teachers' challenges were also identified from workshop discussions and observations. The interviews took place three to four weeks after the workshops and shed light on teachers' experiences of using PhET in actual classroom settings. The results suggest that teachers had difficulty handling the new technology; hence, they recommended preparing a booklet and Bengali YouTube videos on PhET to assist them in overcoming their struggles. Teachers also faced challenges in using any inquiry-based learning approach due to the content-loaded curriculum and exam-oriented education system, as well as limited experience with inquiry-based education. The short duration of classes makes it difficult for them to design PhET activities. Furthermore, considering limited access to computers and the internet in schools, teachers think PhET simulations can bring positive changes if used in homework activities. Teachers also think they lack the pedagogical skills and sound content knowledge needed to take full advantage of PhET. They highly appreciated the workshops and proposed that the government design teacher training modules on how to incorporate PhET simulations. Despite all the challenges, teachers believe PhET can enhance student learning, ensure student engagement, and increase student interest in STEM education. Considering the lack of science laboratory equipment, teachers recognized the potential of PhET as a supplement to hands-on activities for secondary science education in Bangladesh. They believed that if PhET develops more curriculum-relevant simulations, it will bring revolutionary changes to how Bangladeshi students learn science. All the participating teachers in this study came from two organizations, and all the workshops took place in urban areas; therefore, the findings cannot be generalized to all secondary science teachers. A nationwide study is required to include teachers from diverse backgrounds. A further study could shed light on how building a professional learning community can lessen teachers' challenges in incorporating PhET-enhanced pedagogy in their teaching.

Keywords: educational technology, inquiry-based learning, PhET interactive simulations, PhET-enhanced pedagogies, science education, science laboratory equipment, teacher professional development

Procedia PDF Downloads 74
162 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre of Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches

Authors: Maria Ledinskaya

Abstract:

This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-sized conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns: an unresearched collection, unknown condition and materials, and an unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the points of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach, also known as Waterfall project management, which has significant drawbacks in today's museum environment due to its over-reliance on prediction-based planning and its low tolerance of change. In the last 20 years, alternative Agile, Lean, and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, a focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and an emphasis on reducing procedural, financial, and logistical waste. Overall, the author's findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: project management, conservation, waterfall, agile, lean, hybrid

Procedia PDF Downloads 88
161 Segmentation along the Strike-slip Fault System of the Chotts Belt, Southern Tunisia

Authors: Abdelkader Soumaya, Aymen Arfaoui, Noureddine Ben Ayed, Ali Kadri

Abstract:

The Chotts belt represents the southernmost folded structure in the Tunisian Atlas domain. It is dominated by inherited deep extensional E-W trending fault zones, which were reactivated as strike-slip faults during the Cenozoic compression. By examining geological maps at different scales and based on fieldwork data, we propose new structural interpretations for the geometries and fault kinematics in the Chotts chain. A set of ENE-WSW right-lateral en echelon folds, with curved shapes and steeply inclined southern limbs, is visible in the map view of this belt. These asymmetric tight anticlines are affected by E-W trending fault segments linked by local bends and stepovers. The kinematic indicators revealed along one of these E-W striated faults (Tafferna segment), such as breccias and gently inclined slickenlines (N094, 80N, 15°W pitch angles), show direct evidence of dextral strike-slip movement. The stress tensors calculated from the corresponding fault slip data reveal an overall strike-slip tectonic regime with a reverse component and a NW-trending sub-horizontal σ1 axis ranging between N130 and N150. From west to east, we distinguish several types of structures along the segmented dextral fault system of the Chotts Range. The NE-SW striking fold-thrust belt (~25 km long) between two continuously linked E-W fault segments (NW of Tozeur town) is suggested to be a local restraining bend. The central part of the Chotts chain is occupied by the ENE-striking Ksar Asker anticlines (Taferna, Torrich, and Sif Laham), which are truncated by a set of E-W strike-slip fault segments. Further east, the fault segments of Hachichina and Sif Laham connect across the NW-verging asymmetric fold-thrust system of Bir Oum Ali, which can be interpreted as a left-stepping contractional bend (~20 km long). The eastern part of the Chotts belt corresponds to an array of subparallel E-W oriented fault segments (i.e., Beidha, Bouloufa, El Haidoudi-Zemlet El Beidha) with similar lengths (around 10 km). Each of these individually separated segments is associated with curved ENE-trending en echelon right-stepping anticlines. These folds are affected by a set of conjugate R and R′ shear-type faults indicating dextral strike-slip motion. In addition, the relay zones between these E-W overstepping fault segments define local releasing stepovers dominated by NW-SE subsidiary faults. Finally, the Chotts chain provides well-exposed examples of strike-slip tectonics along E-W distributed fault segments. Each fault zone shows a typical strike-slip architecture, including parallel fault segments connected via local stepovers or bends. Our new structural interpretations for this region reveal a great influence of the E-W deep fault segments on regional tectonic deformation and the stress field during the Cenozoic shortening.

Keywords: chotts belt, tunisian atlas, strike-slip fault, stepovers, fault segments

Procedia PDF Downloads 59
160 Glucose Measurement in Response to Environmental and Physiological Challenges: Towards a Non-Invasive Approach to Study Stress in Fishes

Authors: Tomas Makaras, Julija Razumienė, Vidutė Gurevičienė, Gintarė Sauliutė, Milda Stankevičiūtė

Abstract:

Stress responses represent an animal's natural reactions to various challenging conditions and can be used as a welfare indicator. Despite the wide use of glucose measurements in stress evaluation, there are some inconsistencies in its acceptance as a stress marker, especially in comparison with non-invasive cortisol measurements in fish under challenging stress. To meet this challenge and to test the reliability and applicability of glucose measurement in practice, different environmental/anthropogenic exposure scenarios were simulated in this study to provoke chemical-induced stress in fish (14-day exposure to landfill leachate), followed by a 14-day stress recovery period; under the cumulative effect of the leachate, fish were subsequently exposed to pathogenic oomycetes (Saprolegnia parasitica) to represent a possible infection. S. parasitica occurs in freshwater habitats worldwide and is partly responsible for the decline of natural freshwater fish populations. Brown trout (Salmo trutta fario) and sea trout (Salmo trutta trutta) juveniles were chosen because a large amount of literature on physiological stress responses in these species is available. Glucose content was analysed by applying invasive and non-invasive glucose measurement procedures in different test media, namely fish blood, gill tissue, and fish-holding water. The results indicated that the quantity of glucose released into the holding water of stressed fish increased considerably (approx. 3.5- to 8-fold) and remained substantially higher (approx. 2- to 4-fold) than the control level throughout the stress recovery period, suggesting that the fish did not recover from the chemical-induced stress. The circulating levels of glucose in blood and gills decreased over time in fish exposed to the different stressors. However, the gill glucose level in exposed fish showed a decrease similar to the control levels measured at the same time points, which was found to be insignificant. The data analysis showed that concentrations of β-D-glucose measured in the gills of fish treated with S. parasitica differed significantly from the control recovery group but did not differ from the leachate recovery group, showing that the presence of S. parasitica in the water had no additive effect. In contrast, a positive correlation between blood and gill glucose was determined. Parallel trends in blood and water glucose changes suggest that water glucose measurement has great potential for predicting stress. This study demonstrated that measuring β-D-glucose in fish-holding water is not itself stressful, as it involves no handling or manipulation of the organism, and it has critical technical advantages over current (invasive) methods, which mainly use blood samples or specific tissues. The quantification of glucose could be essential for stress physiology and aquaculture studies interested in the assessment or long-term monitoring of fish health.

Keywords: brown trout, landfill leachate, sea trout, pathogenic oomycetes, β-D-glucose

Procedia PDF Downloads 162
159 Widely Diversified Macroeconomies in the Super-Long Run Cast Doubt on the Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. The finding gives a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. On this view, the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. This view implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies. (1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments of the capital stock: in quantity and in relative cost. The first is obvious and assumed by many business cycle theorists. For the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were synthesized with the hysteresis of real wages, interest rates, and investment. In particular, a sequence of simulation runs over a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of the capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
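To make the hysteresis mechanism concrete, consider a deliberately tiny toy model (invented here, far simpler than the Takahashi-Okada agent-based model): a past slump ratchets the real wage down, and since investment in this sketch rises with the real wage, the economy settles at a permanently lower capital stock despite identical fundamentals:

```python
# Toy sketch of path dependence: two runs with the same parameters end at
# different quasi-steady states depending only on their demand-shock history.
def run(slump_periods, T=3000):
    k, w = 2.0, 1.0                                 # capital stock, real wage
    for t in range(T):
        demand = 0.2 if t in slump_periods else 1.0
        u = max(0.0, 1.0 - demand * k / 2.0)        # unemployment proxy
        w = max(0.5, w - 0.05 * u)                  # wages ratchet down in slumps
        k = 0.95 * k + 0.05 * (2.0 * w)             # desired capital rises with w
    return round(k, 2), round(w, 2)

print(run(set()))                  # no slump   -> (2.0, 1.0)
print(run(set(range(500, 600))))   # past slump -> permanently lower k and w
```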

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 198
158 An Analysis of Economic Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstock to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies on model feedstock; rather, it reports economic drivers and technical challenges, which helps in developing a road map for expanding biohydrogen economy deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate the technical drawbacks of large-scale hydrogen production. Those learnings have been applied, in this study, to the biohydrogen process. Using data collected in a comprehensive literature review, a base case was considered as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment), the impact of these parameters on the commercialization risk matrix and on class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A couple of emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield. To properly report the impact of each parameter, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck to successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. Here, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but has two or more products (biohydrogen and biomethane, or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

Procedia PDF Downloads 91
157 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that define the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and to organize them into a single vector, which is then used as the training vector of a global SVM classifier. Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
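A condensed sketch of this pipeline is given below. It is a simplification, not the paper's implementation: the band count and edges are reduced, the CSP stage is replaced by a per-channel log-power stand-in, and the Bayesian meta-classifier is reduced to stacking the LDA decision scores before the global SVM:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def subband_features(epochs, fs=250, bands=((4, 8), (8, 12), (12, 30))):
    """epochs: (n_trials, n_channels, n_samples) -> one feature matrix per band."""
    spec = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2        # FFT frequency decomposition
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    feats = []
    for lo, hi in bands:                                    # select each sub-band
        sel = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(spec[..., sel].mean(axis=-1)))  # stand-in for CSP+variance
    return feats

def train_sbcsp(epochs, y):
    scores = []
    for f in subband_features(epochs):                      # one LDA per sub-band
        lda = LinearDiscriminantAnalysis().fit(f, y)
        scores.append(lda.decision_function(f))             # per-band score
    return SVC().fit(np.column_stack(scores), y)            # global SVM on score vector

# invented data: 60 trials, 22 channels, 2 s at 250 Hz, binary MI labels
X = np.random.randn(60, 22, 500); y = np.tile([0, 1], 30)
clf = train_sbcsp(X, y)
```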

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 117
156 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important issues in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on structural dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. Several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of the phenomenon by providing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a two-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst) at the Federal University of Rio de Janeiro. Test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center-of-mass acceleration through successive numerical simulations, the model parameters were estimated. In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model, through comparison of the measured and simulated structural responses at mid-span. It was found that the pedestrian model adequately reproduced the ground reaction force and the center-of-mass acceleration for normal and slow walking speeds, being less accurate for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. The interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, presumably due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduces the center-of-mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework for estimating the dynamic response of structures under loads induced by walking pedestrians.
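
As a rough illustration of the fitting procedure described above, the sketch below matches a one-dimensional spring-damper leg (a deliberate simplification of the full bipedal model) to a measured vertical ground reaction force trace. The mass, initial values, and parameter bounds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

M, G, L0 = 75.0, 9.81, 1.0   # body mass (kg), gravity, rest leg length (m)

def leg_force(z, zdot, k, c):
    """Vertical leg force: spring-damper, active only in compression."""
    return max(0.0, k * (L0 - z) - c * zdot)

def simulate_grf(k, c, t_eval):
    """Integrate vertical CoM dynamics; return the ground reaction force."""
    def rhs(t, y):
        z, zdot = y
        return [zdot, (leg_force(z, zdot, k, c) - M * G) / M]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.98, 0.0],
                    t_eval=t_eval, max_step=1e-3)
    return np.array([leg_force(z, zd, k, c) for z, zd in sol.y.T])

def fit_parameters(t_meas, grf_meas):
    """Estimate leg stiffness and damping from a force-plate trace by
    successive simulations, as in the fitting procedure described above."""
    residual = lambda p: simulate_grf(p[0], p[1], t_meas) - grf_meas
    return least_squares(residual, x0=[20e3, 300.0],
                         bounds=([1e3, 0.0], [80e3, 2e3])).x
```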

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 122
155 Developing Pan-University Collaborative Initiatives in Support of Diversity and Inclusive Campuses

Authors: David Philpott, Karen Kennedy

Abstract:

In recognition of an increasingly diverse student population, a Teaching and Learning Framework was developed at Memorial University of Newfoundland. This framework emphasizes work that is engaging, supportive, inclusive, responsive, committed to discovery, and outcomes-oriented for both educators and learners. The goal of the Teaching and Learning Framework was to develop a number of initiatives that build on existing knowledge, proven programs, and existing supports in order to respond to the specific needs of identified groups of diverse learners: 1) academically vulnerable first-year students; 2) students with individual learning needs associated with disorders and/or mental health issues; 3) international students and those from non-western cultures. This session provides an overview of this process. The strategies employed to develop these initiatives were drawn primarily from research on student success and retention (literature review); information on pre-existing programs (environmental scan); an analysis of in-house data on students at our institution; and consultations with key informants at all of Memorial's campuses. The first initiative that emerged from this research was a pilot project proposal for a first-year success program in support of the first-year experience of academically vulnerable students. This program offers a university experience enhanced by smaller classes, supplemental instruction, learning communities, and advising sessions. The second initiative that arose under the mandate of the Teaching and Learning Framework was a collaborative effort between two institutions, Memorial University and the College of the North Atlantic. Both institutions participated in a shared conversation to examine programs and services that support an accessible and inclusive environment for students with disorders and/or mental health issues. A report was prepared based on these conversations and an extensive review of research and programs across the country. Efforts are now being made to explore possible initiatives that address culturally diverse and non-traditional learners. While an expanding literature has emerged on diversity in higher education, the process of developing institutional initiatives is usually excluded from such discussions, while the focus remains on effective practice. The proposals that were developed constitute a coordination and strengthening of existing services and programs, a weaving of supports to engage a diverse body of students in a sense of community. This presentation will act as a guide through the process of developing projects addressing learner diversity and will engage attendees in a discussion of institutional practices that have been implemented in support of overcoming challenges, as well as provide feedback on institutional and student outcomes. The focus of this session will be on effective practice, and it will be of particular interest to university administrators, educational developers, and educators wishing to implement similar initiatives on their campuses; possible adaptations for practice will be addressed. A presentation of findings from this research will be followed by an open discussion where the sharing of research, initiatives, and best practices for the enhancement of teaching and learning is welcomed. There is much insight and understanding to be gained through the sharing of ideas and collaborative practice as we move forward to further develop the program and prepare other initiatives in support of diversity and inclusion.

Keywords: diversity, inclusive campuses, student success, teaching and learning frameworks

Procedia PDF Downloads 281
154 Optical Vortex in Asymmetric Arcs of Rotating Intensity

Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko

Abstract:

Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, and optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data streams which can be multiplexed at the transmitter and demultiplexed at the receiver. Their combinations have also been studied in the literature: 1) axial or perpendicular superposition of multiple optical vortices, or 2) superposition with other laser beam types such as Bessel or Airy beams. Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for the one generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed to the liquid crystal display of a spatial light modulator, to optically process the incident beam so that the diffracted intensity pattern can be investigated in the far field. For the parallel reunion of two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, is changed into an asymmetric intensity pattern: a number of circle arcs. The two diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, it is observed that this asymmetric intensity pattern rotates around its centre: anticlockwise in the +1 diffraction order and clockwise in the -1 diffraction order. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For the perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or of the radial parameter and displayed successively. Overall, the proposed method is explored in terms of its constructive parameters, for the possibilities offered by combining different types of beams for use in robust optical communications.
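
A rough numerical sketch of how such holograms can be generated and combined is given below; the grid size, tilts, parameter values, and binarisation choice are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

N = 1024
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
r, theta = np.hypot(x, y), np.arctan2(y, x)

def cgh(phase_object, k_tilt):
    """Interference of a tilted plane wave with a wave carrying the
    phase object, binarised for display on a spatial light modulator."""
    pattern = np.cos(k_tilt * x - phase_object)   # interference fringes
    return (pattern > 0).astype(float)            # binary hologram

m, r0 = 3, 0.5         # topological charge and radial factor (assumed)
km, kr = 200.0, 20.0   # plane-wave tilts for the two CGHs (assumed)

cgh_helical = cgh(m * theta, km)            # vortex-generating hologram
cgh_conical = cgh(2 * np.pi * r / r0, kr)   # axicon-like hologram

# 'Parallel reunion' of the two gratings; a high km/kr ratio is the
# regime in which the asymmetric circle-arc pattern is reported above.
combined = np.clip(cgh_helical + cgh_conical, 0, 1)
```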

Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator

Procedia PDF Downloads 300
153 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data are well codified. However, much of the data present in Electronic Health Records (EHR) are unlikely to prove suitable for classic ML approaches. Furthermore, as such data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad-hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, the data under investigation are incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program with respect to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
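
A minimal sketch of the compared recurrent architectures is given below, written here in Keras. The vocabulary size, layer dimensions, and the exact CNN/RNN arrangement are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf

VOCAB, MAXLEN, EMB = 20000, 400, 128   # assumed sizes for illustration

def build_model(recurrent='lstm', with_cnn=True):
    """Classifier over tokenised free-text triage notes: an optional
    Conv1D front end followed by an LSTM or GRU, ending in a
    frequent-attender probability."""
    inp = tf.keras.Input(shape=(MAXLEN,))
    h = tf.keras.layers.Embedding(VOCAB, EMB)(inp)
    if with_cnn:   # the CNN extracts local n-gram features first
        h = tf.keras.layers.Conv1D(64, 5, activation='relu')(h)
        h = tf.keras.layers.MaxPooling1D(4)(h)
    rnn = {'lstm': tf.keras.layers.LSTM,
           'gru': tf.keras.layers.GRU}[recurrent]
    h = rnn(64)(h)   # the RNN summarises the note as a sequence
    out = tf.keras.layers.Dense(1, activation='sigmoid')(h)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# The sigmoid output could then be fused with the normalized demographic
# features in a downstream SVM or regression-tree classifier, as above.
```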

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 115
152 Sustainable Pavements with Reflective and Photoluminescent Properties

Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar

Abstract:

An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than that of conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel, and following the latter approach, an appropriate type of treatment has also been developed for bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate (porphyry, granite and limestone) was varied, as well as the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands. Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night during summer, the season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the designed applications has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, which varies according to the proportions used. The effect not only of the strontium aluminate and microsphere content but also of the colour of the base on which the paint is applied has been observed: the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.

Keywords: heat island, luminescent paints, reflective pavement, temperature reduction

Procedia PDF Downloads 4
151 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits

Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia

Abstract:

The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI) and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome, and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 females) mTBI victims and another comprising 26 (13 females) healthy control (HC) volunteers, were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched according to sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes and gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were applied. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fitted to the DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. For each peri-lesional DTI streamline bundle, the null hypothesis was formulated as the statement that there was no neurological or cognitive deficit associated with between-scan differences in the mean fractional anisotropy (FA) of the DTI streamlines within that bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: in HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 out of 26 volunteers. In those volunteers where significant differences were found, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, and cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk of cognitive sequelae.
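
The per-bundle testing logic can be sketched as follows. The paper does not specify which FWER correction was used, so a Bonferroni correction and a paired t-test are assumed here for illustration, and the array shapes are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_rel

def test_bundles(fa_scan1, fa_scan2, alpha=0.05):
    """fa_scan1/fa_scan2: (bundles x streamlines) FA samples for the same
    peri-lesional bundles at the two scan time points. Returns, per
    bundle, the t statistic, p value, and FWER-corrected decision."""
    n_bundles = fa_scan1.shape[0]
    results = []
    for b in range(n_bundles):
        t, p = ttest_rel(fa_scan1[b], fa_scan2[b])
        # Bonferroni: control the family-wise error rate across bundles.
        results.append((b, t, p, p < alpha / n_bundles))
    return results
```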

Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics

Procedia PDF Downloads 162
150 'iTheory': Mobile Way to Music Fundamentals

Authors: Marina Karaseva

Abstract:

The beginning of our century marked a new digital epoch in education. Over the last decade, the newest stage of this process has been initiated by touch-screen mobile devices and the program applications written for them. The touch capabilities for learning the fundamentals of music are of special importance for music majors. The phenomenon of touch, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students to learn music theory by listening to its sounding elements by ear. Nowadays we can distinguish several levels of such mobile applications: from basic ones devoted to elementary music training, such as interval and chord recognition, to more advanced applications which deal with the perception of modes other than major and minor, ethnic timbres, and complicated rhythms. The main purpose of the proposed paper is to disclose the main tendencies in this process and to demonstrate the most innovative features of music theory applications on the iOS and Android systems, as the most commonly used. Methodological recommendations on how to use this digital material musicologically will be given for professional music education at different levels. These recommendations are based on the author's more than ten years of 'iTheory' teaching experience. In this paper, we try to classify all types of 'iTheory' mobile applications logically into several groups, according to their methodological goals. The general concepts given below will be demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes with audio-visual links. The link pairs are as follows: sound and musical notation (which may be used like flashcards for studying words and letters), sound and key, and sound and string (mainly the guitar's). The second large group consists of programs-tests containing a game component. As a rule, they are based on exercises in ear identification and reconstruction by voice: sounds and intervals by their sounding, harmonic and melodic; music modes; rhythmic patterns; chords; and selected instrumental timbres. Some programs are aimed at establishing aural links between concepts of music theory and their musical embodiments. There are also programs focused on developing operative musical memory (with repetition of sounding phrases and their transposition to a new pitch), as well as on perfect-pitch training. In addition, a number of programs for improvisation skills have been developed. An absolute-pitch system of solmisation is the common basis for mobile programs; however, it is also possible to find programs focused on the relative-pitch system of solfège. In the App Store and Google Play online stores, there are also many free programs simulating musical instruments: piano, guitar, celesta, violin, and organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a route to the effective study of music theory.
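
As a toy illustration of the interval-identification drills common to the second group of applications above (the names, ranges, and interval set are hypothetical; a real app would also synthesise the two tones):

```python
import random

INTERVALS = {0: 'unison', 1: 'minor 2nd', 2: 'major 2nd', 3: 'minor 3rd',
             4: 'major 3rd', 5: 'perfect 4th', 7: 'perfect 5th',
             12: 'octave'}

def frequency(semitones_from_a4):
    """Equal-temperament pitch: f = 440 Hz * 2^(n/12)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

def interval_question():
    """Pick a random root and interval; the app plays both tones and the
    student names the interval."""
    root = random.randint(-12, 12)
    size = random.choice(list(INTERVALS))
    tones = (frequency(root), frequency(root + size))
    return tones, INTERVALS[size]

tones, answer = interval_question()
print(f'play {tones[0]:.1f} Hz then {tones[1]:.1f} Hz -> answer: {answer}')
```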

Keywords: ear training, innovation in music education, music theory, mobile devices

Procedia PDF Downloads 195
149 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step, because the main component of a tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because the various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this kind of feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the positions and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as the weighted makespan, or processing times affected by aging or learning effects.
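
A compact sketch of the PSO loop is given below, on a deliberately simplified instance: the real problem is a flexible job shop with machine allocation, whereas this toy reduces it to sequencing compounds on a single mixer with sequence-dependent setup times, and a random-key encoding stands in for the paper's particle encoding scheme. The processing times and setup matrix are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_JOBS = 8
PROC = rng.uniform(5, 15, size=N_JOBS)            # time per compound
SETUP = rng.uniform(0, 3, size=(N_JOBS, N_JOBS))  # SDST between compounds

def makespan(seq):
    """Toy objective: total processing plus setups along the sequence."""
    setups = sum(SETUP[a, b] for a, b in zip(seq[:-1], seq[1:]))
    return PROC[seq].sum() + setups

def pso_schedule(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.random((n_particles, N_JOBS))   # random-key encoding
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([makespan(np.argsort(p)) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, N_JOBS))
        # Velocity update: inertia + cognitive + social attraction terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([makespan(np.argsort(p)) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return np.argsort(gbest), pbest_val.min()

order, best = pso_schedule()
print('best compound order:', order, 'makespan:', round(best, 2))
```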

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 252
148 Geodynamic Evolution of the Tunisian Dorsal Backland (Central Mediterranean) from the Cenozoic to Present

Authors: Aymen Arfaoui, Abdelkader Soumaya, Noureddine Ben Ayed

Abstract:

The study region is located in the Tunisian Dorsal Backland (Central Mediterranean), the easternmost part of the Saharan Atlas mountain range, trending southwest-northeast. Based on our fieldwork, seismic tomography images, seismicity, and previous studies, we propose an interpretation of the relationship between the surface deformation and fault kinematics in the study area and the internal dynamic processes acting in the Central Mediterranean from the Cenozoic to the present. The subduction and the dynamics of internal forces beneath the complicated Maghrebides mobile belt have an impact on the Tertiary and Quaternary tectonic regimes in the Pelagian and Atlassic foreland that forms part of our study region. The left-lateral reactivation of the major "Tunisian N-S Axis fault" and the development of a compressional relay between the Hammamet-Korbous and Messella-Ressas faults are possibly a result of tectonic stresses due to the slab roll-back following the Africa/Eurasia convergence. After the slab segmentation and its eastward migration (5-4 Ma) and the formation of the Strait of Sicily "rift zone" further east, a transtensional tectonic regime was established in this area. According to seismic tomography images, the STEP fault of the "North-South Axis" at Hammamet-Korbous coincides with the western edge of the "slab windows" of the Sicily Channel and the eastern boundary of the positive anomalies attributed to the residual slab of Tunisia. On the other hand, significant E-W Plio-Quaternary tectonic activity can be observed along the eastern portion of this STEP fault system in the Grombalia zone, as a result of recent vertical lithospheric motion in response to the lateral slab migration eastward toward the Sicily Channel. According to SKS fast splitting directions, the upper-mantle flow pattern beneath the Tunisian Dorsal is parallel to the NE-SW to E-W orientation of Shmin (the minimum horizontal stress) identified in the study area, similar to the Plio-Quaternary extensional orientation in the Central Mediterranean. Additionally, the removal of the lithosphere and the subsequent uplift of the sub-lithospheric mantle beneath the topographic highs of the Dorsal and its surroundings may be the cause of the dominant extensional to transtensional Quaternary regime. The occurrence of strike-slip and extensional seismic events in the Pelagian block reveals that the regional transtensional tectonic regime persists today. Finally, we believe that the geodynamic history of the study area since the Cenozoic is primarily influenced by the pre-existing weak zones, the African slab detachment, and the upper-mantle flow pattern in the Central Mediterranean.

Keywords: Tunisia, lithospheric discontinuity (STEP fault), geodynamic evolution, Tunisian dorsal backland, strike-slip fault, seismic tomography, seismicity, central Mediterranean

Procedia PDF Downloads 58
147 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review

Authors: Zihan Geng, L. Quentin Dixon

Abstract:

It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have also reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children's executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the children group. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles, with a total of 3367 participants, were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data were collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in western countries, mainly in the U.S., Canada, and the UK. A variety of languages such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese were involved. In relation to cognitive outcomes, the studies examined children's overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention. The results indicate that, though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the coordination of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition, as measured in conflict tasks, but not better response inhibition, as measured by delay tasks. Further, bilingual children showed better inhibitory control with bivalent displays, which resemble the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children's attention systems associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend highly on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently.

Keywords: attention, bilingual advantage, children, executive function

Procedia PDF Downloads 173
146 Body of Dialectics: Exploring a Dynamic-Adaptational Model of Physical Self-Integrity and the Pursuit of Happiness in a Hostile World

Authors: Noam Markovitz

Abstract:

People with physical disabilities constitute a very large and, at the same time, diverse group of the general population, as the term physical disabilities is extensive and covers a wide range of conditions. Individuals with physical disabilities are therefore often faced with a new, threatening and stressful reality, possibly leading to a multi-crisis in their lives due to the great changes they experience at the somatic, socio-economic, occupational and psychological levels. The current study seeks to advance understanding of the complex adaptation to physical disabilities by expanding the dynamic-adaptational model of the pursuit of happiness in a hostile world with a new conception of physical self-integrity. Physical self-integrity incorporates an objective dimension, namely physical self-functioning (PSF), and a subjective dimension, namely physical self-concept (PSC). Both dimensions constitute an experience of wholeness in the individual's identification with her or his physical body. The model guiding this work is dialectical in nature and depicts two systems in the individual's sense of happiness: subjective well-being (SWB) and meaning in life (MIL). Both systems serve as self-adaptive agents that moderate the complementary system of the hostile-world scenario (HWS), which integrates one's perceived threats to one's integrity. Thus, in situations of increased HWS, the moderation may take the form of joint activity, in which SWB and MIL are both amplified, or of compensation, in which one system produces a stronger effect while the other produces a weaker one. The current study investigated PSC in relation to SWB and MIL through pleasantness and meanings that are physically or metaphorically grounded in one's body. In parallel, PSC also relates to HWS by activating representations of inappropriateness, deformation and vulnerability. In view of the possibly dialectical positions of opposing and complementary forces within the current model, the present field study explores PSC in an independent, cross-sectional design addressing the model's variables in a focal group of people with physical disabilities. This study delineated the participation of PSC in the adaptational functions of SWB and MIL vis-à-vis HWS-related life adversities. The findings showed that PSC could fully complement the main variables of the pursuit of happiness in a hostile world model. The assumed dialectics, in the form of a stronger relationship between SWB and MIL in the face of physical disabilities, was not supported. However, it was found that when HWS increased, PSC and MIL were strongly linked, whereas PSC and SWB were weakly linked, highlighting the compensatory role of MIL. From a conceptual viewpoint, the current investigation may clarify the role of PSC as an adaptational agent of the individual's positive health in complementary senses of bodily wholeness. Methodologically, the advantage of the current investigation is the application of an integrative, model-based approach within a specially focused design with particular relevance to PSC. Moreover, from an applicative viewpoint, the current investigation may suggest how an innovative model can be translated into therapeutic interventions used by clinicians, counselors and practitioners in improving wellness and psychological well-being, particularly among people with physical disabilities.

Keywords: older adults, physical disabilities, physical self-concept, pursuit of happiness in a hostile-world

Procedia PDF Downloads 136
145 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a powerful tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, fracture locations and types must be identified accurately. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misidentification. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach needs to employ appropriate radiographic image processing techniques and object detection algorithms which effectively localize and accurately classify all types of fractures with high precision and in a timely manner. To overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intra-articular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the ResNet18/VGG16 classification ensemble achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring the timely and accurate identification of bone fractures for the best treatment outcomes.
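
A sketch of the two-architecture classification ensemble is given below. The paper confirms ResNet18, VGG16, and the 12 fracture classes; the averaging fusion scheme, untrained weights, and input size here are assumptions for illustration.

```python
import torch
import torchvision.models as models

N_CLASSES = 12   # avulsion, comminuted, ..., spiral (from the paper)

def build_ensemble():
    """ResNet18 and VGG16 backbones with 12-way classification heads."""
    resnet = models.resnet18(weights=None)
    resnet.fc = torch.nn.Linear(resnet.fc.in_features, N_CLASSES)
    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = torch.nn.Linear(vgg.classifier[6].in_features,
                                        N_CLASSES)
    return resnet, vgg

@torch.no_grad()
def ensemble_predict(resnet, vgg, x):
    """Average the two softmax outputs; the maximum probability doubles
    as the confidence score reported with the prediction."""
    probs = (torch.softmax(resnet(x), dim=1) +
             torch.softmax(vgg(x), dim=1)) / 2
    conf, cls = probs.max(dim=1)
    return cls, conf

resnet, vgg = build_ensemble()
resnet.eval(); vgg.eval()
cls, conf = ensemble_predict(resnet, vgg, torch.randn(1, 3, 224, 224))
```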

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 73
144 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be used for any dynamic system that also exhibits diffusion and ionisation behaviour. With the additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model and on the state-space representation of the other dynamics. The proposed model may also be used for electrodynamic effects, including ionising effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time. For further purposes, the spatial temperature distribution may be used as well. With this system, uncertainties, unknown initial states and disturbances may be determined. This provides a more precise estimation of the system states and, additionally, an estimation of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged particle flux in space or in a radiation environment. In order to be able to react to these processes, it must be determined within a short time whether ionising radiation is present and what dose has been received. All available sensors should be used to observe the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively, or more accurately. From this information, the type of ionisation and its direct effect on the system can be determined, so that protective measures, up to and including shutdown, can be activated. The results show the possibility of performing higher-quality and faster simulations, independent of the kind of system, including space systems in radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used. The quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it shows a quantising character and obeys a law similar to the Klein-Gordon equation. These kinds of PDEs (Partial Differential Equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, a transfer function for the B- and E-fields was calculated analytically. With known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution, where g₁ is the direct function and g₂ is a recursive function. The analytical results are accurate enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current 'waveforms' has been developed. The advantage of the proposed calculation of diffusion is its real-time capability, which is not really achievable with the FEM programs available today. In the further course of this research, it makes sense to apply these methods and to investigate them thoroughly.
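
The convolution step with a direct response g₁ and a recursive response g₂ can be sketched as follows; the sampling time and the example responses are assumptions for illustration only.

```python
import numpy as np

Ts = 1e-6                            # sampling time (s), illustrative
k = np.arange(256)
g1 = np.exp(-k * Ts / 20e-6)         # assumed direct response g1(k*Ts)
g2 = 0.3 * np.exp(-k * Ts / 50e-6)   # assumed recursive response g2(k*Ts)

def response(u):
    """y[k] = (g1 * u)[k] + (g2 * y)[k]: direct convolution of the input
    plus a recursive convolution over the output's own past samples."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        y[n] = np.dot(g1[:n + 1], u[n::-1])        # direct (FIR) part
        if n > 0:                                  # recursive (IIR) part
            y[n] += np.dot(g2[1:n + 1], y[n - 1::-1])
    return y

u = np.zeros(256); u[0] = 1.0   # unit impulse as a test 'waveform'
y = response(u)
```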

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 120
143 Yu Kwang-Chung vs. Yu Kwang-Chung: Untranslatability as the Touchstone of a Poet

Authors: Min-Hua Wu

Abstract:

The untranslatability of an established poet's tour de force is thoroughly explored by Matthew Arnold (1822-1888). In his On Translating Homer (1861), Arnold lists the four most striking poetic qualities of Homer, namely his rapidity, plainness and directness of style and diction, plainness and directness of ideas, and nobleness. He concludes that such celebrated English translators as Cowper, Pope, Chapman, and Mr. Newman are all doomed, due to their respective failures in rendering the totality of the four Homeric poetic qualities. Why does poetic translation always prove to be such a mission impossible for the translator? According to Arnold, it is because there constantly exists a mist interposed between the translator's own literary self-obsession and the objective artistic qualities that reside in the work of the original author. Foregrounding such a seemingly empowering yet actually detrimental poetic mist, he explains why the aforementioned translators fail in their attempts to bring the Homeric charm to the British reader. Drawing on Arnold's analytical study of Homeric translation, this research attempts to bring Yu Kwang-chung the poet face to face with Yu Kwang-chung the translator, with an aim not so much to find a similar mist, as revealed by Arnold, between his Chinese poetry and his English translation, as to probe into a latent and veiled literary and lingual mist interposed between Chinese and English, if not between Chinese and English literatures. The major work studied and analyzed for this study is Yu's own Chinese poetry and his own English translation collected in The Night Watchman: Yu Kwang-chung 1958-2004. The research argues that the following critical elements that characterize Yu's poetics are to a certain extent 'transformed,' if not 'lost,' in his English translation: a. the Chinese pictographic and ideographic unit terms which so unfailingly characterize the poet's incredible creativity, allowing him to habitually and conveniently coin concrete textual images or word-scapes almost at will; b. the subtle wordplay and punning which appear at a reasonable frequency; c. the parallel contrastive repetitive syntactic structure within a single poetic line; d. the ambiguous and highly associative diction in the adjective and noun categories; e. the literary allusion that harks back to the old times of Chinese literature; f. the alliteration that adds rhythm and smoothness to the lines; g. the rhyming patterns that bring impressive sonority and a lingering echo to the ears of the reader; h. the grandeur-imposing and sublimity-arousing word-scaping which hinges on the employment of verbs; i. the meandering cultural heritage that embraces such elements as Chinese medicine and kung fu; and j. other features of the like. Once we appeal to the Arnoldian tribunal and resort to the strict standards of this Victorian cultural and literary critic, who insists 'to see the object as in itself it really is,' we may serve as a potential judge in the tug of war between Yu Kwang-chung the poet and Yu Kwang-chung the translator, a tug of war that will not merely broaden our understanding of Chinese poetics but deepen our apprehension of Chinese-English translatology.

Keywords: Yu Kwang-chung, The Night Watchman, poetry translation, Chinese-English translation, translation studies, Matthew Arnold

Procedia PDF Downloads 382