Search results for: online processing service
165 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterise the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been proposed for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCI systems that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organise them into a single vector, which is used as the training vector of a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than that of the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared with the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to MI-based BCI systems implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions. Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
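A minimal Python/NumPy sketch of the pipeline described above is given below, assuming binary motor imagery labels. The sub-band boundaries, the real/imaginary stacking of the FFT coefficients, the number of CSP filter pairs and the classifier settings are illustrative assumptions, and the Bayesian score calibration step is reduced to raw LDA decision values, so this is a sketch of the idea rather than the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def fft_subbands(X, fs, band=(0, 40), n_sub=33):
    """Decompose epochs X (trials, channels, samples) into FFT sub-band matrices."""
    Z = np.fft.rfft(X, axis=-1)                      # complex spectrum per channel
    freqs = np.fft.rfftfreq(X.shape[-1], d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    Z = Z[..., keep]                                 # compact coefficient matrix
    edges = np.linspace(0, Z.shape[-1], n_sub + 1, dtype=int)
    # represent each complex sub-band by stacked real/imaginary parts (an assumption)
    return [np.concatenate([Z[..., a:b].real, Z[..., a:b].imag], axis=-1)
            for a, b in zip(edges[:-1], edges[1:])]

def csp_features(Xb, y, n_pairs=2):
    """Log-variance CSP features for one sub-band (binary labels assumed)."""
    covs = []
    for c in np.unique(y):
        C = np.mean([x @ x.T / np.trace(x @ x.T) for x in Xb[y == c]], axis=0)
        covs.append(C)
    evals, evecs = eigh(covs[0], covs[0] + covs[1])  # generalised eigen-problem
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    W = evecs[:, idx].T                              # most discriminative spatial filters
    feats = np.array([np.log(np.var(W @ x, axis=1)) for x in Xb])
    return W, feats

def train_sbcsp(X, y, fs):
    """Train per-sub-band CSP+LDA scorers and a global SVM meta-classifier."""
    models, scores = [], []
    for Xb in fft_subbands(X, fs):
        W, feats = csp_features(Xb, y)
        lda = LinearDiscriminantAnalysis().fit(feats, y)
        models.append((W, lda))
        scores.append(lda.decision_function(feats))  # one score per trial
    meta = SVC(kernel="linear").fit(np.column_stack(scores), y)
    return models, meta
```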
Procedia PDF Downloads 128
164 Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions of laser beams are required in many fields: optical communications, material processing, microscopy, optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of the communication channels, employing spatial modulation as an additional degree of freedom, besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data which can be multiplexed at the transmitter and demultiplexed at the receiver. Combinations of these have also been studied in the literature: 1) axial or perpendicular superposition of multiple optical vortices, or 2) superposition with other laser beam types such as Bessel and Airy beams. Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using the helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for the one generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam and investigate the diffracted intensity pattern in the far field. For parallel reunion of two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, is changed into an asymmetric intensity pattern: a number of circle arcs. Both diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, it is observed that this asymmetric intensity pattern rotates around its centre: in the +1 diffraction order the rotation is anticlockwise and in the -1 diffraction order the rotation is clockwise. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or of the radial parameter and displayed successively. Overall, the proposed method is explored in terms of its constructive parameters and the possibilities offered by combining different types of beams, which can be used in robust optical communications. Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
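The construction of the two computer generated holograms and their 'reunion' can be illustrated with a short NumPy sketch. The grid size, the values of m, r0, km and kr, the use of 2πr/r0 for the conical phase and the normalised sum standing in for the parallel reunion are all assumptions made for illustration, not the parameters used by the authors.

```python
import numpy as np

def make_cgh(phase_obj, kx, x):
    """Interference of a tilted plane wave with a wave carrying the object phase."""
    return np.abs(np.exp(1j * kx * x) + np.exp(1j * phase_obj)) ** 2

# illustrative grid and parameters (not the authors' values)
N = 1024
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
r, phi = np.hypot(x, y), np.arctan2(y, x)

m, r0 = 3, 0.25          # topological charge, radial factor (assumed)
km, kr = 200.0, 40.0     # tilts of the two reference plane waves (assumed)

cgh_helical = make_cgh(m * phi, km, x)                 # vortex hologram
cgh_conical = make_cgh(2 * np.pi * r / r0, kr, x)      # axicon-like hologram

# "reunion" of the two CGHs, approximated here by a normalised sum
combined = cgh_helical + cgh_conical
combined /= combined.max()                             # mask addressed to the SLM
```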
Procedia PDF Downloads 309
163 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization
Authors: Sarah M. Schoellhammer, Stephen Gibb
Abstract:
Bureaucracy, in particular its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today’s knowledge economy. Centralization and formalization have consistently been found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on the systematic planning, controlling and monitoring of the development of new or improved solutions for customers, even innovation management as a discipline is, to a significant extent, based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity and initiative, however, can be more hindered than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage-gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the greatest influence is allocated by the collective to the persons perceived as the most competent on a certain issue. Theoretical arguments that the non-hierarchical concept supports innovation better than bureaucracy have been supported by empirical research. These prior studies focus either on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that is, innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies in order to identify alternative solutions practiced under the conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated. In a pragmatic mixed-methods approach, media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences to the approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. With employees still hesitant to take responsibility and make decisions, hierarchical influence remains prominent. Described as open-minded and collaborative, culture and leadership were found largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, responsible practitioners are recommended not to limit changes to reducing the central elements of the bureaucratic organization, formalization and centralization. The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility on the part of employees as well as common innovation-supportive norms and values. These make it possible to integrate diverse competencies, opinions, and activities and, thus, to guide innovation efforts. Keywords: bureaucracy, heterarchy, innovation management, values
Procedia PDF Downloads 187
162 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies
Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho
Abstract:
The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimizations in railway vehicles allow reducing power consumption and CO₂ emissions, increasing the efficiency of the engines and the maximum speed reached. Additionally, they reduce wear of wheels and rails, increase the space available for passengers, etc. Among the various systems that integrate railway interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort, as well as on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. In this way, they can give rise to higher fatigue loading under service conditions or to zones with high stress concentrations under exceptional loads (higher longitudinal, transverse and vertical accelerations), thus reducing the system's useful life. Therefore, to verify all the mechanical and functional requirements of the flooring systems, many physical prototypes would be created during the design phase, with all the high costs associated with them. Nowadays, the use of virtual prototyping methods based on computer-aided design (CAD) and computer-aided engineering (CAE) software allows a product to be validated before committing to physical test prototypes. The scope of this work was to use current computer tools and integrate the processes of innovation, development, and manufacturing to reduce the time from design to finished product and to optimise the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores (polystyrene foams and composite corks) was assessed to optimise the weight and mechanical performance of a flooring solution for railways. Sandwich panels with aluminium face sheets were tested to characterise their mechanical performance and determine the properties of the polystyrene foam and cork when used as inner cores. Then, a railway flooring solution was fully modelled (including the elastomer pads that provide the required vibration isolation from the car body), and structural simulations were performed using FEM analysis to comply with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations were studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on selected solutions and on specific components. The results of the numerical simulations and the experimental campaign carried out are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020. Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system
Procedia PDF Downloads 179
161 Effects of AI-driven Applications on Bank Performance in West Africa
Authors: Ani Wilson Uchenna, Ogbonna Chikodi
Abstract:
This study examined the impact of artificial intelligence (AI) driven applications on banks’ performance in West Africa, using Nigeria and Ghana as case studies. Specifically, the study examined the extent to which the deployment of smart automated teller machines impacts banks’ net worth within the reference period in Nigeria and Ghana. It ascertained the impact of point of sale services on banks’ net worth within the reference period in Nigeria and Ghana. Thirdly, it verified the extent to which web pay services influence banks’ performance in Nigeria and Ghana and, finally, determined the impact of mobile pay services on banks’ performance in Nigeria and Ghana. The study used automated teller machines (ATM), point of sale services (POS), mobile pay services (MOP) and web pay services (WBP) as proxies for the explanatory variables, while bank net worth (BNW) was used as the explained variable. The data for this study were sourced from the Central Bank of Nigeria (CBN) Statistical Bulletin, the Bank of Ghana (BoG) Statistical Bulletin, the Ghana payment systems oversight annual report and the World Development Indicators (WDI). Furthermore, the mixed order of integration observed in the panel unit root test results justified the autoregressive distributed lag (ARDL) approach to data analysis, which the study adopted. While the cointegration test showed the existence of cointegration among the studied variables, the bounds test result confirmed the presence of a long-run relationship among the series. Again, the ARDL error correction estimate established a satisfactory speed of adjustment (13.92%) from long-run disequilibrium back to the short-run dynamic relationship. The study found that automated teller machines (ATM) had a statistically significant impact on the bank net worth (BNW) of Nigeria and Ghana, point of sale services (POS) had a statistically significant impact on bank net worth within the study period, and mobile pay services were statistically significant in explaining changes in the bank net worth of the countries studied, while web pay services (WBP) had no statistically significant impact on the bank net worth of the countries of reference. The study concluded that AI-driven applications have a significant and positive impact on bank performance, with the exception of web pay, which had a negative impact on bank net worth. The study recommended that the management of banks in both Nigeria and Ghana encourage more investment in AI-powered smart ATMs aimed at delivering more secure banking services in order to increase revenue, discourage excessive queuing in the banking hall, reduce fraud and minimize errors in processing transactions. Banks within the scope of this study should leverage modern technologies to checkmate the excesses of private POS operators in order to build more confidence in potential customers. Government should turn mobile pay services into a counter-terrorism tool by ensuring that restrictions limiting over-the-counter withdrawals to a minimum amount are maintained and by placing sanctions on withdrawals above that limit. Keywords: artificial intelligence (AI), bank performance, automated teller machines (ATM), point of sale (POS)
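The error-correction logic behind the reported 13.92% speed of adjustment can be sketched with statsmodels. The two-step Engle-Granger style estimate below is a simplified stand-in for the ARDL bounds-testing procedure actually used; the column names (BNW, ATM, POS, MOP, WBP) follow the proxies above, but the data frame and lag structure are placeholders.

```python
import pandas as pd
import statsmodels.api as sm

def error_correction_model(df, dep="BNW", regressors=("ATM", "POS", "MOP", "WBP")):
    """Two-step ECM: long-run OLS, then short-run dynamics with a lagged error term."""
    X = sm.add_constant(df[list(regressors)])
    long_run = sm.OLS(df[dep], X).fit()
    ect = long_run.resid.shift(1).rename("ECT")      # lagged equilibrium error

    d = df.diff().add_prefix("d_")                   # first differences
    short = pd.concat([d, ect], axis=1).dropna()
    Xs = sm.add_constant(short[["d_" + r for r in regressors] + ["ECT"]])
    short_run = sm.OLS(short["d_" + dep], Xs).fit()
    # a negative, significant ECT coefficient gives the speed of adjustment
    return long_run, short_run
```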
Procedia PDF Downloads 7
160 Symptomatic Strategies: Artistic Approaches Resembling Psychiatric Symptoms
Authors: B. Körner
Abstract:
This paper compares deviant behaviour in two different readings: 1) as symptomatic for so-called ‘mental illness’ and 2) as part of artistic creation. It analyses works of performance art in the respective frames of psychiatric evaluation and performance studies. This speculative comparison offers an alternative interpretation of mad behaviour beyond pathologisation. It questions the distinction of psychiatric diagnosis, which can contribute to reducing the stigmatisation of mad people. The stigma associated with madness entails exclusion, prejudice, and systemic oppression. Symptoms of psychiatric diagnoses can be considered as behaviour exceptional to the psychological norm. This deviant behaviour constitutes an outsider role which is also defining for the societal role of ‘the artist’, whose transgressions of the norm are expected and celebrated. The research proposes the term ‘artistic exceptionalism’ for this phenomenon. In this study, a set of performance artworks are analysed within the frame of an art-theoretical interpretation and as if they were the basis of a psychiatric assessment. This critical comparison combines the perspective on ‘mental illness’ of mad studies with methods of interpretation used in performance studies. The research employs auto theory and artistic research; interweaving lived experience with scientific theory building through the double role of the author as both performance artist and survivor researcher. It is a distinctly personal and mad thought experiment. The research proposes three major categories of artistic strategies approaching madness: (a) confronting madness (processing and publicly addressing one's own experiences with mental distress through artistic creation), (b) creating critical conditions (conscious or unconscious, voluntary or involuntary creation of crisis situations in order to create an intense experience for a work of art), and (c) symptomatic strategies. This paper focuses on the last of the three categories: symptomatic strategies. These can be described as artistic methods with parallels to forms of coping with and/or symptoms of ‘mental disorders.’ These include, for example feverish activity, a bleak worldview, additional perceptions, an urge for order, and the intensification of emotional experience. The proposed categories are to be understood as a spectrum of approaches that are not mutually exclusive. This research does not aim to diagnose or pathologise artists or their strategies; disease value is neither sought nor assumed. Neither does it intend to belittle psychological suffering, implying that it cannot be so bad if it is productive for artists. It excludes certain approaches that romanticise and/or exoticise mental distress, for example, artistic portrayal of people in mental crisis (e.g., documentary-observational or exoticising depictions) or the deliberate and exaggerated imitation of their forms of expression and behaviour as ‘authentic’ (e.g., Art Brut). These are based on the othering of the Mad and thus perpetuate the social stigma to which they are subjected. By noting that the same deviant behaviour can be interpreted as the opposite in different contexts, this research offers an alternative approach to madness beyond the confines of psychiatry. It challenges the distinction of psychiatric diagnosis and exposes its social constructedness. Hereby, it aims to empower survivors and reduce the stigmatisation of madness.Keywords: artistic research, mad studies, mental health, performance art, psychiatric stigma
Procedia PDF Downloads 79
159 Even When the Passive Resistance Is Obligatory: Civil Intellectuals’ Solidarity Activism in Tea Workers Movement
Authors: Moshreka Aditi Huq
Abstract:
This study shows how a progressive portion of civil intellectuals in Bangladesh contributed as solidarity activist entities in a movement of tea workers that became the symbol of their unique moral struggle. Their passive yet sharp form of resistance, with the integration of the mass of tea workers of a tea estate, was demonstrated against certain private companies and government officials who sought to establish a special economic zone inside the tea garden without offering any compensation or rehabilitation for poor tea workers. Due to massive protests and rebellion, the authorized entrepreneurs had to step back and call off the project immediately. The extraordinary features of this movement stemmed from the deep social needs of indigenous tea workers who are still imprisoned in the colonial cage. Following an anthropological and ethnographic perspective, this study adopted the three main techniques of intensive interviews, focus group discussions, and sustained observation to extract empirical data. The intensive interviews were undertaken informally using a mostly conversational approach. Focus group discussions were conducted among various representative groups, where observation prevailed as part of the regular documentation process. These were conducted among civil intellectual entities, tea workers, tea estate authorities, civil service authorities, and business officials to obtain a holistic view of the situation. The fieldwork was carried out in the capital, Dhaka city, along with northern areas such as the Chandpur-Begumkhan Tea Estate of Chunarughat Upazilla and Habiganj city of Habiganj District of Bangladesh. Correspondingly, secondary data were accessed through books, scholarly papers, archives, newspapers, reports, leaflets, posters, blog writing, and the electronic pages of social media. The study results find that: (1) civil intellectuals opposed state-sponsored business impositions by producing counter-discourse and struggled against state hegemony through the phases of the movement; (2) instead of active physical resistance, civil intellectuals’ strength lay rather in a passive form, portrayed through their intellectual labor; (3) the combined movement of tea workers and civil intellectuals reflected on the social security of ethnic worker communities, which contrasts with the state’s pseudo-development motives that ultimately support offensive and oppressive neoliberal economic growth; (4) civil intellectuals are revealed as having certain functional limitations in the process of movement organization as well as resource mobilization; (5) in specific contexts, the genuine need for protest by the indigenous subaltern can overshadow intellectual elitism and helps to raise the voices of ‘subjugated knowledge’. This study is quite likely to represent two sets of apparent protagonist entities in the discussion of social injustice and oppressive development intervention. On the one hand, it may help us to find the basic functional characteristics of civil intellectuals in Bangladesh when they are in a passive mode of resistance in social movement issues. On the other hand, it represents the community ownership and inherent protest tendencies of indigenous workers when they feel threatened and insecure. The study seems to have the potential to help in understanding the conditions of the ‘subjugated knowledge’ of subalterns. Furthermore, as memory and narratives, these ‘activism mechanisms’ of social entities broaden the path to understanding ‘power’ and ‘resistance’ in more fascinating ways. Keywords: civil intellectuals, resistance, subjugated knowledge, indigenous
Procedia PDF Downloads 125
158 Technological Challenges for First Responders in Civil Protection: The RESPOND-A Solution
Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos
Abstract:
Summer 2021 was marked by a number of large fires inside the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g. rescEU), a number of challenges have arisen, not only in relation to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architectural structure for best associating modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system that would offer high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these Responders, as well as their smoldering worry that the evolving threat of climate change and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system. The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, based on a large fire scenario in a Wildland-Urban Interface (WUI) zone. An important issue will be examined during the second pilot study: the flow of information, which is severely affected during a crisis, both among the wider public and between first responders and the public in either direction. Call centers are flooded with requests, and communication is compromised or breaks down on many occasions, which in turn affects the effort to build a common operational picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if traffic is disrupted, aerial means remain the only way to observe the area and perform rapid surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed. Keywords: first responders, safety, civil protection, new technologies
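The five-tier architecture named above (Perception, Network, Processing, Comprehension, User Interface) can be pictured as a simple ordered data flow. The Python sketch below is only an illustration of that layering; the class, handler interface and payload structure are assumptions and do not come from the RESPOND-A project itself.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# The five RESPOND-A tiers named in the abstract, modelled here as ordered stages.
LAYERS = ["Perception", "Network", "Processing", "Comprehension", "User Interface"]

@dataclass
class LayeredPipeline:
    stages: Dict[str, Callable[[Any], Any]] = field(default_factory=dict)

    def register(self, layer: str, handler: Callable[[Any], Any]) -> None:
        assert layer in LAYERS, f"unknown layer: {layer}"
        self.stages[layer] = handler

    def run(self, sensor_payload: Any) -> Any:
        data = sensor_payload
        for layer in LAYERS:                       # data flows tier by tier
            data = self.stages.get(layer, lambda d: d)(data)
        return data
```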
Procedia PDF Downloads 142
157 Next-Generation Lunar and Martian Laser Retro-Reflectors
Authors: Simone Dell'Agnello
Abstract:
There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod laser retro-reflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant Gdot/G, the weak and strong equivalence principle, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession, the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements also allowed for testing extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon. In fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, current arrays now dominate the error budget because of lunar librations and their multi-CCR geometry. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), unaffected by librations, which supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for their deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars Lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will run from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR. Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy
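The 'laser ranging' technique mentioned above reduces, at its simplest, to converting a round-trip pulse time of flight into a one-way distance. The sketch below ignores the atmospheric, tidal and relativistic corrections that real LLR analysis (e.g., with PEP) applies; the 2.56 s example value is only indicative of the Earth-Moon scale.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_round_trip_s: float) -> float:
    """One-way station-to-reflector distance from a round-trip pulse time."""
    return 0.5 * C * t_round_trip_s

# Example: a ~2.56 s round trip corresponds roughly to the Earth-Moon distance.
print(range_from_round_trip(2.56) / 1e3, "km")   # ~384,000 km
```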
Procedia PDF Downloads 270
156 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpretation of X-ray images relies on the expertise and experience of medical professionals. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research revealed that an optimal approach must address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert has become a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes. Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
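A hedged PyTorch sketch of the classification ensemble is shown below: ResNet18 and VGG16 heads whose softmax outputs are averaged, with the top class probability doubling as a confidence score. The 12-class head, the equal-weight averaging and the torchvision >= 0.13 weights=None API are assumptions based on the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_FRACTURE_TYPES = 12  # avulsion, comminuted, ..., spiral (listed in the abstract)

class FractureEnsemble(nn.Module):
    """Average the softmax outputs of ResNet18 and VGG16 classification heads."""
    def __init__(self, num_classes: int = NUM_FRACTURE_TYPES):
        super().__init__()
        self.resnet = models.resnet18(weights=None)
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, num_classes)
        self.vgg = models.vgg16(weights=None)
        self.vgg.classifier[6] = nn.Linear(self.vgg.classifier[6].in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p1 = F.softmax(self.resnet(x), dim=1)
        p2 = F.softmax(self.vgg(x), dim=1)
        return 0.5 * (p1 + p2)                     # ensemble class probabilities

# the probability of the predicted class doubles as the reported confidence score
probs = FractureEnsemble()(torch.randn(1, 3, 224, 224))
confidence, fracture_type = probs.max(dim=1)
```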
Procedia PDF Downloads 118
155 Ways for University to Conduct Research Evaluation: Based on National Research University Higher School of Economics Example
Authors: Svetlana Petrikova, Alexander Yu Kostinskiy
Abstract:
Management of research evaluation in the Higher School of Economics (HSE) originates from the HSE Academic Fund, created in 2004 to facilitate and support academic research and to present its results to the international academic community. As a means of inspiring applicants, science projects went through a competitive selection process evaluated by a group of experts. The rapid development of HSE, the number of projects submitted to each Academic Fund competition and the need to coordinate the conduct of expert evaluation resulted in the founding of the Office for Research Evaluation in 2013. The Office’s primary objective is the management of research evaluation of science projects. The standards for conducting the evaluation are defined as follows: - The exercise of the process approach and the unification of the functioning of departments. - The uniformity of the regulatory, organizational and methodological framework. - The development of a proper on-line evaluation system. - The broad involvement of external Russian and international experts and the renouncement of the use of the university's own employees. - The development of an algorithm to match experts to science projects. - The methodical use of open and closed international and Russian databases to extend the expert database. - The transparency of evaluation results – free access to assessments while maintaining expert confidentiality. The management of research evaluation of projects is based on a single standard, organization and financing. The standard way of conducting research evaluation at HSE is based upon the Regulations on basic principles for research evaluation at HSE. These Regulations have been developed since the establishment of the Office for Research Evaluation and are based on conventional corporate standards for regulatory document management. The management system of research evaluation is implemented on the basis of the process approach. The process approach means organizing work as a process, i.e., an aggregation of interrelated and interacting activities that transform inputs into outputs. The inputs are, firstly, the client asking for the assessment to be conducted and defining the conditions for organizing and carrying out the assessment and, secondly, the applicant with an application suitable for the competition; the output is the assessment delivered to the client. In exercising the process approach, the main parties, or subjects, of the assessment are determined in order to clarify their interrelations and interactions, and the way they interact takes shape. The parties to expert assessment are: - The Ordering Party – the department of the university taking the decision to subject a project to expert assessment; - The Providing Party – the department of the university authorized by the Ordering Party to provide such assessment; - The Performing Party – the legal and natural entities that have expertise in the area of research evaluation. Experts assess projects in accordance with the criteria and expert opinion forms approved by the Ordering Party. The objects of assessment are generally applications or HSE competition project reports. Assessments are mainly conducted for internal needs, i.e., most ordering parties are HSE branches and departments, but assessments can also be conducted for external clients. The financing of research evaluation at HSE is based on the established corporate culture and traditions of HSE. Keywords: expert assessment, management of research evaluation, process approach, research evaluation
Procedia PDF Downloads 253
154 Flood Risk Assessment and Mapping: Finding the Vulnerability to Flood Level of the Study Area and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model
Authors: Muhammad Karim Ahmadzai
Abstract:
Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions, but they are also affected by the geology and geomorphology of the area, topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have caused further flooding. The stream beds, which have been trampled to build houses and hotels or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch are facing a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytical Hierarchy Process (AHP). The Analytic Hierarchy Process, normally called AHP, is a powerful yet simple method for making decisions. It is commonly used for project prioritization and selection. AHP captures strategic goals as a set of weighted criteria that are then used to score projects. This method is used to provide a weight for each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, slope), these led to the final flood risk map. Finally, according to this map, the priority protection areas and villages were identified, and structural and non-structural measures were demonstrated to minimize the impacts of floods on residential and agricultural areas. Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis
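The AHP weighting step can be sketched in a few lines of NumPy: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and checked with the consistency ratio. The 3x3 example matrix and the criteria named in the comment are illustrative only and are not the judgements used in the study.

```python
import numpy as np

# Saaty's random consistency indices by matrix size
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise: np.ndarray):
    """Priority weights and consistency ratio for a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                  # principal eigenvector, normalised
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    cr = ci / RI[n]                               # consistency ratio (should be < 0.1)
    return w, cr

# illustrative 3-criterion judgement matrix (e.g., slope vs. distance-to-river vs. land use)
A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]], float)
weights, cr = ahp_weights(A)
```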
Procedia PDF Downloads 117
153 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent, in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained a great secret of life. Another is the pairs of mirror-image molecules – enantiomers. They are identical in atomic composition and therefore indistinguishable in the scalar physical properties. Yet, they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in their scalar properties, the chirality of a molecule reveals itself in interaction with the surrounding of a certain chirality, or more generally, a broken mirror-symmetry. In this work, we report on a system for chiral molecule detection, in which the mirror-symmetry is doubly broken, first by asymmetric structuring a nanopatterned plasmonic surface than by the incidence of circularly polarized light (CPL). In this system, the incident circularly-polarized light induces a surface plasmon polariton (SPP) wave, propagating along the asymmetric plasmonic surface. This SPP field itself is chiral, evanescently bound to a near-field zone on the surface (~10nm thick), but with an amplitude greatly intensified (by up to 104) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic-oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification. The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing a 1064nm CW laser light at the sample - a gold grating microchip submerged in an approximately 1.82M solution of either L-arabinose or D-arabinose and water. A measurement of the current output was then recorded under both rights and left circularly polarized lights. Measurements were recorded at various angles of incidence to optimize the coupling between the spin-momentums of the incident light and that of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for the right CPL are subtracted from those for the left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and the other enantiomer to right CPL. In sum, this work reports on the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and integration ability with read-out electronic circuits for data processing and interpretation.Keywords: Chirality, detection, molecule, spin
Procedia PDF Downloads 92
152 Gamifying Content and Language Integrated Learning: A Study Exploring the Use of Game-Based Resources to Teach Primary Mathematics in a Second Language
Authors: Sarah Lister, Pauline Palmer
Abstract:
Research findings presented within this paper form part of a larger-scale collaboration between academics at Manchester Metropolitan University and a technology company. The overarching aims of this project focus on developing a series of game-based resources to promote the teaching of aspects of mathematics through a second language (L2) in primary schools. This study explores the potential of game-based learning (GBL) as a dynamic way to engage and motivate learners, making learning fun and purposeful. The research examines the capacity of GBL resources to provide a meaningful and purposeful context for CLIL. GBL is a powerful learning environment and acts as an effective vehicle to promote the learning of mathematics through an L2. The fun element of GBL can minimise the stress and anxiety associated with mathematics and L2 learning that can create barriers. GBL provides one of the few safe domains where it is acceptable for learners to fail. Games can provide a life-enhancing experience for learners, revolutionizing routinized ways of learning by fusing learning and play. This study argues that playing games requires learners to think creatively to solve mathematical problems, using the L2 in order to progress, which can be associated with the development of higher-order thinking skills and independent learning. GBL requires learners to engage appropriate cognitive processes, with increased speed of processing, sensitivity to environmental inputs, or flexibility in allocating cognitive and perceptual resources. At surface level, GBL resources provide opportunities for learners to learn to do things. Games that fuse subject content and appropriate learning objectives have the potential to make learning academic subjects easier, more enjoyable, more stimulating, more engaging and therefore more effective, as well as more learner-centered, promoting learner autonomy. Data include observations of the children playing the games and follow-up group interviews. Given that learning as a cognitive event cannot be directly observed or measured, a Cognitive Discourse Functions (CDF) construct was used to frame the research, to map the development of learners’ conceptual understanding in an L2 context, and to provide a framework for observing the discursive interactions that occur between learners and between learner and teacher. Cognitively, the children were required to engage with mathematical content, concepts and language to make decisions quickly, and to engage with the gameplay to reason, solve and overcome problems and learn through experimentation. The visual elements of the games supported the learning of new concepts. Children recognised the value of the games in consolidating their mathematical thinking and developing their understanding of new ideas. The games afforded them time to think and reflect. The teachers affirmed that the games provided meaningful opportunities for the learners to practise the language. The findings of this research support the view that using the game-based resources supported children’s grasp of mathematical ideas and their confidence and ability to use the L2. Engaging with the content and language through the games led to deeper learning. Keywords: CLIL, gaming, language, mathematics
Procedia PDF Downloads 142
151 Adaptation Measures as a Response to Climate Change Impacts and Associated Financial Implications for Construction Businesses by the Application of a Mixed Methods Approach
Authors: Luisa Kynast
Abstract:
It is obvious that buildings and infrastructure are highly impacted by climate change (CC). Both the design and the materials of buildings need to be resilient to weather events in order to shelter humans, animals, or goods. Just as buildings and infrastructure are exposed to weather events, the construction process itself is generally carried out outdoors without protection from extreme temperatures, heavy rain, or storms. The production process is restricted by technical limitations for processing materials with machines and by the physical limitations of human beings ('outdoor workers'). In the future, due to CC, average weather patterns are expected to change, and extreme weather events are expected to occur more frequently and more intensely, and therefore to have a greater impact on production processes and on construction businesses themselves. This research aims to examine this impact by analyzing the association between responses to CC and the financial performance of businesses within the construction industry. After embedding the above-depicted field of research in resource dependency theory, a literature review was conducted to expound the state of research concerning a contingent relation between climate change adaptation measures (CCAM) and corporate financial performance for construction businesses. The examined studies show that this field is rarely investigated, especially for construction businesses. Therefore, reports from the Carbon Disclosure Project (CDP) were analyzed by applying content analysis using the software tool MAXQDA. 58 construction companies located worldwide could be examined. To proceed even more systematically, a coding scheme analogous to findings in the literature was adopted. Based on the qualitative analysis, the data were quantified and a regression analysis incorporating corporate financial data was conducted. The results stress adaptation measures as a response to CC as a crucial proxy for handling climate change impacts (CCI) by mitigating risks and exploiting opportunities. In the CDP reports, the majority of answers stated increasing costs/expenses as a result of implemented measures. A link to sales/revenue was rarely drawn; where it was, CCAM were connected to increasing sales/revenues. This presumption is supported by the results of the regression analysis, where a positive effect of implemented CCAM on construction businesses' financial performance in the short run was ascertained. These findings refer to appropriate responses in terms of the number of implemented CCAM. Even so, businesses still show a reluctant attitude towards implementing CCAM, which was confirmed by findings in the literature as well as in the CDP reports. Businesses mainly associate CCAM with costs and expenses rather than with an effect on their corporate financial performance. Most companies underrate the effect of CCI, overrate the costs and expenditures for the implementation of CCAM and completely neglect the pay-off. Therefore, this research shall create a basis for bringing CC to the (financial) attention of corporate decision-makers, especially within the construction industry. Keywords: climate change adaptation measures, construction businesses, financial implication, resource dependency theory
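A minimal sketch of the quantification step is given below: the number of coded adaptation measures per company is regressed on a short-run financial performance indicator with statsmodels. The variable names, the size control and the data frame are hypothetical placeholders, not the CDP data analysed in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical coded data: one row per CDP-reporting construction firm
df = pd.DataFrame({
    "n_ccam":         [2, 5, 1, 7, 4, 3],                 # number of coded adaptation measures
    "revenue_growth": [0.01, 0.06, -0.02, 0.09, 0.03, 0.02],
    "firm_size":      [8.2, 9.1, 7.5, 9.8, 8.7, 8.0],     # log total assets (control)
})

# short-run financial performance regressed on implemented CCAM plus a size control
model = smf.ols("revenue_growth ~ n_ccam + firm_size", data=df).fit()
print(model.summary())   # a positive n_ccam coefficient mirrors the positive short-run effect
```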
Procedia PDF Downloads 143
150 Using Health Literacy and Medico-Legal Guidance to Improve Restorative Dentistry Patient Information Leaflets
Authors: Hasneet K. Kalsi, Julie K. Kilgariff
Abstract:
Introduction: Within dentistry, the process of gaining informed consent has become more complex. To consent to treatment, patients must understand all reasonable treatment options and the associated risks and benefits. Consenting is therefore deeply embedded in health literacy. Patients attending for dental consultation are often presented with an array of information and choices, yet studies show patients recall less than half of the information provided immediately afterwards. Appropriate and comprehensible patient information leaflets (PILs) may be useful aides-memoires. In 2016 the World Health Organisation set improving health literacy as a global priority. Soon after, Scotland's 2017-2025 Making it Easier: A Health Literacy Action Plan followed. This project involved the review of Restorative PILs used within Dundee Dental Hospital to assess their content and readability. Method: The current PIL on Root Canal Treatment (RCT) was created in 2011. This predates the Montgomery vs. NHS Lanarkshire case, a ruling which significantly impacted dental consenting processes, as well as the General Dental Council's (GDC's) Standards for the Dental Team and the Faculty of General Dental Practice's Good Practice Guidance on Clinical Examination and Record-Keeping. Current evidence-based guidance, including that stipulated by the GDC, was reviewed. A 20-point Essential Content Checklist was designed to conform to best practice guidance for valid consenting processes. The RCT leaflet was scored against this to ascertain whether the content was satisfactory. Having ensured the content satisfied medicolegal requirements, health literacy considerations were reviewed with regard to readability. This was assessed using McLaughlin's Simple Measure of Gobbledygook (SMOG) formula, which identifies the school stage that would have to be achieved to comprehend the PIL. The sensitivity of the results to alternative readability methods was assessed. Results: The PIL was not sufficient for modern consenting processes and reflected a suboptimal level of health literacy. Evaluation of the leaflet revealed key content was missing, including information pertaining to risks and benefits. Only five points out of the 20-point checklist were present. The readability score was 16, equivalent to Level 2 in the National Adult Literacy Standards/Scottish Credit and Qualification Framework Level 5; 62% of Scottish adults are able to read to this standard. Discussion: Assessment of the leaflet showed it was no longer fit for purpose. Reasons include a lack of pertinent information, a text-heavy leaflet lacking flow, and content errors. The SMOG score indicates that a high level of comprehension is required to understand this PIL, which many patients may not possess. A new PIL, compliant with medicolegal and health literacy guidance, was designed with patient-driven checklists, space for notes, annotations and questions, and areas for clinicians to highlight important case-specific information. It has been tested using the SMOG formula. Conclusion: PILs can be extremely useful. Studies show that interactive use can enhance their effectiveness. PILs should reflect best practice guidance and be understood by patients. The 2020 leaflet designed and implemented aims to fulfill the needs of a modern healthcare system and its service users. It embraces and embeds Scotland's Health Literacy Action Plan within the consenting process. A review of further leaflets using this model is ongoing. Keywords: consent, health literacy, patient information leaflet, restorative dentistry
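The SMOG calculation referred to above can be sketched as follows. The published formula takes the count of polysyllabic words (three or more syllables) over a sample of sentences; the syllable counter used here is a crude heuristic and the text segmentation is simplified, so this is an illustration of the formula rather than the validated scoring procedure.

```python
import re

def smog_grade(n_polysyllables: int, n_sentences: int) -> float:
    """McLaughlin's SMOG formula: grade = 1.043 * sqrt(polysyllables * 30/sentences) + 3.1291."""
    return 1.043 * (n_polysyllables * 30.0 / n_sentences) ** 0.5 + 3.1291

def rough_syllables(word: str) -> int:
    """Crude vowel-group count; a validated syllable counter should replace this."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_for_text(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if rough_syllables(w) >= 3)
    return smog_grade(poly, len(sentences))
```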
Procedia PDF Downloads 143
149 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, used in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, the absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models for training and for evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management. Keywords: optical sensor, regression model, nitrites, water quality
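The calibration step can be sketched with scikit-learn: absorbances at 295, 310 and 357 nm are regressed onto known nitrite concentrations so that the trained model predicts the concentration of a new sample. The training values below are placeholders, and ordinary least squares stands in for whichever regression algorithm performed best in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: absorbance at 295 nm, 310 nm, 357 nm (placeholder calibration data)
A_train = np.array([[0.02, 0.03, 0.01],
                    [0.21, 0.26, 0.12],
                    [0.43, 0.52, 0.25],
                    [0.85, 1.02, 0.49]])
c_train = np.array([0.1, 10.0, 25.0, 50.0])       # known nitrite concentrations, ppm

model = LinearRegression().fit(A_train, c_train)  # learns the cross-wavelength weights

A_sample = np.array([[0.30, 0.37, 0.18]])         # a new field measurement
print("predicted nitrite (ppm):", model.predict(A_sample)[0])
```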
Procedia PDF Downloads 72148 Development of Mesoporous Gel Based Nonwoven Structure for Thermal Barrier Application
Authors: R. P. Naik, A. K. Rakshit
Abstract:
In recent years, with the rapid development of science and technology, people have increasing requirements for clothing with new functions, which creates opportunities for further development and the incorporation of new technologies along with novel materials. In this context, textiles act as media of fast heat absorption (decalescence) or fast heat radiation, which matters for the comfort performance of textile articles. The microstructure and texture of textiles play a vital role in determining the heat-moisture comfort level of the human body because clothing serves as a barrier to the outside environment and a transporter of heat and moisture from the body to the surrounding environment to keep a thermal balance between body heat produced and body heat lost. The main bottlenecks that textile materials must overcome to succeed as thermal insulation materials can be enumerated as follows. First, a high loft or bulkiness of material is needed to provide a predetermined amount of insulation by ensuring sufficient trapping of air. Second, the insulation is limited by forced convection; such convective heat loss cannot be prevented by the textile material alone. Third, a textile alone cannot reach a thermal conductivity lower than that of air (0.025 W/m·K). Nano-fibers can perhaps do so, but mass production and cost-effectiveness remain problems. Finally, such high-loft materials for thermal insulation become heavy and difficult to manage, especially when they must be carried on the body. The proposed work aims at developing lightweight, effective thermal insulation textiles in combination with nanoporous silica gel, which provides the fundamental basis for the optimization of material properties to achieve good performance of the clothing system. A flexible nonwoven silica-gel composite fabric with an intact monolith was successfully developed by reinforcing SiO₂ gel in a thermally bonded nonwoven fabric via sol-gel processing. The ambient pressure drying method was chosen for silica gel preparation to keep manufacturing cost-effective. The structure of the formed nonwoven/SiO₂-gel composites was analyzed, and their transfer properties were measured. The effects of structure and fibre on the thermal properties of the SiO₂-gel composites were evaluated. Samples were then tested against untreated samples of the same GSM in order to study the effect of SiO₂-gel application on various properties of the nonwoven fabric. The nonwoven fabric composites reinforced with aerogel, which showed an intact monolith structure, were also analyzed for their surface structure, the functional groups present, and their microscopic images. The developed product reveals a significant reduction in pore size and air permeability compared with the conventional nonwoven fabric. The composite made from polyester fibre with lower GSM shows the lowest thermal conductivity. The results obtained were statistically analyzed using STATISTICA-6 software for their level of significance. Univariate tests of significance for various parameters were performed, giving the p-values for analyzing the significance level, and the regression summary for the dependent variable was also studied to obtain the correlation coefficient.Keywords: silica-gel, heat insulation, nonwoven fabric, thermal barrier clothing
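The abstract reports the statistical analysis in STATISTICA-6; below is a minimal, hypothetical Python analogue of the significance testing and regression summary described (column names, file name, and model terms are assumptions, not the authors' actual analysis).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical measurement table: one row per specimen, with fibre type,
# fabric GSM, SiO2-gel treatment flag and measured thermal conductivity (W/m.K).
df = pd.read_csv("nonwoven_composites.csv")

# Ordinary least squares regression of thermal conductivity on the main
# processing parameters; the summary reports p-values for each term and an
# overall fit statistic, analogous to a univariate significance test output.
model = smf.ols("thermal_conductivity ~ C(fibre) + gsm + C(treated)", data=df).fit()
print(model.summary())
print("correlation coefficient R =", model.rsquared ** 0.5)
```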
Procedia PDF Downloads 111147 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management or conflict activities. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows this information, fundamental to post-disaster comprehension of the terrain, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the declivity map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ encompasses the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that serves as an important decision support tool.Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
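The classification itself was run in WEKA 3.8; as a rough illustration of the same 200-tree Random Forest with majority voting, the sketch below uses scikit-learn as a stand-in. The feature files, feature layout, and split strategy are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-segment feature table: spectral bands from the orthomosaic,
# declivity (slope) derived from the DSM, and the leading PCA components.
X = np.load("features.npy")   # shape (n_samples, n_features)
y = np.load("labels.npy")     # classes: "HLZ1", "HLZ2", "HLZ3", "AZ"

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# 200 de-correlated trees, mirroring the RF configuration reported for WEKA 3.8;
# the final class for each object is decided by a majority of class votes.
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)

print(classification_report(y_test, rf.predict(X_test)))
```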
Procedia PDF Downloads 177146 Sustainability in Space: Implementation of Circular Economy and Material Efficiency Strategies in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
The ultimate aim of space exploration has centered on the possibility of life on other planets in the solar system. This aim is driven by the detrimental effects that climate change could potentially have on human survival on Earth in the future. This drives humans to search for feasible solutions to increase environmental and economic sustainability on Earth and to evaluate and explore the ability of humans to survive on other planets such as Mars. To do that, frequent space missions are required to meet the ambitious human goals. This means that reliable and affordable access to space is required, which could be largely achieved through the use of reusable spacecraft. Therefore, materials and resources must be used wisely to meet the increasing demand. Space missions are currently extremely expensive to operate. However, reusing materials, and hence spacecraft, can potentially reduce overall mission costs as well as the negative impact on both space and Earth environments. This is because reusing materials leads to less waste generated per mission, and therefore fewer landfill sites are required. Reusing materials reduces resource consumption, material production, and the need for processing new and replacement spacecraft and launch vehicle parts. Consequently, this will ease and facilitate human access to outer space, as it will reduce the demand for scarce resources, which will boost material efficiency in the space industry. Material efficiency expresses the extent to which resources are consumed in the production cycle and how the waste produced by the industrial process is minimized. The strategies proposed in this paper to boost material efficiency in the space sector are the introduction of key performance indicators that are able to measure material efficiency, as well as the introduction of clearly defined policies and legislation that can be easily implemented within the general practices of the space industry. Another strategy to improve material efficiency is to amplify energy and resource efficiency through reusing materials. The circularity of various spacecraft materials such as Kevlar, steel, and aluminum alloys could be maximized by reusing them directly or after galvanizing them with another layer of material to act as a protective coat. This research paper aims to investigate and discuss how to improve material efficiency in space missions considering circular economy concepts so that space and Earth become more economically and environmentally sustainable. The circular economy is a transition from a make-use-waste linear model to a closed-loop socio-economic model, which is regenerative and restorative in nature. The implementation of a circular economy will reduce waste and pollution by maximizing material efficiency, ensuring that businesses can thrive and remain sustainable. Further research into the extent to which reusable launch vehicles reduce space mission costs has been discussed, along with the environmental and economic implications they could have on the space sector and the environment. This has been examined through research and an in-depth literature review of published reports, books, scientific articles, and journals. Keywords such as material efficiency, circular economy, reusable launch vehicles and spacecraft materials were used to search for relevant literature.Keywords: circular economy, key performance indicator, material efficiency, reusable launch vehicles, spacecraft materials
Procedia PDF Downloads 125145 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear
Authors: Grant Bifolchi
Abstract:
As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing, paramount concern. Ghost Gear—also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG)—represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ efforts are further complicated by the challenge of effectively managing the files and documents associated with this environmental problem. In Ghost Gear monitoring and management, organizations face additional complexities. Whether it’s data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement due to disparate data—all of these factors are placing massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, Bcom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress—including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime-visibility of report status, user accountability, scheduling, management, and foster governmental transparency. • Maintain Compliance Reporting through highly defined, detailed and automated reports—enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology
Procedia PDF Downloads 95144 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, among others in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this situation includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reflect the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer’s behaviour and to determine the phenomena occurring during temperature fluctuations. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is typically of a prolonged, time-dependent nature. Studies therefore typically involve the measurement of the variation of one or more physical and mechanical properties of such a coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings’ properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick’s surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous solution of silicon compounds (series 6-10). In order to enhance the stability of the film formed on the brick’s surface and make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the polymer liquid phase and the solid nano-silica phase was obtained by disintegration of the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, these being the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (after a specified number of cycles) the changes in surface hardness (using the Vickers method) and the water absorption of individual samples. As a result, based on the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
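As a small illustration of the quantities tracked during the ageing cycles, the sketch below computes mass-based water absorption after total immersion and the relative change in Vickers hardness. The mass-based definition of absorption, the sample values, and the cycle schedule are assumptions; the abstract does not state the exact formulas the authors used.

```python
# Minimal sketch, assuming a standard mass-based absorption definition and
# illustrative placeholder readings for mass and Vickers hardness.

def water_absorption(mass_dry_g: float, mass_saturated_g: float) -> float:
    """Water absorption in % of dry mass after total immersion."""
    return 100.0 * (mass_saturated_g - mass_dry_g) / mass_dry_g

def hardness_change(hv_initial: float, hv_after_cycles: float) -> float:
    """Relative change in Vickers hardness (%) after temperature/moisture cycles."""
    return 100.0 * (hv_after_cycles - hv_initial) / hv_initial

mass_dry, mass_wet = 48.2, 51.9                         # g, hypothetical
hv_readings = {0: 42.0, 5: 40.5, 10: 38.9, 20: 36.1}    # HV, hypothetical

print(f"absorption: {water_absorption(mass_dry, mass_wet):.1f} %")
for cycle in (5, 10, 20):
    change = hardness_change(hv_readings[0], hv_readings[cycle])
    print(f"cycle {cycle}: hardness change {change:+.1f} %")
```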
Procedia PDF Downloads 243143 Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development
Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng
Abstract:
Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach was implemented: 1) identification of the optimal sample preparation reagents using pre-mixed bacteria and healthy donor stool samples, coupled with the proprietary Sigma-Aldrich® bioinformatics solution; 2) exploratory analysis of patient samples for enabling precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA 2002 and MSA 2003) across five different extraction kits (kits A, B, C, D & E). Both microbiome standards were extracted in triplicate across all extraction kits. Following isolation, DNA quantity was determined by Qubit assay. DNA quality was assessed to determine purity and to confirm that the extracted DNA is of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated via amplification of the V3/V4 hypervariable region of the 16S rRNA. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. FASTQ files were analyzed using the Sigma-Aldrich® Microbiome Platform. The Microbiome Platform is a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by other external providers. Data Summary: The DNA yield using extraction kits D and E was below the limit of detection (100 pg/µl) of the Qubit assay, as both extraction kits are intended for samples with low bacterial counts. The pre-mixed bacterial pellets at high concentrations, with an input of 2 x 10⁶ cells for MSA-2002 and 1 x 10⁶ cells for MSA-2003, were not compatible with these kits. Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B provided the least (Kit-A/MSA-2002: 174.25 ± 34.98; Kit-A/MSA-2003: 179.89 ± 30.18; Kit-B/MSA-2002: 27.86 ± 9.35; Kit-B/MSA-2003: 23.14 ± 6.39; Kit-C/MSA-2002: 55.19 ± 10.18; Kit-C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The PCoA 3D visualization of the weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier. The kit A sequencing samples cluster more closely together than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrated that the DNA extraction method impacts DNA concentration, purity, and the microbial communities detected by next-generation sequencing analysis. A further comparison of microbiome analysis performance using healthy stool samples is underway; colorectal cancer patients' samples will also be acquired to further explore the clinical utility. 
Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix for deciphering the deep biology and enabling precision medicine.Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics
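To illustrate the kind of beta-diversity ordination reported above, the following is a minimal sketch of a classical PCoA computed from a pairwise dissimilarity matrix. The study used weighted UniFrac (which additionally requires a phylogenetic tree); Bray-Curtis is used here only as a simpler stand-in, and the input file and sample layout are assumptions.

```python
import numpy as np

# Hypothetical taxa-abundance table: rows are sequenced replicates
# (kit A/B/C x standards), columns are taxa from the 16S pipeline.
counts = np.load("taxa_counts.npy").astype(float)   # shape (n_samples, n_taxa)

def bray_curtis(a: np.ndarray, b: np.ndarray) -> float:
    return np.abs(a - b).sum() / (a + b).sum()

n = counts.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = bray_curtis(counts[i], counts[j])

# Classical PCoA: double-centre the squared distance matrix and eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:3]] * np.sqrt(np.maximum(eigvals[order[:3]], 0))
print(coords)   # first three PCoA axes, e.g. for a 3D ordination plot
```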
Procedia PDF Downloads 170142 A Distributed Smart Battery Management System – sBMS, for Stationary Energy Storage Applications
Authors: António J. Gano, Carmen Rangel
Abstract:
Currently, electric energy storage systems for stationary applications have attracted increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics to operate with different battery pack topologies and charge capacities, integrating adaptive algorithms for real-time monitoring of the functional state and management of multicellular Li-ion batteries, and is intended for application in the context of a local energy community fed by renewable energy sources. This sBMS system includes several developed hardware units: (1) cell monitoring units (CMUs) for interfacing with each individual cell or module monitored within the battery pack; (2) a battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control, and functional operating state switching; (3) a main management and local control unit (MCU) for local sBMS management and control, also serving as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, since the individual units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMU units are intended to be used with Li-ion cells but can be used with other cell chemistries, with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the different implemented hardware solutions. The developed hardware supports both passive and active methods for charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units of this sBMS system, including the acquisition of different process variables using a flexible set of sensors, can support the development of custom algorithms for estimating the parameters defining the functional states of the battery pack (State-of-Charge, State-of-Health, etc.) as well as different charge equalizing strategies and algorithms. This sBMS system is intended to interface with other systems and devices using standard communication protocols, like those used by the Internet of Things. In the future, this sBMS architecture can evolve to a fully decentralized topology, with all the units using Wi-Fi protocols and integrating a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system executed so far: the implemented hardware solution serves not only as a fully functional, advanced, and configurable battery management system, but also as a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS
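As a rough sketch of two tasks such a distributed BMS typically performs, the snippet below (prototyped in Python rather than firmware) shows coulomb-counting State-of-Charge estimation and packing of cell voltages into an 8-byte CAN data field. The message layout, scaling, and sign convention are assumptions for illustration; the paper does not publish its CAN protocol or SoC algorithm.

```python
import struct

def update_soc(soc: float, current_a: float, dt_s: float, capacity_ah: float) -> float:
    """Coulomb counting: integrate current over time (positive = discharge)."""
    delta = (current_a * dt_s / 3600.0) / capacity_ah
    return min(1.0, max(0.0, soc - delta))

def pack_cell_voltages(millivolts: list[int]) -> bytes:
    """Pack up to 4 cell voltages (mV, uint16, little-endian) into an 8-byte CAN data field."""
    assert len(millivolts) <= 4
    padded = millivolts + [0] * (4 - len(millivolts))
    return struct.pack("<4H", *padded)

soc = 0.80
soc = update_soc(soc, current_a=12.5, dt_s=1.0, capacity_ah=50.0)   # one 1 s sample
payload = pack_cell_voltages([3312, 3308, 3305, 3310])              # hypothetical cell readings
print(f"SoC = {soc:.4f}, CAN payload = {payload.hex()}")
```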
Procedia PDF Downloads 100141 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter
Authors: Jiri Cecrdle
Abstract:
This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine’s rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically obtained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) Prague, Czechia. The demonstrator represents the wing and engine of a twin turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and a thrusting propeller. It allows changes to the main structural parameters influencing the whirl flutter stability characteristics. The propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain as well as pre-processing into the form of power spectral densities. The engine is linked with a servo-drive unit, which enables the propeller revolutions to be maintained (constant or following a controlled rate ramp) and the immediate revolutions and power to be monitored. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent any structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3 m-diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and finally, the angle of attack of the propeller blade 75% section. The excitation is provided either by airflow turbulence or by means of aerodynamic excitation by aileron flapping using a frequency harmonic sweep. The experimental results are planned to be utilized for the validation of analytical methods and software tools in the frame of the development of a new complex multi-blade twin-rotor propulsion system for the new generation of regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator.Keywords: aeroelasticity, flutter, whirl flutter, W WING demonstrator
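The abstract mentions pre-processing the measured response into power spectral densities; the snippet below is a minimal sketch of that step using Welch's method. The sampling rate, file name, and segment length are assumptions, not values taken from the demonstrator's actual acquisition setup.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical accelerometer record from the demonstrator: fs is the sampling
# rate in Hz, x the acceleration time history acquired during a sweep run.
fs = 2048.0
x = np.load("accel_pitch.npy")

# Welch-averaged power spectral density; peaks near the propeller-mounting
# pitch/yaw modes would be tracked as the structural parameters are varied.
f, pxx = welch(x, fs=fs, window="hann", nperseg=4096)
dominant = f[np.argmax(pxx)]
print(f"dominant response frequency: {dominant:.2f} Hz")
```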
Procedia PDF Downloads 96140 Increasing Student Engagement through Culturally-Responsive Classroom Management
Authors: Catherine P. Bradshaw, Elise T. Pas, Katrina J. Debnam, Jessika H. Bottiani, Michael Rosenberg
Abstract:
Worldwide, ethnically and culturally diverse students are at increased risk for school failure, discipline problems, and dropout. Despite decades of concern about this issue of disparities in education and other fields (e.g., the 'school to prison pipeline'), there has been limited empirical examination of models that can actually reduce these gaps in schools. Moreover, few studies have examined the effectiveness of in-service teacher interventions and supports specifically designed to reduce discipline disparities and improve student engagement. This session provides an overview of the evidence-based Double Check model, which serves as a framework for teachers to use culturally-responsive strategies to engage ethnically and culturally diverse students in the classroom and reduce discipline problems. Specifically, Double Check is a school-based prevention program which includes three core components: (a) enhancements to the school-wide Positive Behavioral Interventions and Supports (PBIS) tier-1 level of support; (b) five one-hour professional development training sessions, each of which addresses five domains of cultural competence (i.e., connection to the curriculum, authentic relationships, reflective thinking, effective communication, and sensitivity to students’ culture); and (c) coaching of classroom teachers using an adapted version of the Classroom Check-Up, which intends to increase teachers’ use of effective classroom management and culturally-responsive strategies using research-based motivational interviewing and data-informed problem-solving approaches. This paper presents findings from a randomized controlled trial (RCT) testing the impact of Double Check on office discipline referrals (disaggregated by race) and on independently observed and self-reported culturally-responsive practices and classroom behavior management. The RCT included 12 elementary and middle schools; 159 classroom teachers were randomized either to receive coaching or to serve as comparisons. Specifically, multilevel analyses indicated that teacher self-reported culturally responsive behavior management improved over the course of the school year for teachers who received the coaching and professional development. However, the average annual office discipline referrals issued to Black students were reduced among teachers who were randomly assigned to receive coaching relative to comparison teachers. Similarly, observations conducted by trained external raters indicated significantly more teacher proactive behavior management and anticipation of student problems, higher student compliance, less student non-compliance, and less socially disruptive behavior in classrooms led by coached teachers than in classrooms led by teachers randomly assigned to the non-coached condition. These findings indicated promising effects of the Double Check model on a range of teacher and student outcomes, including disproportionality in office discipline referrals among Black students, with significant reductions in discipline problems and improvements in behavior management. These results also suggest that the Double Check model is one of only a few systematic approaches to promoting culturally-responsive behavior management that has been rigorously tested and shown to be associated with improvements in student or staff outcomes. Implications of these findings are considered within the broader context of globalization and demographic shifts, and their impacts on schools. 
These issues are particularly timely, given growing concerns about immigration policies in the U.S. and abroad.Keywords: ethnically and culturally diverse students, student engagement, school-based prevention, academic achievement
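The trial's multilevel analyses (teachers nested within schools) could be specified in many ways; the sketch below shows one plausible mixed-effects formulation in Python. The variable names, covariates, and data file are hypothetical and are not the authors' actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level data: office discipline referrals (ODRs) issued to
# Black students, coaching condition (0 = comparison, 1 = Double Check coaching),
# with teachers nested within schools modelled as a random intercept.
df = pd.read_csv("double_check_teachers.csv")

model = smf.mixedlm("odr_black ~ condition + baseline_odr", data=df,
                    groups=df["school_id"])
result = model.fit()
print(result.summary())   # the 'condition' coefficient estimates the coaching effect
```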
Procedia PDF Downloads 282139 [Keynote Talk]: Surveillance of Food Safety Compliance of Hong Kong Street Food
Authors: Mabel Y. C. Yau, Roy C. F. Lai, Hugo Y. H. Or
Abstract:
This study is a pilot surveillance of hygiene compliance and microbial food safety among both licensed and mobile vendors selling Chinese ready-to-eat snack foods in Hong Kong. The study reflects similar situations encountered when running mobile food vending businesses on trucks. Hong Kong is about to launch the Food Truck Pilot Scheme by the end of 2016 or early 2017. Technically, selling food on a vehicle is no different from hawking or vending food on the street. Each type of business bears similar food safety issues and casts the same impact on public health. Present findings demonstrate exemplary situations that also apply to food trucks. Nine types of Cantonese-style snacks, 32 samples in total, were selected for microbial screening. A total of 16 vending sites, including supermarkets, street markets, and snack stores, were visited. The study finally focused on a traditional snack, the steamed rice cake with red beans called Put Chai Ko (PCK). PCK is a type of classical Cantonese pastry sold on push carts on the street. In the old days it used to be sold at room temperature and served with bamboo sticks. Some shops would sell it freshly steamed. Microbial examinations covering aerobic counts, yeast and mould, coliforms, Salmonella, and Staphylococcus aureus were carried out. Salmonella was not detected in any sample. Since PCK does not contain ingredients of beef, poultry, eggs or dairy products, the risk of the presence of Salmonella in PCK was relatively lower, although other sources of contamination might be possible. Coagulase-positive Staphylococcus aureus was found in 6 of the 14 samples sold at room temperature. Among these 6 samples, 3 were PCK. One of the samples was in an unacceptable range, with total colony forming units higher than 10⁵. The rest were only satisfactory. Observational evaluations were made with checklists on personal hygiene, premises hygiene, food safety control, food storage, cleaning and sanitization, as well as waste disposal. The maximum score was 25 if total compliance was obtained. The highest score among vendors was 20. Three stores were below average, and two of these stores were selling PCK. Most of the non-compliances concerned food processing facilities, sanitization conditions, and waste disposal. In conclusion, although no food poisoning outbreaks happened during the time of the investigation, the risk of food hazards existed in these stores, especially among street vendors. Attention is needed to the traditional practice of food selling, as food handlers might not have sufficient knowledge to handle food products properly. Variations in food quality existed among supply chains, franchise eateries, and shops. It was commonly observed that packaging and storage conditions are not properly enforced in retail outlets. The same situation could be reflected across the food business. This indicates a need for food safety training in the industry and reveals loopholes in quality control among businesses.Keywords: cantonese snacks, food safety, microbial, hygiene, street food
Procedia PDF Downloads 302138 Provision of Afterschool Programs: Understanding the Educational Needs and Outcomes of Newcomer and Refugee Students in Canada
Authors: Edward Shizha, Edward Makwarimba
Abstract:
Newcomer and refugee youth feel excluded in the education system in Canada, and the formal education environment does not fully cater for their learning needs. The objective of this study was to build knowledge and understanding of the educational needs and experiences of these youth in Canada and how available afterschool programs can most effectively support their learning needs and academic outcomes. The Employment and Social Development Canada (ESDC), which funded this research, enables and empowers students to advance their educational experience through targeted investments in services that are delivered by youth-serving organizations outside the formal education system through afterschool initiatives. A literature review and a provincial/territorial internet scan were conducted to determine the availability of services and programs that serve the educational needs and academic outcomes of newcomer youth in 10 provinces and 3 territories in Canada. The goal was to identify intersectional factors (e.g., gender, sexuality, culture, social class, race, etc.) that influence educational outcomes of newcomer/refugee students and to recommend ways the ESDC could complement settlement services to enhance students’ educational success. First, data was collected through a literature search of various databases, including PubMed, Web of Science, Scopus, Google docs, ACADEMIA, and grey literature, including government documents, to inform our analysis. Second, a provincial/territorial internet scan was conducted using a template that was created by ESDC staff with the input of the researchers. The objective of the web-search scan was to identify afterschool programs, projects, and initiatives offered to newcomer/refugee youth by service provider organizations. The method for the scan included both qualitative and quantitative data gathering. Both the literature review and the provincial/territorial scan revealed that there are gender disparities in educational outcomes of newcomer and refugee youth. High school completion rates by gender show that boys are at higher risk of not graduating than girls and that girls are more likely than boys to have at least a high school diploma and more likely to proceed to postsecondary education. Findings from literature reveal that afterschool programs are required for refugee youth who experience mental health challenges and miss out on significant periods of schooling, which affect attendance, participation, and graduation from high school. However, some refugee youth use their resilience and ambition to succeed in their educational outcomes. Another finding showed that some immigrant/refugee students, through ethnic organizations and familial affiliation, maintain aspects of their cultural values, parental expectations and ambitious expectations for their own careers to succeed in both high school and postsecondary education. The study found a significant combination of afterschool programs that include academic support, scholarships, bursaries, homework support, career readiness, internships, mentorship, tutoring, non-clinical counselling, mental health and social well-being support, language skills, volunteering opportunities, community connections, peer networking, culturally relevant services etc. These programs assist newcomer youth to develop self-confidence and prepare for academic success and future career development. 
The study concluded that advantages of afterschool programs are greatest for youth at risk for poor educational outcomes, such as Latino and Black youth, including 2SLGBTQI+ immigrant youth.Keywords: afterschool programs, educational outcomes, newcomer youth, refugee youth, youth-serving organizations
Procedia PDF Downloads 74137 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified using real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. This network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector of 24 elements containing the predicted PM10 concentrations for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. What is more, the use of neural networks increased the Pearson correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
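A minimal sketch of a network in the spirit described: the input tensor holds 24 hours of sensor channels (PM2.5, PM10, temperature, wind, plus external forecasts) and the output is a 24-element vector of hourly PM10 predictions, trained with the MSE criterion. The layer sizes, channel count, and architecture details are illustrative assumptions, not the authors' selected model.

```python
import torch
import torch.nn as nn

class PM10Net(nn.Module):
    def __init__(self, in_channels: int = 6, horizon: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                       # 24 -> 12 time steps
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                       # 12 -> 6 time steps
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6, 128), nn.ReLU(),
            nn.Linear(128, horizon),               # 24 hourly PM10 values
        )

    def forward(self, x):                          # x: (batch, channels, 24)
        return self.head(self.features(x))

model = PM10Net()
x = torch.randn(8, 6, 24)                          # a batch of 8 stations/windows
loss = nn.MSELoss()(model(x), torch.randn(8, 24))  # MSE criterion, as in the paper
loss.backward()
```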
Procedia PDF Downloads 148136 Young People and Their Parents Accessing Their Digital Health Data via a Patient Portal: The Ethical and Legal Implications
Authors: Pippa Sipanoun, Jo Wray, Kate Oulton, Faith Gibson
Abstract:
Background: With rapidly evolving digital health innovation, there is a need for digital health transformation that is accessible and sustainable, that demonstrates utility for all stakeholders while maintaining data safety. Great Ormond Street Hospital for Children aimed to future-proof the hospital by transitioning to an electronic patient record (EPR) system with a tethered patient portal (MyGOSH) in April 2019. MyGOSH patient portal enables patients 12 years or older (with their parent's consent) to access their digital health data. This includes access to results, documentation, and appointments that facilitate communication with their care team. As part of the Going Digital Study conducted between 2018-2021, data were collected from a sample of all relevant stakeholders before and after EPR and MyGOSH implementation. Data collection reach was wide and included the hospital legal and ethics teams. Aims: This study aims to understand the ethical and legal implications of young people and their parents accessing their digital health data. Methods: A focus group was conducted. Recruited participants were members of the Great Ormond Street Hospital Paediatric Bioethics Centre. Participants included expert and lay members from the Committee from a variety of professional or academic disciplines. Written informed consent was provided by all participants (n=7). The focus group was recorded, transcribed verbatim, and analyzed using thematic analysis. Results: Six themes were identified: access, competence and capacity - granting access to the system; inequalities in access resulting in inequities; burden, uncertainty and responding to change - managing expectations; documenting, risks and data safety; engagement, empowerment and understanding – how to use and manage personal information; legal considerations and obligations. Discussion: If healthcare professionals are to empower young people to be more engaged in their care, the importance of including them in decisions about their health is paramount, especially when they are approaching the age of becoming the consenter for treatment. Complexities exist in assessing competence or capacity when granting system access, when disclosing sensitive information, and maintaining confidentiality. Difficulties are also present in managing clinician burden, managing user expectations whilst providing an equitable service, and data management that meets professional and legal requirements. Conclusion: EPR and tethered-portal implementation at Great Ormond Street Hospital for Children was not only timely, due to the need for a rapid transition to remote consultations during the COVID-19 pandemic, which would not have been possible had EPR/MyGOSH not been implemented, but also integral to the digital health revolution required in healthcare today. This study is highly relevant in understanding the complexities around young people and their parents accessing their digital health data and, although the focus of this research related to portal use and access, the findings translate to young people in the wider digital health context. Ongoing support is required for all relevant stakeholders following MyGOSH patient portal implementation to navigate the ethical and legal complexities. Continued commitment is needed to balance the benefits and burdens, promote inclusion and equity, and ensure portal utility for patient benefit, whilst maintaining an individualized approach to care.Keywords: patient portal, young people and their parents, ethical, legal
Procedia PDF Downloads 114