Search results for: large PV power plant
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15296


296 Review of Health Disparities in Migrants Attending the Emergency Department with Acute Mental Health Presentations

Authors: Jacqueline Eleonora Ek, Michael Spiteri, Chris Giordimaina, Pierre Agius

Abstract:

Background: Malta is known as a key frontline country with regard to irregular immigration from Africa to Europe. Every year the island experiences an influx of migrants as boat movement across the Mediterranean continues to be a humanitarian challenge. Irregular immigration and applying for asylum are both lengthy and mentally demanding processes. Those doing so are often faced with multiple challenges, which can adversely affect their mental health. Between January and August 2020, Malta disembarked 2,162 people rescued at sea, 463 of them between July and August. Given the small size of the Maltese islands, this influx places a disproportionately large burden on the country, creating a backlog in the processing of asylum applications and resulting in longer periods of detention. These delays reverberate throughout multiple management pathways, resulting in prolonged periods of detention and challenging access to health services. Objectives: To better understand the dimensions of this humanitarian crisis, this study aims to assess disparities in the acute medical management of migrants presenting to the emergency department (ED) with acute mental health presentations as compared to that of local and non-local residents. Method: In this retrospective study, 17,795 consecutive ED attendances were reviewed to identify acute mental health presentations. These were further evaluated to assess discrepancies in transportation routes to hospital, nature of presenting complaint, effects of language barriers, use of CT brain, treatment given at the ED, availability of psychiatric reviews, and final admission/discharge plans. Results: Of the ED attendances, 92.3% were local residents and 7.7% were non-locals. Of the non-locals, 13.8% were migrants and 86.2% were other non-locals. Acute mental health presentations were seen in 1% of local residents; this increased to 20.6% in migrants.
56.4% of migrants attended with deliberate self-harm; this was lower in local residents at 28.9%. In contrast, the most common presenting complaint in local residents was suicidal thoughts/low mood (37.3%); the incidence in migrants was similar at 33.3%. The main differences were that 12.8% of migrants presented with refused oral intake, while only 0.6% of local residents presented with the same complaint, and 7.7% of migrants presented with a reduced level of consciousness, an issue seen in no local residents. Physicians documented a language barrier in 74.4% of migrants, and 25.6% were noted to be completely uncommunicative. Further investigations included the use of a CT scan in 12% of local residents and in 35.9% of migrants. The most common treatment administered to migrants was supportive fluids (15.4%); the most common in local residents was benzodiazepines (15.1%). Voluntary psychiatric admissions were seen in 33.3% of migrants and 24.7% of locals; involuntary admissions were seen in 23% of migrants and 13.3% of locals. Conclusion: The results showed multiple disparities in health management. A meeting was held between the entities responsible for migrant health in Malta, including the emergency department, primary health care, migrant detention services, and the Malta Red Cross. National quality-improvement initiatives are currently underway to form new pathways to improve patient-centered care. These include an interpreter unit, centralized handover sheets, and a dedicated migrant health service.

Keywords: emergency department, communication, health, migration

Procedia PDF Downloads 114
295 Absenteeism in Polytechnical University Studies: Quantification and Identification of the Causes at Universitat Politècnica de Catalunya

Authors: E. Mas de les Valls, M. Castells-Sanabra, R. Capdevila, N. Pla, Rosa M. Fernandez-Canti, V. de Medina, A. Mujal, C. Barahona, E. Velo, M. Vigo, M. A. Santos, T. Soto

Abstract:

Absenteeism in universities, including polytechnical universities, is influenced by a variety of factors. Some factors overlap with those causing absenteeism in schools, while others are specific to the university and work-related environments. Indeed, these factors may stem from various sources, including students, educators, the institution itself, or even the alignment of degree curricula with professional requirements. In Spain, there has been an increase in absenteeism in polytechnical university studies, especially after the Covid crisis, posing a significant challenge for institutions to address. This study focuses on Universitat Politècnica de Catalunya · BarcelonaTech (UPC) and aims to quantify the current level of absenteeism and identify its main causes. The study is part of the teaching innovation project ASAP-UPC, which aims to minimize absenteeism through the redesign of teaching methodologies. By understanding the factors contributing to absenteeism, the study seeks to inform the subsequent phases of the ASAP-UPC project, which involve implementing methodologies to minimize absenteeism and evaluating their effectiveness. The study utilizes surveys conducted among students and polytechnical companies. Students' perspectives are gathered through both online surveys and in-person interviews. The surveys inquire about students' interest in attending classes, skill development throughout their UPC experience, and their perception of the skills required for a career in a polytechnical field. Additionally, polytechnical companies are surveyed regarding the skills they seek in prospective employees. The collected data is then analyzed to identify patterns and trends. This analysis involves organizing and categorizing the data, identifying common themes, and drawing conclusions based on the findings. This mixed-method approach has revealed that higher levels of absenteeism are observed in large student groups at both the Bachelor's and Master's degree levels.
However, the main causes of absenteeism differ between these two levels. At the Bachelor's level, many students express dissatisfaction with in-person classes, perceiving them as overly theoretical and lacking a balance between theory, experimental practice, and problem-solving components. They also find a lack of relevance to professional needs. Consequently, they resort to using online available materials developed during the Covid crisis and attending private academies for exam preparation instead. On the other hand, at the Master's level, absenteeism primarily arises from schedule incompatibility between university and professional work. There is a discrepancy between the skills highly valued by companies and the skills emphasized during the studies, aligning partially with students' perceptions. These findings are of theoretical importance as they shed light on areas that can be improved to offer a more beneficial educational experience to students at UPC. The study also has potential applicability to other polytechnic universities, allowing them to adapt the surveys and apply the findings to their specific contexts. By addressing the identified causes of absenteeism, universities can enhance the educational experience and better prepare students for successful careers in polytechnical fields.

Keywords: absenteeism, polytechnical studies, professional skills, university challenges

Procedia PDF Downloads 68
294 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. This approach utilizes IoT devices equipped with infrared cameras to collect thermal images and household EV charging profiles from the database of the Thai utility, subsequently transmitting this data to a cloud database for comprehensive analysis. The methodology includes the use of advanced deep learning techniques such as Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) algorithms, as well as feature-based Gaussian mixture models for EV load profiling and event detection. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. The system also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities. The system ensures operational safety and efficiency while also promoting sustainable energy management.
The collected data is analyzed using RNN and LSTM networks, while a feature-based Gaussian mixture model supports both EV load profiling and event detection. This comprehensive method aids in identifying unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy. The developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
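The core idea behind profile-based anomaly detection in charging behavior can be illustrated with a minimal sketch. This is not the authors' RNN/LSTM system; it is a simplified statistical stand-in, with hypothetical function names, that flags charging readings deviating strongly from a per-hour historical baseline:

```python
from statistics import mean, stdev

def build_profile(sessions):
    """Per-hour baseline (mean, std) of charging power from
    historical readings: {hour_of_day: [kW readings]}."""
    return {h: (mean(v), stdev(v)) for h, v in sessions.items() if len(v) >= 2}

def anomaly_score(profile, hour, kw):
    """Z-score of a new reading against that hour's baseline."""
    mu, sigma = profile[hour]
    return abs(kw - mu) / sigma if sigma > 0 else 0.0

def is_anomalous(profile, hour, kw, threshold=3.0):
    """Flag readings more than `threshold` standard deviations off baseline."""
    return anomaly_score(profile, hour, kw) > threshold

# Example: a household that normally charges overnight at ~7 kW
history = {22: [7.1, 6.9, 7.0, 7.2], 23: [7.0, 7.1, 6.8, 7.0]}
profile = build_profile(history)
```

A production system such as the one described would replace the per-hour Gaussian baseline with learned temporal models (LSTM) and mixture components, but the thresholded deviation score is the same underlying mechanism.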

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 64
293 Molecular Characterization of Chicken B Cell Marker (ChB6) in Native Chicken of Poonch Region from International Borders of India and Pakistan

Authors: Mandeep Singh Azad, Dibyendu Chakraborty, Vikas Vohra

Abstract:

Introduction: Poonch is one of the remotest districts of Jammu and Kashmir (UT) and is situated on the international border. The native poultry population in this area is quite hardy and thrives well in adverse climatic conditions. To date, no local breed from this area (Jammu Province) has been characterized; thus, the present study was undertaken with the main objective of molecular characterization of the ChB6 gene in the local native chicken of the Poonch region, located on the international border between India and Pakistan. The chicken B-cell marker (ChB6) gene has been proposed as a candidate gene in regulating B-cell development. Material and Method: RNA was isolated from whole blood samples by a Blood RNA Purification Kit (HiPura) and the Trizol method. Positive PCR products of size 1110 bp were selected for further purification, sequencing, and analysis. The amplified PCR product was sequenced by Sanger's dideoxy chain termination method. The obtained sequences of the ChB6 gene of Poonchi chicken were compared using MEGA X software. BioEdit software was used to construct the phylogenetic tree, and the Neighbor-Joining method was used to infer evolutionary history. The Maximum Composite Likelihood method was used to compute evolutionary distances. Results: The positively amplified samples of the ChB6 gene were subjected to Sanger sequencing with "primer walking". The sequences were then analyzed using MEGA X and BioEdit software. The sequence results were compared with other reported sequences from different breeds of chicken and with other species obtained from the NCBI (National Center for Biotechnology Information). The ClustalW method in MEGA X software was used for multiple sequence alignment. The sequence results of the ChB6 gene of Poonchi chicken were compared with Centrocercus urophasianus, G. gallus mRNA for B6.1 protein, G. gallus mRNA for B6.2, G. gallus mRNA for B6.3, Gallus gallus B6.1, Halichoeres bivittatus, Miniopterus fuliginosus, Ferringtonia patagonica, and Tympanuchus phasianellus.
The genetic distances of the ChB6 gene of the Poonchi chicken sequence to the other sequences in the present study were 0.2720, 0.0000, 0.0245, 0.0212, 0.0147, 1.6461, 2.2394, 2.0070, and 0.2363, respectively. Sequencing results showed variations between different species. It was observed that the AT content was higher than the GC content for the ChB6 gene; the lower GC content suggests lower thermostability. No sequence difference was observed within the Poonchi population for the ChB6 gene; this high homology within the chicken population indicates the conservation of the ChB6 gene. The maximum difference was observed with Miniopterus fuliginosus (Eastern bent-wing bat), followed by Ferringtonia patagonica and Halichoeres bivittatus. Conclusion: Genetic variation is an essential component of genetic improvement. The results for the immune-related gene ChB6 show between-population genetic variability. Therefore, further association studies of this gene with prevalent diseases in a large population would help identify disease-resistant/susceptible genotypes in the indigenous chicken population.
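The AT/GC comparison above is a simple base-count calculation; the following illustrative sketch shows it on a hypothetical fragment (not the actual ChB6 sequence). A higher GC fraction generally implies higher thermostability, since G-C pairs form three hydrogen bonds versus two for A-T:

```python
def base_content(seq):
    """Return (AT_fraction, GC_fraction) of a DNA sequence string."""
    seq = seq.upper()
    n = len(seq)
    at = sum(seq.count(b) for b in "AT") / n
    gc = sum(seq.count(b) for b in "GC") / n
    return at, gc

# Hypothetical fragment with AT content exceeding GC content,
# as reported for ChB6 in the study
at_frac, gc_frac = base_content("ATATGC")
```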

Keywords: ChB6, sequencing, ClustalW, genetic distance, poonchi chicken, SNP

Procedia PDF Downloads 70
292 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real-time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for this field, in real-time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant ones. The correlation output is interpreted as a real-time probability curve of stuck-pipe incidents.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck-pipe problem requires a method to capture geological, geophysical, and drilling data and recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
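The feature-selection step described above can be approximated with a simplified greedy sketch. The actual CFS algorithm optimizes a merit function over feature subsets; this stand-in keeps only its two core ideas, ranking features by correlation with the class and dropping features redundant with ones already chosen. The feature names below are hypothetical, not the paper's actual drilling channels:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def select_features(features, target, redundancy_cap=0.9):
    """Greedy CFS-style selection: rank by |correlation with the class|,
    skip any feature too correlated with one already chosen."""
    ranked = sorted(features, key=lambda f: -abs(pearson(features[f], target)))
    chosen = []
    for f in ranked:
        if all(abs(pearson(features[f], features[g])) < redundancy_cap
               for g in chosen):
            chosen.append(f)
    return chosen

# Hypothetical surface channels vs. a stuck-pipe risk label
features = {"hookload": [1, 2, 3, 4, 5],
            "hookload_copy": [2, 4, 6, 8, 10],  # redundant with hookload
            "noise": [1, -1, 2, -2, 0]}
stuck_label = [1, 2, 3, 4, 5]
picked = select_features(features, stuck_label)
```

The redundant channel is dropped even though it correlates perfectly with the class, which is exactly the behavior that keeps the signature compact.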

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 226
291 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience

Authors: Eugenia Vomvoridi-Ivanovic

Abstract:

It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures.
Consistent with exploratory case study methodology, qualitative methods were employed to investigate how the approach to pedagogy of a cohort of twelve participating pre-service teachers, and their conversations around teaching and learning mathematics to CDS, evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field-notes, (c) audio-recordings of weekly methods course meetings, and (d) other document data (e.g., PST and student generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefitted greatly through their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and helped the PSTs gain a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular. Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable context for pre-service teachers' learning to teach mathematics to CDS.

Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students

Procedia PDF Downloads 130
290 Antibiotic Prophylaxis Habits in Oral Implant Surgery in the Netherlands: A Cross-Sectional Survey

Authors: Fabio Rodriguez Sanchez, Josef Bruers, Iciar Arteagoitia, Carlos Rodriguez Andres

Abstract:

Background: Oral implants are a routine treatment to replace lost teeth. Although they have a high rate of success, implant failures do occur. Perioperative antibiotics have been suggested to prevent postoperative infections and dental implant failures, but they remain a controversial treatment among healthy patients. The objective of this study was to determine whether antibiotic prophylaxis is a common treatment in the Netherlands among general dentists, maxillofacial surgeons, periodontists, and implantologists in conjunction with oral implant surgery among healthy patients and to assess the nature of antibiotic prescriptions in order to evaluate whether any consensus has been reached and the current recommendations are being followed. Methodology: Observational cross-sectional study based on a web survey reported according to the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guidelines. A validated questionnaire, developed by Deeb et al. (2015), was translated and slightly adjusted to circumstances in the Netherlands. It was used with the explicit permission of the authors. This questionnaire contained both close-ended and some open-ended questions in relation to the following topics: demographics, qualification, antibiotic type, prescription duration, and dosage. An email was sent in February 2018 to a sample of 600 general dentists and all 302 oral implantologists, periodontists, and maxillofacial surgeons who were recognized by the Dutch Association of Oral Implantology (NVOI) as oral health care providers placing oral implants. The email included a brief introduction about the study objectives and a link to the web questionnaire, which could be filled in anonymously. Overall, 902 questionnaires were sent. However, 29 questionnaires were not correctly received due to incorrect email addresses, so a total of 873 professionals were reached. Collected data were analyzed using SPSS (IBM Corp., released 2012, Armonk, NY).
Results: The questionnaire was sent back by a total of 218 participants (response rate = 24.2%), 45 female (20.8%) and 171 male (79.2%). Two respondents were excluded from the study group because they were not currently working as oral health providers. Overall, 151 (69.9%) placed oral implants on a regular basis. Of these participants, 79 (52.7%) prescribed antibiotics only in determined situations, 66 (44.0%) always prescribed antibiotics, and 5 dentists (3.3%) did not prescribe antibiotics at all when placing oral implants. Of the participants who prescribed antibiotics, 83 did so both pre- and postoperatively (58.5%), 12 exclusively postoperatively (8.5%), and 47 followed an exclusively preoperative regimen (33.1%). A single dose of 2,000 mg amoxicillin orally 1 hour prior to treatment was the most prescribed preoperative regimen. The most frequently prescribed postoperative regimen was 500 mg amoxicillin three times daily for 7 days after surgery. On average, oral health professionals prescribed 6,923 mg of antibiotics in conjunction with oral implant surgery, varying from 500 to 14,600 mg. Conclusions: Antibiotic prophylaxis in conjunction with oral implant surgery is prescribed in the Netherlands on a rather large scale. Dutch professionals might prescribe antibiotics more cautiously than those in other countries, and there seems to be a narrower range of antibiotic types and regimens being prescribed. Nevertheless, recommendations based on the latest published evidence are frequently not followed.
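The per-regimen totals quoted above follow from simple dose arithmetic; a minimal sketch (the helper name is hypothetical, and the combined figure is an illustration of the two most common regimens, not a value reported by the study):

```python
def regimen_total_mg(dose_mg, doses_per_day=1, days=1):
    """Total antibiotic load of a regimen in milligrams."""
    return dose_mg * doses_per_day * days

preop = regimen_total_mg(2000)                           # single 2,000 mg dose
postop = regimen_total_mg(500, doses_per_day=3, days=7)  # 500 mg x 3/day x 7 days
combined = preop + postop                                # both regimens together
```

The combined total of the two most common regimens falls within the 500-14,600 mg range reported across respondents.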

Keywords: clinical decision making, infection control, antibiotic prophylaxis, dental implants

Procedia PDF Downloads 141
289 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from the Copernicus satellite and drone/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA is operational using the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. 
Although the service has three main sub-services, (i) coastline detection, (ii) inland water detection, and (iii) water leak detection in irrigation networks, the present study shows an application of the service to the Óbidos lagoon in Portugal, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline with the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful for several intervention areas, such as (i) emergency response, by providing fast access to inundated areas to support emergency rescue operations; (ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; (iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and (iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
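The water-index computation underlying such services can be illustrated for the simplest of the listed indexes, NDWI, which contrasts the green and near-infrared bands (water reflects green light and absorbs NIR, so water pixels give positive values). This is a generic sketch of the index definition, not WORSICA's implementation:

```python
def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR); positive values
    typically indicate water."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

def water_mask(green_band, nir_band, threshold=0.0):
    """Boolean water mask for two equally shaped 2-D reflectance bands."""
    return [[ndwi(g, n) > threshold for g, n in zip(gr, nr)]
            for gr, nr in zip(green_band, nir_band)]
```

On Sentinel-2 imagery the green and NIR inputs would come from bands B3 and B8; MNDWI and the AWEI variants swap in short-wave-infrared bands to suppress built-up-area noise.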

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 125
288 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of those indices across an area of interest. Vulnerability studies have a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped can easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting causes of vulnerability in space. So, there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach which, compared with bio-physical or sectoral vulnerability approaches, is relatively well developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area.
The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces a set of interrelated variables to a smaller number of weakly correlated components that are likewise added to form a composite index. We test these two approaches to index construction, as well as two different metrics for the input variables, on New York City and compare the outcomes for its five boroughs. Our analysis shows that the approaches yield particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the densely populated areas of Manhattan, central Brooklyn, central Queens, and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of recent planning practice in NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
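The additive approach described above can be sketched in a few lines. The indicator values and weights below are hypothetical, and the PCA variant would simply replace the normalized indicator columns with principal-component scores before the weighted sum:

```python
def minmax(values):
    """Min-max normalize a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def additive_index(indicators, weights):
    """Composite vulnerability per unit area: weighted sum of
    min-max-normalized indicators (each inner list = one indicator,
    one value per area)."""
    normed = [minmax(col) for col in indicators]
    n_areas = len(indicators[0])
    return [sum(w * normed[j][i] for j, w in enumerate(weights))
            for i in range(n_areas)]

# Three hypothetical areas, two indicators (e.g. % poverty, % elderly)
poverty = [10, 30, 20]
elderly = [5, 15, 25]
index = additive_index([poverty, elderly], [0.5, 0.5])
```

Note how areas 2 and 3 tie on the composite score despite different underlying causes (poverty-driven vs. age-driven), illustrating the levelling-out problem the abstract raises about composite indices.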

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 395
287 Modern Hybrid of Older Black Female Stereotypes in Hollywood Film

Authors: Frederick W. Gooding, Jr., Mark Beeman

Abstract:

Nearly a century ago, the groundbreaking 1915 film ‘The Birth of a Nation’ popularized the way Hollywood made movies with its avant-garde, feature-length style. The movie’s subjugating and demeaning depictions of African American women (and men) reflected popular racist beliefs held during the time of slavery and the early Jim Crow era. Although much has changed in American race relations over the past century, sociologist Patricia Hill Collins theorizes that the disparaging images of African American women originating in the era of plantation slavery are adaptable and endure as controlling images today. In this context, a comparative analysis of the successful contemporary film ‘Bringing Down the House’ (2003), starring Queen Latifah, is relevant, as the film was designed purposely to defy and ridicule classic stereotypes of African American women. Yet the film remains tied to the controlling images of the past, albeit in a modern hybrid form. Scholars of race and film have noted that the pervasive imagery of the African American woman as the loyal mammy stereotype faded from the screen in the post-civil rights era in favor of more sexualized characters (i.e., the Jezebel trope). Analyzing scenes and dialogue through the lens of sociological and critical race theory, the troubling persistence of African American controlling images stubbornly emerges in a movie like ‘Bringing Down the House.’ These controlling images, like racism itself, can thus adapt to new social and economic conditions. Although the classic controlling images appeared a century ago in the first feature-length film focusing on race relations, ‘The Birth of a Nation,’ this black-and-white rendition of the mammy figure was later updated, in living color, in the 1939 classic ‘Gone with the Wind.’ 
These popular controlling images have loomed large in the minds of international audiences: ‘Gone with the Wind’ is still shown in American theaters today, and in 2004 experts at the British Film Institute rated it the number one movie of all time in UK movie history based on the total number of actual viewings. Critical analysis of character patterns demonstrates that images that appear superficially benign contribute, in the aggregate, to a broader and quite persistent pattern of marginalization. This approach allows experts and viewers alike to detect the more subtle and sophisticated strands of racial discrimination that are ‘hidden in plain sight,’ despite numerous changes in a Hollywood industry that appears more voluminous and diverse than it was three or four decades ago. Rather than being subjected to obvious and offensive racist tropes, non-white or minority characters in mainstream movies are now more likely to be subtly compromised or marginalized relative to white characters. The hybrid of the older Jezebel and mammy stereotypes exhibited by lead actress Queen Latifah in ‘Bringing Down the House’ represents a more suave and sophisticated merging of imagery deemed problematic in the past as well as the present.

Keywords: African Americans, Hollywood film, hybrid, stereotypes

Procedia PDF Downloads 177
286 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos

Authors: Thilini M. Yatanwala

Abstract:

CCTV (closed-circuit television) surveillance systems have been used in public places for decades, and a large volume of data is produced every moment. However, most CCTV data is stored in isolation, without integration, so identifying the behavior of suspicious people along with their location has become strenuous. This research was conducted to extract more accurate, reliable, and timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network that tracks people on the premises. The research was conducted in three stages. In the first stage, human objects were filtered from other movable objects present in public places. In the second stage, people were uniquely identified based on their outfit colors, and in the third stage an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier trained to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the eye regions, the nose region, and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper-left, upper-right, lower-left, and lower-right quadrants. The mean color, mode color, and standard deviation of each area were extracted as crucial factors for uniquely identifying a human object using a histogram-based approach. The color measurements were written into XML files, and separate directories were maintained to store the XML files from each camera, indexed by timestamp. 
As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network that continuously tracks individuals across a network of cameras. Real-time analysis of the XML files generated at each camera can determine an individual’s path and so reconstruct the full activity sequence. Higher efficiency was achieved by exchanging acknowledgements only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a long period or disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people and achieved an accuracy of 82%. However, the approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. The approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. The study of human identification and suspicious-incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
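As a rough illustration of the second-stage feature extraction, the sketch below splits a detected body region into the four quadrants named above and computes the per-channel mean, mode, and standard deviation. The 4x4 pixel "crop" and helper names are invented; a real pipeline would operate on frames from the camera network (e.g., using an OpenCV Haar cascade for the face detection stage).

```python
# Sketch: quadrant-wise outfit-colour features (mean / mode / stdev
# per RGB channel), as described in the abstract. Toy data only.
from statistics import mean, mode, pstdev

def quadrants(pixels):
    """Split a square grid of (R, G, B) pixels into four quadrants."""
    h = len(pixels) // 2
    return {
        "upper_left":  [px for row in pixels[:h] for px in row[:h]],
        "upper_right": [px for row in pixels[:h] for px in row[h:]],
        "lower_left":  [px for row in pixels[h:] for px in row[:h]],
        "lower_right": [px for row in pixels[h:] for px in row[h:]],
    }

def colour_features(region):
    """Per-channel mean / mode / standard deviation for one quadrant."""
    feats = {}
    for i, ch in enumerate(("R", "G", "B")):
        vals = [px[i] for px in region]
        feats[ch] = {"mean": mean(vals), "mode": mode(vals),
                     "std": pstdev(vals)}
    return feats

# toy 4x4 body crop: red shirt on top, blue trousers below
red, blue = (200, 30, 30), (20, 40, 180)
body = [[red] * 4] * 2 + [[blue] * 4] * 2
features = {name: colour_features(reg)
            for name, reg in quadrants(body).items()}
print(features["upper_left"]["R"]["mean"])   # 200
print(features["lower_right"]["B"]["mean"])  # 180
```

In the described system, a feature dictionary like this would be serialized to XML per camera and timestamp, then matched across cameras to re-identify the same person.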

Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection

Procedia PDF Downloads 181
285 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications

Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel

Abstract:

The world needs efficient energy conversion technologies that are affordable, accessible, sustainable, and eco-friendly. Solar energy is one of the cornerstones of the world’s economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems, and heat engines for electricity generation, which leads to higher cost and complexity than photovoltaics; however, their compatibility with dispatchable thermal energy storage makes them tremendously attractive. Moreover, employing a cost-effective solar-selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, the development of solar receiver tubes by low-cost methods that offer high optical performance and corrosion resistance in an open-air atmosphere would be beneficial for low- and medium-temperature applications. In this regard, our work opens up an approach with the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet-chemical route combining chemical oxidation, sol-gel, and nanoparticle coating methods. The developed tandem absorber coating has a gradient-refractive-index structure on stainless steel (SS 304) and exhibits high optical performance (α ≥ 0.95 and ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of the SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was then applied to the chemically oxidized substrate by sol-gel dip coating to serve as an optical-enhancement and corrosion-resistant layer. 
Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve > 95% absorption. The developed tandem layer exhibited good thermal stability up to 250 °C in open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test, ASTM B117). After the successful development of a coating with the targeted properties at laboratory scale, a 1 m prototype tube was demonstrated with excellent uniformity and reproducibility. Moreover, it was validated under standard laboratory test conditions as well as in the field, in comparison with a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective medium-temperature receiver tube technology has attracted many industries, and the technology has recently been transferred to Indian industry.
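For context, the benefit of pairing high absorptance with low emittance can be quantified with a standard simplified absorber-efficiency estimate, η = α − εσT⁴/(C·I). The operating temperature, concentration ratio, and irradiance below are assumed for illustration and are not values from the study.

```python
# Simplified photothermal efficiency of a selective absorber:
# eta = alpha - epsilon * sigma * T^4 / (C * I)
# Assumed operating point: T ~ 250 C, concentration C = 30, DNI = 1000 W/m^2.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def absorber_efficiency(alpha, epsilon, T=523.15, C=30, I=1000):
    """alpha: solar absorptance; epsilon: thermal emittance;
    T: absorber temperature (K); C: concentration ratio; I: DNI (W/m^2)."""
    return alpha - epsilon * SIGMA * T**4 / (C * I)

# coating from the abstract (alpha >= 0.95, epsilon <= 0.14)
print(round(absorber_efficiency(0.95, 0.14), 3))
```

Under these assumed conditions the radiative loss term is only a few percent, which is why suppressing emittance matters most at higher temperatures or lower concentration.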

Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index

Procedia PDF Downloads 89
284 Impact of School Environment on Socio-Affective Development: A Quasi-Experimental Longitudinal Study of Urban and Suburban Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Richard B. Speaker, Pat Austin

Abstract:

This study used two psychological scales to examine the levels of social and emotional intelligence and moral judgment of over 500 gifted and talented high school students in various academic and creative arts programs in a large metropolitan area in the southeastern United States. For decades, numerous models and programs purporting to encourage the socio-affective characteristics of adolescent development have been explored in curriculum theory and design. The socio-affective domain merges the social, emotional, and moral domains. It encompasses interpersonal relations and social behaviors; the development and regulation of emotions; personal and gender identity construction; empathy development; and moral development, thinking, and judgment. Examining development in these socio-affective domains can provide insight into why some gifted and talented adolescents are not successful in adulthood despite advanced IQ scores: in particular, whether the non-intellectual characteristics of gifted and talented individuals, such as their emotional, social, and moral capabilities, are as advanced as their intellectual abilities, and how these are related to each other. Unique characteristics distinguish gifted and talented individuals; these may appear as strengths, but problems can also accompany them. Although many thrive in their school environments, some gifted students struggle rather than flourish. In the socio-affective domain, these adolescents face particular intrapersonal, interpersonal, and environmental problems. Gifted individuals’ cognitive, psychological, and emotional development occurs asynchronously, in multidimensional layers, at different rates, and unevenly across ability levels. It is therefore important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of gifted and talented adolescents. 
This quasi-experimental longitudinal study examined students in several gifted and talented education programs (a creative arts school, urban charter schools, and suburban public schools) to determine (1) their level of socio-affective development and (2) whether a particular gifted and talented program encourages developmental growth. The following research questions guided the study: (1) How do academically and artistically talented gifted 10th and 11th grade students perform on psychometric scales of social and emotional intelligence and moral judgment? Do they differ from their age or grade normative sample? Are there gender differences among gifted students? (2) Does school environment affect the socio-affective development of 10th and 11th grade gifted and talented students? Do gifted adolescents who participate in a particular school gifted program differ in their developmental profiles of social and emotional intelligence and moral judgment? Students’ performances on the psychometric instruments were compared over time and by type of program. Participants took pre-, mid-, and post-tests over the course of an academic year, with the Defining Issues Test (DIT-2) assessing moral judgment and the BarOn EQ-i:YV assessing social and emotional intelligence. Based on these assessments, quantitative differences in growth on the psychological scales were examined at the individual and school levels, and change scores were compared between schools. Where a school showed change, artifacts (culture, curricula, instructional methodology) provided insight into the environmental qualities that produced the difference.

Keywords: gifted and talented education, moral development, socio-affective development, socio-affective education

Procedia PDF Downloads 162
283 From Avatars to Humans: A Hybrid World Theory and Human Computer Interaction Experimentations with Virtual Reality Technologies

Authors: Juan Pablo Bertuzzi, Mauro Chiarella

Abstract:

Employing a communication studies perspective and a socio-technological approach, this paper introduces a theoretical framework for understanding the concept of the hybrid world, the avatarization phenomenon, and the communicational archetype of co-hybridization. The analysis is intended to contribute to the future design of experimental virtual reality applications. The paper also presents an ongoing research project that studies human-avatar interactions in digital educational environments and offers an innovative reflection on inner digital communication. The project analyzes human-avatar interactions through the development of an interactive experience in virtual reality, with the goal of generating an innovative communicational dimension that could reinforce the hypotheses presented throughout this paper. Intended first for educational environments, the analysis and results of the research depend on the meticulous planning of: the conception of a 3D digital platform; the interactive game objects; the AI (computer-controlled) avatars; the representation of humans as hybrid avatars; and, finally, the potential for immersion, ergonomics, and control diversity afforded by the chosen virtual reality system and game engine. The project is divided into two main axes. The first is structural, as it is required for the construction of an original prototype. The 3D model is inspired by the physical space of an academic institution, and the prototype incorporates smart objects, avatars, game mechanics, game objects, and a dialogue system, all with the objective of gamifying the educational environment. To generate continuous participation and a large number of interactions, the digital world will be navigable both on a conventional device and in a virtual reality system. 
This decision was made, practically, to facilitate communication between students and teachers and, strategically, because it will help populate the digital environment faster. The second axis concentrates on content production and subsequent data analysis. The challenge is to offer a diversity of scenarios that compels users to interact and to question their digital embodiment. The multipath narrative content being applied is focused on the subjects covered in this paper. Furthermore, the virtual reality experience asks users to operate in a combination of a seemingly infinite digital world and a small physical area of movement. This combination will drive the narrative content and will be crucial for constraining users’ interactions. The main point is to stimulate, and to grow in the user, the need for the help of their hybrid avatar. By building an inner communication between the user’s physicality and the user’s digital extension, the interactions will serve as a self-guide through the gameworld. This is the first attempt to make the avatarization phenomenon explicit and to further analyze the communicational archetype of co-hybridization. The challenge of the coming years will be to take advantage of these forms of generalized avatarization in order to create awareness and establish innovative forms of hybridization.

Keywords: avatar, hybrid worlds, socio-technology, virtual reality

Procedia PDF Downloads 142
282 A Cluster Randomised Controlled Trial Investigating the Impact of Integrating Mass Drug Administration Treating Soil Transmitted Helminths with Mass Dog Rabies Vaccination in Remote Communities in Tanzania

Authors: Felix Lankester, Alicia Davis, Safari Kinung'hi, Catherine Bunga, Shayo Alkara, Imam Mzimbiri, Jonathan Yoder, Sarah Cleaveland, Guy H. Palmer

Abstract:

Achieving the London Declaration goal of a 90% reduction in neglected tropical diseases (NTDs) by 2030 requires cost-effective strategies that attain high and comprehensive coverage. The first objective of this trial was to assess the impact on cost and coverage of employing a novel integrative One Health approach linking two NTD control programs: mass drug administration (MDA) for soil-transmitted helminths (STH) in humans and mass dog rabies vaccination (MDRV). The second objective was to compare the coverage achieved by the MDA, a community-wide deworming intervention, with that of the existing national primary school-based deworming program (NSDP), with particular focus on the proportion of primary school-age children reached and their school enrolment status. Our approach was unconventional because, in line with the One Health approach to disease control, it coupled the responsibilities and resources of the ministries responsible for human and animal health into one program with the shared aim of preventing multiple NTDs. The trial was carried out in hard-to-reach pastoral communities comprising 24 villages of the Ngorongoro District, Tanzania, randomly allocated to either Arm A (MDA and MDRV), Arm B (MDA only), or Arm C (MDRV only). Objective one: The percentages of people in each target village who received treatment through MDA in Arms A and B were 63% and 65%, respectively (χ² = 1, p = 0.32). The percentages of dogs vaccinated in Arms A and C were 70% and 81%, respectively (χ² = 9, p = 0.003). It took 33% less time for a single person and a dog to attend the integrated delivery than two separate events. Cost per dose (including delivery) was lower under the integrated strategy, with the delivery cost of deworming and rabies vaccination reduced by $0.13 (54%) and $0.85 (19%) per dose, respectively. 
Despite a slight reduction in the proportion of village dogs vaccinated in the integrated event, both the integrated and non-integrated strategies achieved the 70% threshold required to eliminate rabies. Objective two: The percentages of enrolled primary school-age children reached by this trial (73%) and by the existing NSDP (80%) were not significantly different (F = 0.9, p = 0.36). However, of the primary school-age children treated in this trial, 46% were not enrolled in school. Furthermore, 86% of the people treated would have been outside the reach of the NSDP because they were not of primary school age or were primary school-age children not enrolled in school. The comparable reach, the substantial reduction in cost per dose delivered, and the decrease in participants’ time support this integrated One Health approach to controlling multiple NTDs. Further, the recorded level of non-enrolment in primary school suggests that, in remote areas, school-based delivery strategies can miss a large fraction of school-age children, and that programs focusing delivery solely at the primary school level will miss a substantial proportion of both primary school-age children and other individuals in the community. We have shown that these populations can be reached effectively through extramural programs.
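The arm comparisons above rest on simple chi-square tests of two proportions; a minimal sketch of that calculation follows. The per-arm sample sizes are invented (the abstract reports only the percentages and test statistics), so the resulting statistic differs from the one reported.

```python
# Pearson chi-square for comparing two proportions (2x2 table,
# no continuity correction). Counts below are hypothetical,
# chosen only to match the reported 70% vs. 81% coverage.

def chi2_two_proportions(success_a, n_a, success_b, n_b):
    fail_a, fail_b = n_a - success_a, n_b - success_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    chi2 = 0.0
    for obs, n, p in [(success_a, n_a, p_pool), (fail_a, n_a, 1 - p_pool),
                      (success_b, n_b, p_pool), (fail_b, n_b, 1 - p_pool)]:
        exp = n * p
        chi2 += (obs - exp) ** 2 / exp  # sum of (O - E)^2 / E
    return chi2

# hypothetical: 350/500 dogs vaccinated in Arm A vs. 405/500 in Arm C
stat = chi2_two_proportions(350, 500, 405, 500)
print(round(stat, 1))  # compare against the 3.84 critical value (p < 0.05, 1 df)
```

With 1 degree of freedom, any statistic above 3.84 corresponds to p < 0.05, consistent with the reported significant difference in vaccination coverage between arms.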

Keywords: canine mediated human rabies, integrated health interventions, mass drug administration, neglected tropical disease, One Health, soil-transmitted helminths

Procedia PDF Downloads 181
281 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable options for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures gives no clear guidance on photovoltaic systems, especially when the systems are arranged in arrays. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is first mounted on the ground of the wind tunnel and then on the building rooftop. The system consists of 60 panel models; each model is equivalent to a full-scale panel 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressures, and at least 20 samples are recorded for good ensemble-average stability; each sample is equivalent to a 10-minute record at full scale. All the scale factors, including the time scale, length scale, and velocity scale, are properly verified by similarity rules for the low-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. 
The commonly used Cook-and-Mayne non-exceedance probability of 78% is taken as the target for the design pressure coefficients under a Gumbel distribution, and the best linear unbiased estimator (BLUE) method is used for Gumbel parameter identification. Careful moving-average treatment of the pressure time series is also applied in data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop mostly experiences negative pressures; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also show different pressure patterns, corresponding well to the area divisions for design values in ASCE 7-16. Several minor effects are identified through parametric studies, such as the rooftop edge effect, parapet effect, building aspect-ratio effect, and row interval effect. General recommendations are then made for a proposed revision of the Taiwanese code.
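The extreme-value step can be sketched as follows: fit a Gumbel distribution to a set of sampled peak pressure coefficients, then read off the value at the Cook-and-Mayne 78% non-exceedance probability. For brevity this sketch fits the parameters by the method of moments rather than the BLUE method the study uses, and the peak values are invented.

```python
# Sketch: Gumbel-based design pressure coefficient at the
# Cook-and-Mayne 78% non-exceedance probability.
import math
from statistics import mean, pstdev

def gumbel_design_value(peaks, non_exceedance=0.78):
    # method-of-moments Gumbel fit: scale b and location u
    b = pstdev(peaks) * math.sqrt(6) / math.pi
    u = mean(peaks) - 0.5772 * b  # 0.5772 = Euler-Mascheroni constant
    # invert the Gumbel CDF F(x) = exp(-exp(-(x - u) / b))
    return u - b * math.log(-math.log(non_exceedance))

# invented peak suction coefficients from 20 wind-tunnel samples
peaks = [-2.1, -2.4, -1.9, -2.6, -2.2, -2.0, -2.5, -2.3, -1.8, -2.7,
         -2.2, -2.1, -2.4, -2.0, -2.3, -2.6, -1.9, -2.2, -2.5, -2.1]
# for suctions, fit the magnitudes so that "larger" means "more severe"
cp_design = -gumbel_design_value([abs(p) for p in peaks])
print(round(cp_design, 2))
```

The design coefficient lands beyond the sample mean magnitude, reflecting the 78% non-exceedance target on the fitted extreme-value distribution.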

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 138
280 Microplastics in Fish from Grenada, West Indies: Problems and Opportunities

Authors: Michelle E. Taylor, Clare E. Morrall

Abstract:

Microplastics are small particles produced for industrial purposes or formed by the breakdown of anthropogenic debris. Caribbean nations import large quantities of plastic products. The Caribbean region is vulnerable to natural disasters, and climate change is predicted to bring multiple additional challenges to island nations. Microplastics have been found in an array of marine environments and in a diversity of marine species. The occurrence of microplastic in the intestinal tracts of marine fish is a concern for human and ecosystem health, as pollutants and pathogens can associate with plastics. Studies have shown that the incidence of microplastics in marine fish varies with species and location. The prevalence of microplastics (≤ 5 mm) in fish species from Grenadian waters (representing pelagic, semi-pelagic, and demersal lifestyles) harvested for human consumption has been investigated via gut analysis. Harvested tissue was digested in 10% KOH, and particles retained on a 0.177 mm sieve were examined. The microplastics identified were classified by type, colour, and size. Over 97% of fish examined thus far (n = 34) contained microplastics. Current and future work includes examining the invasive lionfish (Pterois spp.) for microplastics, investigating marine invertebrate species, and examining environmental sources of microplastics (i.e., rivers, coastal waters, and sand). Owing to concerns about pollutant accumulation on microplastics and potential migration into organismal tissues, we plan to analyse fish tissue for mercury and other persistent pollutants. Despite having ~110,000 inhabitants, the island nation of Grenada imported approximately 33 million plastic bottles in 2013, of which an estimated less than 5% were recycled. Over 30% of the imported bottles were ‘unmanaged’ and as such are potential litter/marine debris. A revised Litter Abatement Act passed into law in Grenada in 2015, but little enforcement of the law is evident to date. 
A local non-governmental organization (NGO), ‘The Grenada Green Group’ (G3), is focused on reducing litter in Grenada by lobbying the government to implement the revised act and by running sessions in schools, community groups, and on local and social media to raise awareness of the problems associated with plastics. A local private company has indicated willingness to support an anti-litter campaign in 2018, and local awareness of the need to reduce single-use plastics and litter seems to be high. The Government of Grenada has called for a sustainable waste management strategy, and bans on both Styrofoam and plastic grocery bags are among the recommendations recently submitted. A Styrofoam ban will be in place at the St. George’s University campus from January 1st, 2018, and many local businesses have already voluntarily moved away from Styrofoam. Our findings underscore the importance of continuing investigations into microplastics in marine life; this will contribute to understanding the associated health risks. Furthermore, our findings support action to mitigate the volume of plastics entering the world’s oceans. We hope that Grenada’s future will involve a lot less plastic. This research was supported by the Caribbean Node of the Global Partnership on Marine Litter.
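One way to read the headline prevalence figure is with a confidence interval. The sketch below applies the Wilson score interval, a common choice for proportions near 0 or 1, to an assumed count of 33 of 34 fish, which is consistent with "over 97%" but not stated exactly in the abstract.

```python
# Wilson score interval for a binomial proportion; useful for small
# samples with prevalence near 100%, as in the reported gut analyses.
import math

def wilson_interval(successes, n, z=1.96):  # z = 1.96 -> 95% confidence
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(33, 34)  # assumed count consistent with >97% of n=34
print(f"95% CI: {lo:.2f}-{hi:.2f}")
```

Even with near-universal observed prevalence, the lower bound sits well below the point estimate at this sample size, which motivates the continued sampling the abstract describes.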

Keywords: Caribbean, microplastics, pollution, small island developing nation

Procedia PDF Downloads 211
279 Conceptualizing the Moroccan Amazigh

Authors: Sanaa Riaz

Abstract:

The Amazigh (plural Imazighen), ‘the free people,’ often known by the more popular exonym Berber, are spread across several North African countries, with the largest population in Morocco. They have been substantially misunderstood and differentially showcased by entities ranging from Western-school-educated scholars, to human rights, health, and women’s rights organizations, to the state, to the international community. This paper examines these various conceptualizations of the Imazighen. With the rise of the Arab Spring movement to oust monarchical and dictatorial rulers across the Middle East and North Africa, the Moroccan monarchy introduced various reform programs to win public favor. These included social, economic, and educational reforms to incorporate marginalized groups such as the Imazighen. The monarchy has ushered Amazigh representation into public offices and the public landscape through the Amazigh script, even though theirs has been an oral culture. After the Arab Spring, the Justice and Development Party, an Islamist party, took power in Morocco owing to its accessibility to the masses. In September 2021, unlike in Egypt and Tunisia, where military and constitutional means were sought, Morocco removed the party from power through the ballot, a real victory for the neutral monarchy and its representation as a moderate, secular, and liberal force for the nation. As a result, supporting the perpetuation of Amazigh linguistic identity has also become synonymous with making a secular statement as a Muslim. It has led to the telling of Amazigh identity at state museums as representing an indigenous, pure, diverse, culturally rich, and united Morocco. Reform efforts have also prioritized an amiable view of the economic and familial links of Moroccan Jews, a few thousand of whose families still remain in the country, and a showcasing, through museums and cultural centers, of Jewish identity as Moroccan first. 
In that endeavor, it is interesting to note the coverage of Jews as indigenous to Morocco through the embrace of their “folk” cultural and religious practices, practices not continued outside Morocco. In this epistemology, the concept of the Moroccan Jew becomes similar to that of the indigenous Amazigh, both cherished as the oldest peoples of Morocco and symbols of its unity and resilience. In urban discourse, Amazigh identity continues to be part of the deliberations of elites and scholars graduating from French schools on the incorporation of rural and illiterate Morocco into economic and educational advancement. Yet, with the constant influx of migrants from Western Sahara into cities like Fez and Marrakesh, Amazigh has often been used as an umbrella term for those of “mixed” ethnic ancestry who constitute the country’s free population. In sum, Amazigh identity highlights the changing discourse on marginalized communities, human rights, representation, Moroccan nationhood, and regional and transnational politics. The aim of this paper is to analyze perceptions of Amazigh identity in Morocco after the 2021 ousting of the Islamist party, using data from state-sponsored museum displays and cultural centers collected in summer 2022, together with scholarly analyses of Amazigh identity, representation, and rights in Morocco.

Keywords: Amazigh identity, Morocco, representation, state politics

Procedia PDF Downloads 92
278 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems

Authors: Bronwen Wade

Abstract:

Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more "objective" and "scientific" guides for decision-making than subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed, including both academic and grey literature. The review includes quasi-experimental studies and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk Of Bias ASsessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for the development, validation, and re-validation studies; ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) was used for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments. 
This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
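The sensitivity of conclusions to the chosen definition of disproportionality can be made concrete with a toy calculation. The sketch below (with invented screening records and metric choices of our own, not data from the reviewed studies) computes two common operationalizations over the same set of decisions: a selection-rate ratio (disparate impact) and a false-positive-rate gap (error-rate fairness):

```python
# Hypothetical screening records: (group, flagged_by_tool, later_substantiated)
records = [
    ("A", True, True), ("A", True, False), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, True), ("B", True, False), ("B", False, False),
]

def selection_rate(group):
    """Fraction of a group's cases flagged by the risk tool."""
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def false_positive_rate(group):
    """Fraction of a group's unsubstantiated cases that were still flagged."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    return sum(r[1] for r in negatives) / len(negatives)

# Definition 1: disproportionality as a selection-rate ratio between groups.
impact_ratio = selection_rate("B") / selection_rate("A")
# Definition 2: disproportionality as a gap in false-positive rates.
fpr_gap = false_positive_rate("B") - false_positive_rate("A")

print(impact_ratio, fpr_gap)
```

Even on eight records the two definitions quantify different things, which is one reason studies that pick different metrics can reach opposite conclusions about the same tool.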

Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality

Procedia PDF Downloads 53
277 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field with many implementations used for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. 
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows a physiologically plausible picture of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding of motion with high accuracy. Such a setup provides a means for controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
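As a minimal sketch of the evaluation metric alone (synthetic signals, not the study's network or recordings), the following computes the Pearson correlation r between a noisy "decoded" trajectory and a reference accelerometer trace:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)

accel = np.sin(2 * np.pi * 0.5 * t)                  # reference movement trace
decoded = accel + 0.5 * rng.standard_normal(t.size)  # stand-in decoder output

# Decoding accuracy as the Pearson correlation between prediction and reference
r = np.corrcoef(decoded, accel)[0, 1]
print(f"decoding accuracy r = {r:.2f}")
```

The same scalar summary applies regardless of whether the predictions come from a Wiener-filter-like decoder or a deep network, which is what makes the two directly comparable in the abstract's r values.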

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 177
276 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and It’s Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening phenomena of nature is the occurrence of earthquakes, with their terrible and disastrous effects. Many earthquakes occur every day worldwide, and there is a need for knowledge of the trends in earthquake occurrence. The recording and interpretation of data obtained from the worldwide network of seismological stations make this possible. From the analysis of recorded earthquake data, earthquake and source parameters can be computed and earthquake catalogues prepared. These catalogues provide information on origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude and other details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze and display geographic information. Integrated GIS technology permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. It provides a powerful tool for displaying outputs and lets users see the graphical distribution of the impacts of different earthquake scenarios and assumptions. In the present study, an endeavor has been made to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can easily be used for further analysis by earthquake engineers. The basic data on the time of occurrence, location and size of earthquakes have been compiled for querying on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. 
The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance in definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from IS 1893:2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time: analysis of soil parameters and their effect; microzonation information; seismic hazard and strong ground motion; soil liquefaction and its effect on the surrounding area; impacts of liquefaction on buildings and infrastructure; occurrence of future earthquakes and their effect on existing soil; and propagation of earth vibration due to the occurrence of an earthquake. The GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used in real time for the development of infrastructure, i.e., multi-storey structures, irrigation dams and their components, hydro-power plants, etc., at present and in the future.
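A minimal sketch of the kind of catalogue query such an interface supports, filtering recorded events by magnitude, location box and focal depth, might look as follows (the field names and the three sample events, with approximate parameters, are illustrative only, not records from the actual system):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Quake:
    time: datetime
    lat: float        # epicenter latitude, degrees
    lon: float        # epicenter longitude, degrees
    depth_km: float   # focal depth
    magnitude: float

# Illustrative catalogue entries (approximate values for well-known events)
catalogue = [
    Quake(datetime(2001, 1, 26), 23.4, 70.2, 16.0, 7.7),   # Bhuj, India
    Quake(datetime(2004, 12, 26), 3.3, 95.9, 30.0, 9.1),   # Sumatra-Andaman
    Quake(datetime(2011, 3, 11), 38.3, 142.4, 29.0, 9.1),  # Tohoku, Japan
]

def query(cat, min_mag=0.0, bbox=(-90.0, 90.0, -180.0, 180.0), max_depth=700.0):
    """Return events at or above min_mag, inside the (S, N, W, E) bounding box,
    and no deeper than max_depth km."""
    south, north, west, east = bbox
    return [q for q in cat
            if q.magnitude >= min_mag
            and south <= q.lat <= north and west <= q.lon <= east
            and q.depth_km <= max_depth]

great_quakes = query(catalogue, min_mag=9.0)
print(len(great_quakes))
```

Recurrence analysis for a region reduces to running such a query over a bounding box and counting events per magnitude bin.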

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 316
275 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction

Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer

Abstract:

History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of a pandemic and can therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named "surviral" for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach; as our world is getting more and more complex, the complexity of the underlying systems is also increasing, and the development of tools and frameworks, together with increasing computational power, advances the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants, hospitalization capacity utilization, and the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results were used to predict the dynamics and possible outcomes of the pandemic and informed the health authorities' decisions on the measures to be taken to control the situation. The surviral package was implemented in Java, and the analytics were performed with RStudio. 
During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%; the rates were calculated as 10-day forecasts. The state government and the hospitals were provided with these 10-day models to support their decision-making, which ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced: among other things, communities were quarantined based on the calculations, while curfews for the entire population were reduced in accordance with them. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal-state level as well as at the district and municipal levels, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can serve as a basis for future monitoring.
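The package's code is not shown in the abstract; as a generic illustration of the agent-based approach it describes, the toy model below steps susceptible/infected/recovered agents through random daily contacts (all parameters are arbitrary placeholders, not the calibrated Austrian model):

```python
import random

def simulate(n_agents=500, p_infect=0.05, contacts=8, infectious_days=5,
             n_seed=5, days=60, seed=42):
    """Toy agent-based S-I-R outbreak: each infected agent meets `contacts`
    random agents per day and infects susceptibles with probability p_infect;
    infections clear after `infectious_days`. Returns the daily infected count."""
    rng = random.Random(seed)
    state = ["I"] * n_seed + ["S"] * (n_agents - n_seed)
    days_left = [infectious_days] * n_seed + [0] * (n_agents - n_seed)
    history = []
    for _ in range(days):
        currently_infected = [i for i, s in enumerate(state) if s == "I"]
        for i in currently_infected:
            for _ in range(contacts):
                j = rng.randrange(n_agents)
                if state[j] == "S" and rng.random() < p_infect:
                    state[j], days_left[j] = "I", infectious_days
        for i in currently_infected:  # newly infected agents start next day
            days_left[i] -= 1
            if days_left[i] == 0:
                state[i] = "R"
        history.append(state.count("I"))
    return history

curve = simulate()
print(max(curve), curve[-1])  # epidemic peak and final infected count
```

With these placeholder numbers, each case produces on average contacts × p_infect × infectious_days = 2 secondary infections in a fully susceptible population, so the toy epidemic grows and then burns out as agents recover.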

Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19

Procedia PDF Downloads 174
274 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components

Authors: Francesca Gullo, Paola Palmero, Massimo Messori

Abstract:

Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the top-down approach, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolic, and carbonyl groups; these chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chlorine-containing chemicals (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive (hydrosulfite) or oxidative (hydrogen peroxide) agent is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is therefore essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls, monitoring both the chemical treatments and the process conditions, for instance, the treatment time, the concentration of the chemical solutions, the pH value, and the temperature. The elimination of light scattering in wood is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method. 
In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers having a matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure through a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of bio-based polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the construction of modern energy-efficient buildings.

Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites

Procedia PDF Downloads 54
273 Phage Therapy of Staphylococcal Pyoderma in Dogs

Authors: Jiri Nepereny, Vladimir Vrzal

Abstract:

Staphylococcus intermedius/pseudintermedius bacteria are commonly found on the skin of healthy dogs and can cause pruritic skin diseases under certain circumstances (trauma, allergy, immunodeficiency, ectoparasitosis, endocrinological diseases, glucocorticoid therapy, etc.). These can develop into complicated superficial or deep pyoderma, which represents a large group of problematic skin diseases in dogs; these are predominantly inflammations of a secondary nature, associated with the occurrence of coagulase-positive Staphylococcus spp. A major problem is increased itching, which greatly complicates the healing process. The aim of this work is to verify the efficacy of the developed preparation Bacteriophage SI (Staphylococcus intermedius). The tested preparation contains a lysate of bacterial cells of the S. intermedius host culture, including the culture medium and live virions of the specific phage; sodium merthiolate is added as a preservative at a safe concentration. The efficacy of the product was validated by monitoring the therapeutic effect after application in indicated cases from clinical practice. The indication for inclusion of a patient in the trial was an adequate history and clinical examination, accompanied by sample collection for bacteriological examination and isolation of the specific causative agent. Isolate identification was performed with the bioMérieux API identification system (API ID 32 STAPH) and rep-PCR typing. The suitability of therapy for a specific case was confirmed by in vitro testing of the ability of the bacteriophage to lyse the specific isolate, i.e., the formation of specific plaques on a culture of the isolate on the surface of a solid culture medium. So far, a total of 32 dogs of different sexes, ages and breeds with different symptoms of staphylococcal dermatitis have been included in the testing. 
Their previous therapy had consisted of more or less successful systemic or local application of broad-spectrum antibiotics. The presence of S. intermedius/pseudintermedius was demonstrated in 26 cases; in all of these, the isolates were identified as S. pseudintermedius. Contaminant bacterial microflora was always present in the examined samples. The test product was applied subcutaneously in gradually increasing doses over a period of one month. After improvement in health status, maintenance therapy followed, with application of the product once a week for three months. Adverse effects associated with administration of the product (swelling at the application site) occurred in only two cases. In all cases, therapy brought a significant reduction in clinical signs (healing of skin lesions and reduction of inflammation) and an improvement in the well-being of the treated animals. A major problem in the treatment of pyoderma is the frequent resistance of the causative agents to antibiotics, especially the increasing frequency of multidrug-resistant and methicillin-resistant S. pseudintermedius (MRSP) strains. A specific phage lysate used for the therapy of these diseases could solve this problem and, to some extent, replace or reduce the use of antibiotics, whose frequent and widespread application often leads to the emergence of resistance. The advantages of the therapeutic use of bacteriophages are their bactericidal effect, high specificity and safety. This work was supported by Project FV40213 from the Ministry of Industry and Trade, Czech Republic.

Keywords: bacteriophage, pyoderma, Staphylococcus spp., therapy

Procedia PDF Downloads 171
272 Restoration of a Forest Catchment in Himachal Pradesh, India: An Institutional Analysis

Authors: Sakshi Gupta, Kavita Sardana

Abstract:

Management of a forest catchment involves diverse dimensions, multiple stakeholders, and conflicting interests, primarily due to the wide variety of valuable ecosystem services it offers. Often, coordination among the different levels of formal institutions governing the catchment, local communities, and societal norms, taboos, customs and practices is amiss, leading to conflicting policy interventions that prove detrimental to such resources. The Ala Catchment is a protected forest located 9 km north-east of the town of Dalhousie, within district Chamba of Himachal Pradesh, India, and serves as one of the primary sources of public water supply for the downstream town of Dalhousie and nearby areas. Several policy measures have been adopted for the restoration of the forest catchment and for the improvement of the public water supply. The catchment restoration measures include: the installation of a fence along the perimeter of the catchment, plantation of trees in the empty patches of the forest, construction of check dams, contour trenches and contour bunds, issuance of grazing permits, and installation of check posts to keep track of trespassers. The measures adopted to address the acute shortage of public water supply in the Dalhousie region include: building and maintenance of large-capacity water storage tanks, laying of pipelines, expansion of the public water distribution infrastructure to include water sources other than the Ala Catchment forest, and the introduction of five new water supply schemes for drinking water and irrigation. However, despite these policy measures, the degradation of the Ala Catchment and the acute shortage of water supply continue to distress the region. This study conducts an institutional analysis to assess the impact of policy measures for the restoration of the Ala Catchment in the Chamba district of Himachal Pradesh, India. 
For this purpose, Ostrom's Institutional Analysis and Development (IAD) framework was used. Snowball sampling was used to select participants for private interviews and focused group discussions, and a semi-structured questionnaire was administered to a total of 184 respondents across stakeholders from both formal and informal institutions. The central hypothesis of the study is that the interplay of formal and informal institutions facilitates the implementation of policy measures for ameliorating the Ala Catchment, in turn improving the livelihoods of people who depend on this forest catchment for direct and indirect benefits. The findings of the study suggest that leakages in the successful implementation of policy measures occur at several nodes of decision-making, which adversely impact the catchment and the ecosystem services it provides. Some of the key reasons diagnosed by the analysis include: ad-hoc assignment of property rights; a rise in tourist inflow, increasing the pressure on water demand; illegal trespassing by local and nomadic pastoral communities for grazing and unlawful extraction of forest products; and rent-seeking by a few influential formal institutions. Consequently, the interplay of formal and informal institutions may be obscuring the effect of the policy measures on the restoration of the catchment.

Keywords: catchment forest restoration, institutional analysis and development framework, institutional interplay, protected forest, water supply management

Procedia PDF Downloads 97
271 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of this instability is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can greatly help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibered structure, precisely to the tropocollagen molecules and their cross-link density. We argue that a clear framework spanning the hierarchies of the soft tissue from the bottom (micro level) to the top (macro level) may elucidate the essential underpinnings of the state of ligament damage. To this end, a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed in this study to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function obtained through a coarse-graining procedure that links nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures, and the model was then calibrated using a statistical calibration procedure. 
The model was then applied to a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with coefficients of determination (R2) of 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predicted findings show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with damage predicted as a function of tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damage process or repair attempt than certain published studies.
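As a drastically simplified illustration of the fibril-level plasticity idea described above, elastic response capped by a yield stress that grows with tropocollagen cross-link density, one could sketch (the linear yield law, constants and units here are hypothetical placeholders, not the study's calibrated density function):

```python
def fibril_stress(strain, modulus=1.0, crosslink_density=5.5, k_yield=0.02):
    """Elastic-perfectly-plastic 1D fibril: Hookean stress up to a yield stress
    that increases with cross-link density (hypothetical linear form
    sigma_y = k_yield * crosslink_density * modulus)."""
    sigma_y = k_yield * crosslink_density * modulus
    return min(modulus * strain, sigma_y)

# A densely cross-linked, tendon-like fibril (density 12) sustains more stress
# before yielding than a ligament-like one (density 5.5) at the same strain:
print(fibril_stress(0.2, crosslink_density=5.5))
print(fibril_stress(0.2, crosslink_density=12.0))
```

The toy model reproduces only the qualitative trend reported above: the higher the cross-link density, the later the fibril yields.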

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 128
270 Synthesis of Smart Materials Based on Polyaniline Coated Fibers

Authors: Mihaela Beregoi, Horia Iovu, Cristina Busuioc, Alexandru Evanghelidis, Elena Matei, Monica Enculescu, Ionut Enculescu

Abstract:

The nanomaterials field is very attractive to researchers attempting to develop new devices with the same or improved properties as micro-sized ones while reducing reagent and power consumption. In this way, a wide range of nanomaterials has been fabricated and integrated into applications for electronics, optoelectronics, solar cells, tissue reconstruction and drug delivery; the most appealing are those dedicated to the medical domain. Different types of nano-sized materials, such as particles, fibers, films, etc., can be synthesized using physical, chemical or electrochemical methods. One of these techniques is electrospinning, which enables the production of fibers with nanometric dimensions by pumping a polymeric solution into a high electric field; due to electrostatic charging and solvent evaporation, the precursor mixture is converted into nonwoven meshes with different fiber densities and mechanical properties. Polyaniline, moreover, is a conducting polymer with interesting optical properties, suitable for displays and electrochromic windows. It is also an electroactive polymer that can contract and expand upon electrical stimuli, due to the oxidation/reduction reactions that take place in the polymer chains. These two main properties can be exploited to synthesize smart materials that change their dimensions while exhibiting good electrochromic properties. In this context, a poly(methyl methacrylate) solution was spun to obtain webs composed of fibers with diameters between 500 nm and 1 µm. The polymer meshes were then covered with a gold layer, deposited by DC sputtering, to make them conductive and thus suitable as the working electrode in an electrochemical cell. Such metalized fibers can be transformed into smart materials by coating them with a thin layer of conducting polymer. 
Thus, the webs were coated with a polyaniline film by the electrochemical route, starting from an aqueous solution of aniline and sulfuric acid. For the polymerization of aniline, a saturated calomel electrode was employed as the reference, a platinum plate as the counter electrode, and the gold-covered webs as the working electrode. Chronoamperometry was selected as the deposition method for polyaniline, varying the deposition time. Metalized meshes with different fiber densities were used, with transmission ranging between 70 and 80%. Morphological investigation showed that the polyaniline layer has a granular structure in all deposition experiments. Some preliminary optical tests were also performed using sulfuric acid as the electrolyte, which revealed that the polyaniline color changes from green to dark blue when a voltage is applied. In conclusion, new multilayered materials were obtained by a simple approach: merging the benefits of the electrospinning method with polyaniline chemistry. This synthesis route allows the fabrication of structures with reproducible characteristics, suitable for displays or tissue substitutes.

Keywords: electrospinning, fibers, smart materials, polyaniline

Procedia PDF Downloads 293
269 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries

Authors: Kelly S. Parkinson, Katherine Y. Teh-White

Abstract:

Futureye has a bespoke social licence to operate methodology which has successfully secured community approval and commercial return for fisheries that have faced regulatory and financial risk. This unique approach to fisheries management focuses on delivering improved social and environmental outcomes to help the fishing industry take steps towards achieving the United Nations SDGs. An SLO is the community’s implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps an organisation anticipate and measure community concerns around its operations, leading to more predictable and sensible policy outcomes that will not jeopardise commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage of its supply chain. Futureye applied its tested social licence to operate (SLO) methodology to help Austral Fisheries, which was being attacked by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirate fishing was making the overall catch unsustainable. Austral also wanted to become carbon neutral. An SLO provides a lens on risk that helps industries and companies act before regulatory and political risk escalates. The assessment methodology translates risk into a strategy in five steps. 1) Audience: understand the drivers of change and the transmission of those drivers across all audience segments. 2) Expectation: understand the level of social norming of changing expectations. 3) Outrage: understand the technical and perceptual aspects of risk and the opportunities to mitigate them. 4) Inter-relationships: understand the political, regulatory and reputation system so as to understand the levers of change. 
5) Strategy: understand whether the strategy will achieve a social licence by bringing internal and external stakeholders on the journey. Futureye’s SLO methodology helped Austral to understand risks and opportunities and to enhance its resilience. Futureye reviewed the issues, assessed outrage and materiality, and mapped SLO threats to the company. Austral was introduced to a new way of managing activism, climate action, and responsible consumption. As a result of Futureye’s work, Austral worked closely with Sea Shepherd, which was campaigning against pirates illegally fishing Patagonian Toothfish, as well as with international governments. In 2016, Austral launched the world’s first carbon-neutral fish, which earned Austral a thirteen percent premium at tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy and carbon neutral. Austral’s position as a leader in sustainable development has opened doors with retailers all over the world. Futureye’s SLO methodology can identify the societal, political and regulatory risks facing fisheries and position them to proactively address the issues and become industry leaders in sustainability.

Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development

Procedia PDF Downloads 120
268 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during high-volume manufacturing. As components become more integrated, devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded devices, each with several unique strengths, but no single tool or framework satisfies all the testing needs of embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform, even one belonging to the same domain with identical hardware architecture. This approach has drawbacks: non-reusability, since platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test-environment setup, scalability, and maintenance. 
A desirable strategy is one that maximizes reusability and continuous integration and leverages artifacts across the complete development cycle, through all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, was designed that can be deployed in embedded-system products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessor- and microcontroller-based platforms. It offers benefits such as (1) Time-to-market: accelerates board bring-up with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) Reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) An effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) Continuous integration: pre-integrated with Jenkins, enabling continuous testing and an automated software-update feature. Applying the embedded test framework accelerator throughout the design and development phases enables the development of well-tested systems before functional verification and improves time to market to a large extent.

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 145
267 Formulation and Optimization of Self Nanoemulsifying Drug Delivery System of Rutin for Enhancement of Oral Bioavailability Using QbD Approach

Authors: Shrestha Sharma, Jasjeet K. Sahni, Javed Ali, Sanjula Baboota

Abstract:

Introduction: Rutin is a naturally occurring, strong antioxidant molecule belonging to the bioflavonoid category. Owing to its free-radical-scavenging properties, it has been found beneficial in the treatment of various diseases, including inflammation, cancer, diabetes, allergy, cardiovascular disorders and various types of microbial infection. Despite these beneficial effects, it suffers from low aqueous solubility, which is responsible for its low oral bioavailability. The aim of our study was to optimize and characterize a self-nanoemulsifying drug delivery system (SNEDDS) of rutin using a Box-Behnken design (BBD) combined with a desirability function. Antioxidant, pharmacokinetic and pharmacodynamic studies were then performed on the optimized rutin SNEDDS formulation. Methodologies: Selection of oil, surfactant and co-surfactant was done on the basis of solubility/miscibility studies. Sefsol + Vitamin E, Solutol HS 15 and Transcutol P were selected as the oil phase, surfactant and co-surfactant, respectively. Optimization of the SNEDDS formulations was done by a three-factor, three-level (3³) BBD. The independent factors were Sefsol + Vitamin E, Solutol HS 15, and Transcutol P. The dependent variables were globule size, self-emulsification time (SEF), % transmittance and cumulative percentage drug released. Various response surface graphs and contour plots were constructed to understand the effects of the different factors, their levels and their combinations on the responses. The optimized rutin SNEDDS formulation was characterized for parameters such as globule size, zeta potential, viscosity, refractive index, % transmittance and in vitro drug release. Ex vivo permeation studies and pharmacokinetic studies were performed on the optimized formulation. Antioxidant activity was determined by DPPH and reducing power assays. Anti-inflammatory activity was determined using the carrageenan-induced rat paw oedema method. 
Permeation of rutin across the small intestine was assessed using confocal laser scanning microscopy (CLSM). Major findings: The optimized SNEDDS formulation, consisting of Sefsol + Vitamin E, Solutol HS 15 and Transcutol P at proportions of 25:35:17.5 (w/w), was prepared, and the predicted and experimental values were found to be in close agreement. The globule size and PDI of the optimized SNEDDS formulation were found to be 16.08 ± 0.02 nm and 0.124 ± 0.01, respectively. A significant (p < 0.05) increase in percentage drug release was achieved with the optimized SNEDDS formulation (98.8%) as compared to rutin suspension. Furthermore, the pharmacokinetic study showed a 2.3-fold increase in relative oral bioavailability compared with that of the suspension. Antioxidant assay results indicated better efficacy of the developed formulation than of the pure drug, comparable with ascorbic acid. Anti-inflammatory studies showed 72.93% inhibition for the SNEDDS formulation, significantly higher than the 46.56% for the drug suspension. The CLSM results indicated that absorption of the SNEDDS formulation was considerably higher than that from rutin suspension. Conclusion: Rutin SNEDDS were successfully prepared and can serve as an effective tool for enhancing the oral bioavailability and efficacy of rutin.
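The three-factor, three-level Box-Behnken design used above can be generated mechanically: each pair of factors is varied over a 2² factorial at coded levels ±1 while the remaining factor is held at its centre level, plus replicated centre points. A minimal sketch in Python, in coded units only; the factor labels and the choice of three centre runs are illustrative assumptions, not the authors' exact design table.

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Generate a k-factor Box-Behnken design in coded units (-1, 0, +1)."""
    runs = []
    for i, j in combinations(range(k), 2):  # every pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k               # remaining factor(s) at centre
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]  # centre-point replicates
    return runs

# Hypothetical factor order matching the abstract's three components.
factors = ["Sefsol + Vitamin E", "Solutol HS 15", "Transcutol P"]
design = box_behnken(len(factors))
# For k = 3: 12 edge points + 3 centre points = 15 runs.
```

Each coded row would then be mapped to actual component proportions before preparing and measuring the corresponding formulation.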

Keywords: rutin, oral bioavailability, pharmacokinetics, pharmacodynamics

Procedia PDF Downloads 500