Search results for: cellular network
48 GIS Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin - Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash floods are among the most catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River at Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan Region, flash floods occur due to intense rainfall over a short period of time, cloud bursts, glacial lake outbursts, and collapses of artificial check dams, all of which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m), and the mountain section of the river is 182 km long. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geology were also collected. The watershed was clipped from the wider area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using the ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Analyses with different hydraulic models to detect flash flood probability were carried out using the HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. Model output shows that a 313 sq. km area, including Melli, Jorethang, Chungthang, and Lachung, is most vulnerable to flash floods, and a 655 sq. km area, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and Thangu Valley, is moderately vulnerable. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the area actually flooded was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from probable flash flood events.
Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
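The core computation described here, turning an excess-rainfall hyetograph into a direct runoff hydrograph at the watershed outlet, can be sketched in a few lines. The following is a minimal illustration, not the authors' HEC-HMS/Flow-2D setup: the curve number, storm totals, and unit hydrograph ordinates are all hypothetical.

```python
import numpy as np

def scs_excess_rainfall(p_mm, cn):
    """SCS Curve Number method: excess (effective) rainfall depth in mm."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

# Hypothetical 3-day storm, daily totals in mm (intensity above 400 mm/day, as in the study)
rainfall = np.array([410.0, 450.0, 420.0])
excess = scs_excess_rainfall(rainfall, cn=75)   # CN = 75 is an assumed basin average

# Hypothetical unit hydrograph ordinates (m^3/s per mm of excess rainfall, daily steps)
uh = np.array([0.1, 0.35, 0.3, 0.15, 0.07, 0.03])

# Direct runoff hydrograph = discrete convolution of excess rainfall with the unit hydrograph
q = np.convolve(excess, uh)
print("Direct runoff hydrograph (m^3/s):", np.round(q, 1))
```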
Procedia PDF Downloads 317
47 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression
Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro
Abstract:
The “Voices in Dissent” initiative aims at collecting and making available in digital format testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering silenced voices behind the official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins as memory debates continue to provoke controversy among academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses in a variety of political and social contexts, the project will complement existing European and Latin-American studies and invite further interdisciplinary and trans-national research. This digital resource will be available to academic communities and the general audience and will be organized geographically and chronologically. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against determined historical backgrounds and of their impact on collective memory formation in contemporary societies. The digitization of these texts will make it possible to run metadata analyses and adopt comparatist approaches for a broad range of research endeavors. Most of the testimonies included in our archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, censorship. Research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By putting together testimonies from different geographical and historical contexts, our project will provide readers and scholars with an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides, or denies, the resources to make sense of and cope with trauma. For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly beneficial for testing theories on truth status and the formation of belief, as well as for studying the articulation of discourse. An important aspect of this project is also its pedagogical applications, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaborations with our Library System, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation. They will be accompanied by a critical apparatus that contextualizes them historically by providing relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities. They can also be used to design specific content courses or modules. In conclusion, the desired outcomes of the “Voices in Dissent” project are: 1. the collection and digitization of testimonies of political dissent; 2. the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; 3. the integration of the content of the archive into both research and teaching endeavors, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials into existing courses.
Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies
Procedia PDF Downloads 106
46 Posts by Influencers Promoting Water Saving: The Impact of Distance and the Perception of Effectiveness on Behavior
Authors: Sancho-Esper Franco, Rodríguez Sánchez Carla, Sánchez Carolina, Orús-Sanclemente Carlos
Abstract:
Water scarcity is a reality that affects many regions of the world and is aggravated by climate change and population growth. Saving water has become an urgent need to ensure the sustainability of the planet and the survival of many communities, and youth and social networks play a key role in promoting responsible practices and adopting habits that contribute to environmental preservation. This study analyzes the persuasive capacity of messages designed to promote pro-environmental behaviors among youth. Specifically, it studies how personal response efficacy (the perceived effectiveness of one's own response) and the perceived distance from the source of the message influence the water-saving behavior of the audience. To do so, two communication frameworks are combined. The first is Construal Level Theory, which is based on the concept of "psychological distance": people, objects, or events can be perceived as psychologically near or far, and this subjective distance, whether social, temporal, or spatial, determines attitudes, emotions, and actions. This research focuses on the spatial and social distance generated by cultural differences between influencers and their audience, in order to understand how cultural distance can influence the persuasiveness of a message. Research on the effects of psychological distance between influencers and followers in the pro-environmental field is very limited, and it is relevant because people could learn specific behaviors suggested by opinion leaders such as social media influencers. Second, different approaches to behavioral change suggest that the perceived efficacy of a behavior can explain individual pro-environmental actions. People will be more likely to adopt a new behavior if they perceive that they are capable of performing it (efficacy belief) and that their behavior will effectively contribute to solving the problem (personal response efficacy). It is also important to study the different actors (social and individual) that are perceived as responsible for addressing environmental problems. Specifically, we analyze to what extent the belief that an individual's water-saving actions are effective in solving the problem can influence water-saving behavior, since this individual effectiveness increases people's sense of obligation and responsibility toward the problem. In this regard, however, empirical evidence presents mixed results. Our study addresses the call for experimental studies manipulating different subtypes of response effectiveness to generate robust causal evidence. Based on all the above, this research analyzes whether cultural distance (local vs. international influencer) and the perceived effectiveness of behavior (personal/individual vs. collective response efficacy) affect the actual behavior and the intention of social network users to conserve water. A 2 (local vs. international influencer) x 2 (individual vs. collective response efficacy) experiment is designed and estimated. The results show that a message from a local influencer appealing to individual responsibility exerts greater influence on intention and on actual water-saving behavior, given the cultural closeness between influencer and follower; the appeal to individual responsibility increases the feeling of obligation to participate in pro-environmental actions. These results offer important implications for social marketing campaigns that seek to promote water conservation.
Keywords: social marketing, influencer, message framing, experiment, personal response efficacy, water saving
Procedia PDF Downloads 62
45 Comparative Assessment of the Thermal Tolerance of Spotted Stemborer, Chilo partellus Swinhoe (Lepidoptera: Crambidae) and Its Larval Parasitoid, Cotesia sesamiae Cameron (Hymenoptera: Braconidae)
Authors: Reyard Mutamiswa, Frank Chidawanyika, Casper Nyamukondiwa
Abstract:
Under stressful thermal environments, insects adjust their behaviour and physiology to maintain key life-history activities and improve survival. For interacting species, mutual or antagonistic, thermal stress may affect the participants in differing ways, which may then affect the outcome of the ecological relationship. In agroecosystems, this may be the fate of relationships between insect pests and their antagonistic parasitoids under acute and chronic thermal variability. Against this background, we investigated the thermal tolerance of different developmental stages of Chilo partellus Swinhoe (Lepidoptera: Crambidae) and its larval parasitoid Cotesia sesamiae Cameron (Hymenoptera: Braconidae) using both dynamic and static protocols. In laboratory experiments, we determined lethal temperatures (upper and lower) using direct plunge protocols in programmable water baths (Systronix Scientific, South Africa); the effects of ramping rate on critical thermal limits following standardized protocols, using insulated double-jacketed chambers ('organ pipes') connected to a programmable water bath (Lauda Eco Gold, Lauda DR. R. Wobser GmbH and Co. KG, Germany); supercooling points (SCPs) following dynamic protocols, using a Pico logger connected to a programmable water bath; and heat knock-down time (HKDT) and chill-coma recovery time (CCRT) following static protocols in climate chambers (HPP 260, Memmert GmbH + Co. KG, Germany) connected to a camera (HD Covert Network Camera, DS-2CD6412FWD-20, Hikvision Digital Technology Co., Ltd, China). When exposed for two hours to a static temperature, lower lethal temperatures ranged from -9 to 6, -14 to -2, and -1 to 4 °C, while upper lethal temperatures ranged from 37 to 48, 41 to 49, and 36 to 39 °C for C. partellus eggs, C. partellus larvae, and C. sesamiae adults, respectively. Faster heating rates improved critical thermal maxima (CTmax) in C. partellus larvae and in adult C. partellus and C. sesamiae. Lower cooling rates improved critical thermal minima (CTmin) in C. partellus and C. sesamiae adults while compromising CTmin in C. partellus larvae. The mean SCPs for C. partellus larvae, pupae, and adults were -11.82±1.78, -10.43±1.73, and -15.75±2.47 °C, respectively, with adults having the lowest SCPs. Heat knock-down time and chill-coma recovery time varied significantly between C. partellus larvae and adults: larvae had higher HKDT than adults, while the latter recovered significantly faster following chill-coma. Current results suggest developmental-stage differences in C. partellus thermal tolerance (with respect to lethal temperatures and critical thermal limits) and a compromised temperature tolerance of the parasitoid C. sesamiae relative to its host, suggesting potential asynchrony between host and parasitoid population phenology and, consequently, biocontrol efficacy under global change. These results have broad implications for biological pest management and insect-natural enemy interactions under rapidly changing thermal environments.
Keywords: chill-coma recovery time, climate change, heat knock-down time, lethal temperatures, supercooling point
Procedia PDF Downloads 238
44 Influence of Atmospheric Pollutants on Child Respiratory Disease in Cartagena de Indias, Colombia
Authors: Jose A. Alvarez Aldegunde, Adrian Fernandez Sanchez, Matthew D. Menden, Bernardo Vila Rodriguez
Abstract:
Up to five statistical pre-processing steps were carried out on the pollutant records of the stations present in Cartagena de Indias, Colombia, also taking into account the childhood asthma incidence surveys conducted in the city's hospitals by the Health Ministry of Colombia. These pre-processing steps consisted of different techniques: determination of the quality of data collection, determination of the quality of the registration network, identification and debugging of errors in data collection, completion of missing and purified data, and improvement of the time scale of the records. The characterization of data quality was conducted by means of density analysis of the pollutant registration stations using ArcGIS software and through mass balance techniques, making it possible to determine inconsistencies in the records by relating registration data between stations via linear regression. The results of this process highlighted the positive quality of the pollutant registration process. Consequently, the debugging of errors allowed us to identify certain data as statistically non-significant in the incidence and contamination series. These data, together with certain missing records in the series recorded by the measuring stations, were completed by statistical imputation equations. Following these prior processes, the basic series of incidence data for respiratory disease and of pollutant records allowed the characterization of the influence of pollutants on respiratory diseases such as childhood asthma. This characterization was carried out using statistical correlation methods, including visual correlation, simple linear regression, and spectral analysis with PAST software, which identifies maximum and minimum periodicity cycles via the Lomb periodogram. Among the results obtained, up to eleven maxima and minima considered contemporaneous between the incidence records and the particulate records were identified through visual comparison. The spectral analyses performed on the incidence and PM2.5 series returned a set of similar maximum periods in both registers, with peaks at a period of one year and another every 25 days (0.9 and 0.07 years). The bivariate analysis ranked the variable "Daily Vehicular Flow" ninth in importance out of a total of 55 variables. However, the statistical correlation did not obtain a favorable result, yielding a low R² coefficient. The series of analyses conducted demonstrates the importance of the influence of pollutants such as PM2.5 on the development of childhood asthma in Cartagena. The quantification of the influence of the variables determined that there is a 56% probability of dependence between PM2.5 and childhood respiratory asthma in Cartagena. On this basis, the study could be completed through the application of the BenMap software, yielding spatial results of interpolated values for the pollutant records that exceeded the established legal limits (represented by homogeneous units down to the neighborhood level) and results on the impact on the exacerbation of pediatric asthma. As a final result, an economic estimate (in Colombian pesos) of the monthly and individual savings derived from the percentage reduction of the influence of pollutants, in relation to visits to the hospital emergency room due to asthma exacerbation in pediatric patients, was provided.
Keywords: asthma incidence, BenMap, PM2.5, statistical analysis
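The spectral step described above, finding shared periodicities in the incidence and PM2.5 series, was done in PAST; an equivalent Lomb periodogram and correlation analysis can be sketched with SciPy. The series below are synthetic stand-ins (a yearly plus a 25-day cycle with noise), not the Cartagena data.

```python
import numpy as np
from scipy.signal import lombscargle
from scipy.stats import pearsonr

# Hypothetical daily series: PM2.5 (ug/m^3) and asthma incidence (cases/day)
rng = np.random.default_rng(0)
t = np.arange(730.0)   # two years of daily samples
pm25 = 30 + 10 * np.sin(2 * np.pi * t / 365) + 5 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 3, t.size)
asthma = 0.4 * pm25 + rng.normal(0, 4, t.size)

# Simple correlation, as in the bivariate analysis
r, p = pearsonr(pm25, asthma)
print(f"Pearson r = {r:.2f} (R^2 = {r**2:.2f}), p = {p:.1e}")

# Lomb periodogram: scan periods from 10 to 400 days (angular frequencies)
periods = np.linspace(10, 400, 2000)
power = lombscargle(t, pm25 - pm25.mean(), 2 * np.pi / periods, normalize=True)
print("Dominant period (days):", round(periods[np.argmax(power)], 1))
```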
Procedia PDF Downloads 116
43 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's, Parkinson's, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations directly in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
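The study names a 3D convolutional U-Net trained end to end from MRI volumes to iron maps. A minimal PyTorch sketch of such an architecture is given below; the channel counts, depth, volume size, and MSE regression loss are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3D conv layers with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    """Minimal 3D U-Net mapping a raw MRI volume to a voxelwise iron-concentration map."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, base), block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, 1)   # regression head: iron map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One training step on a synthetic (input volume, iron ground truth) pair
model = UNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64)
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```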
Procedia PDF Downloads 136
42 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy
Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro
Abstract:
Asphaltic bitumen has been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This comes from the reaction between the -NCO pendant groups of the oligomer and the most polar groups of the asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical, and barrier properties. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD), and Atomic Force Microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes. However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion of the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.
Keywords: atomic force microscopy, bitumen, composite, isocyanate, montmorillonite
Procedia PDF Downloads 261
41 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks
Authors: Andrew D. Henshaw, James M. Austin
Abstract:
Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, including non-military and non-traditional security challenges such as climate change, displacement driven by global warming, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at around $150 billion or more per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and forced marriages of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A. & Austin, J., 2021), we now focus on the human trafficking paradigm. Utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth. A set of structured analytical techniques that provides panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of open-source artificial intelligence (OS-AI) across multiple collection points. The extracted behavioral data were then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyzes data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements for success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks. Our preliminary data suggest this is in part because 'dark agents' in extant research have been difficult to detect and much harder to connect directly to the actors and organizations in human trafficking networks. Our research demonstrates that using new investigative techniques such as the OS-AI-aided JAM introduces a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, avoid global law enforcement scrutiny.
Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money
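The JAM OS-AI method itself is bespoke, but the structural idea of a 'lynchpin', an actor who quietly bridges otherwise separate clusters, can be illustrated with standard network analysis. The toy graph and role labels below are entirely hypothetical and not drawn from the study's data; betweenness centrality is one conventional proxy for such brokerage, not the authors' metric.

```python
import networkx as nx

# Toy network: two clusters bridged by a single hard-to-see broker
G = nx.Graph()
G.add_edges_from([
    ("recruiter_A", "transport_A"), ("transport_A", "handler_A"), ("handler_A", "recruiter_A"),
    ("recruiter_B", "transport_B"), ("transport_B", "handler_B"), ("handler_B", "recruiter_B"),
    ("handler_A", "broker_X"), ("broker_X", "recruiter_B"),   # the bridge between clusters
])

# High betweenness = an actor sitting on many shortest paths between other actors,
# a common structural signature of lynchpins who broker between groups
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {score:.2f}")
```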
Procedia PDF Downloads 90
40 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method (a questionnaire survey). Data were systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, and Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors. In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
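The statistics named here (Pearson correlation per tier, one-way ANOVA across tiers, and a network density figure) are routine to reproduce; the sketch below uses SciPy on made-up Likert-scale survey data, so every number, sample size, and tie count is hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(1)

# Hypothetical 7-point Likert responses: perceived distortion vs. one quality dimension
tier2_distortion = rng.integers(1, 8, 30).astype(float)
tier2_accuracy = np.clip(tier2_distortion + rng.normal(0, 1, 30), 1, 7)  # built to correlate
r, p = pearsonr(tier2_distortion, tier2_accuracy)
print(f"Tier 2: distortion vs. accuracy, r = {r:.2f}, p = {p:.3f}")

# One-way ANOVA: does a mean quality rating differ across three supplier tiers?
f_stat, p_anova = f_oneway(rng.normal(5, 1, 30), rng.normal(4.5, 1, 30), rng.normal(4, 1, 30))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# 'Information distortion-by-information quality' two-mode density:
# observed distortion-to-dimension ties over all possible ties
dims = ["accuracy", "timeliness", "conciseness", "completeness", "access security"]
observed_ties = 4   # hypothetical: distortion linked to 4 of the 5 dimensions
print("Network density:", observed_ties / len(dims))
```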
Procedia PDF Downloads 64
39 Leveraging Information for Building Supply Chain Competitiveness
Authors: Deepika Joshi
Abstract:
Operations in the automotive industry rely greatly on information shared between Supply Chain (SC) partners, which leads to efficient and effective management of SC activity. The automotive sector in India is growing at 14.2 percent per annum and has huge economic importance. We find that no study has been carried out on the role of information sharing in the SC management of Indian automotive manufacturers. Considering this research gap, the present study was planned to establish the significance of information sharing in Indian auto-component supply chain activity. An empirical study was conducted among large-scale auto component manufacturers in India. Twenty-four Supply Chain Performance Indicators (SCPIs) were collected from the existing literature. These elements belong to eight diverse but internally related areas of SC management: demand management, cost, technology, delivery, quality, flexibility, buyer-supplier relationship, and operational factors. A pair-wise comparison and an open-ended questionnaire were designed using these twenty-four SCPIs. The questionnaire was then administered among managerial-level employees of twenty-five auto-component manufacturing firms. The Analytic Network Process (ANP) technique was used to analyze the responses to the pair-wise questionnaire. Finally, twenty-five priority indexes were developed, one for each respondent, and averaged to generate an industry-specific priority index. The open-ended questions captured strategies related to information sharing between buyers and suppliers and their influence on supply chain performance. Results show that the impact of information sharing on certain performance indicators is relatively greater than on their corresponding variables. For example, flexibility, delivery, demand, and cost-related elements have a massive impact on information sharing. Technology is relatively less influenced by information sharing, but it immensely influences the quality of the information shared. Responses obtained from managers reveal that timely and accurate information sharing lowers costs and increases flexibility and the on-time delivery of auto parts, thereby enhancing the competitiveness of the Indian automotive industry. Any flaw in the dissemination of information can disturb the cycle time of both parties and thus increase the opportunity cost. Due to suppliers' involvement in decisions related to the design of auto parts, quality conformance is found to improve, leading to a reduction in the rejection rate. Similarly, a mutual commitment to share the right information at the right time between all levels of the SC enhances trust. SC partners share information to perform comprehensive quality planning and so ingrain total quality management. This study contributes to the operations management literature, which faces a scarcity of empirical examination of this subject. It views information sharing as a building block which firms can promote and evolve to leverage the operational capability of all SC members. It will provide insights for Indian managers and researchers, as every market is unique and suppliers and buyers are driven by local laws, industry status, and future vision. While the major emphasis in this paper is on SC operations between domestic partners, placing more focus on international SCs could yield distinctive results.
Keywords: Indian auto component industry, information sharing, operations management, supply chain performance indicators
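The core of the pair-wise step in ANP (shared with AHP) is extracting a priority vector from each respondent's comparison matrix. Below is a minimal sketch; the four indicators, the Saaty-scale judgments, and the matrix values are hypothetical, and a full ANP would additionally build a supermatrix over interdependent clusters.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) over four of the
# twenty-four SCPIs: delivery, cost, flexibility, quality
A = np.array([
    [1.0, 3.0, 2.0, 5.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/5, 1/2, 1/3, 1.0],
])

# Priority vector = principal eigenvector of A, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.real(eigvals))
w = np.real(eigvecs[:, k])
w = w / w.sum()
for name, weight in zip(["delivery", "cost", "flexibility", "quality"], w):
    print(f"{name}: {weight:.3f}")

# Consistency check (random index RI = 0.90 for n = 4); CR < 0.1 is conventionally acceptable
lam_max = np.real(eigvals[k])
ci = (lam_max - 4) / (4 - 1)
print("Consistency ratio:", round(ci / 0.90, 3))
```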
Procedia PDF Downloads 550
38 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in producing liquid detergent products (formulation, mixing, filling, and packaging) can introduce defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study addresses these limitations by optimizing machine learning algorithms for defect characterization in the liquid detergent manufacturing process. Performance testing of various machine learning models (Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks) was carried out on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. Our study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including information from real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, with a reduction in false positives of about 30%. The optimized SVM model detected formulation defects, such as viscosity and color variation, with 94% accuracy. These performance metrics represent a giant leap in defect detection accuracy compared to the roughly 80% level achieved until now by rule-based systems. Moreover, the optimized models hasten defect characterization, cutting detection time from an average of 3 minutes with manual inspection to below 15 seconds with real-time data processing. This reduction in time is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures, for a 20% improvement in overall production efficiency. Optimizing machine learning algorithms for defect characterization therefore gives liquid detergent companies scalability, efficiency, and improved operational performance at higher levels of product quality. In general, this method could be applied in several industries within the fast-moving consumer goods sector, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
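Hyperparameter tuning of an SVM defect classifier, one of the optimization techniques named above, can be sketched with scikit-learn. The four process features and the synthetic labels below are hypothetical stand-ins for real sensor data, not the study's dataset.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical process features: viscosity, pH, fill volume, colorimeter delta-E
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 2000) > 1.2).astype(int)  # 1 = defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Grid-searched hyperparameter tuning over C and gamma with 5-fold cross-validation
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1]}, cv=5)
grid.fit(X_tr, y_tr)

print("Best params:", grid.best_params_)
print(classification_report(y_te, grid.predict(X_te), target_names=["ok", "defect"]))
```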
Procedia PDF Downloads 18
37 The Impact of β Nucleating Agents and Carbon-Based Nanomaterials on Water Vapor Permeability of Polypropylene Composite Films
Authors: Glykeria A. Visvini, George Ν. Mathioudakis, Amaia Soto Beobide, George A. Voyiatzis
Abstract:
Polymer nanocomposites are materials in which a polymer matrix is reinforced with nanoscale inclusions, such as nanoparticles, nanoplates, or nanofibers. These nanoscale inclusions can significantly enhance the mechanical, thermal, electrical, and other properties of the polymer matrix, making them attractive for a wide range of industrial applications. These properties can be tailored by adjusting the type and concentration of the nanoinclusions, which provides a high degree of flexibility in design and development. An important property that polymeric membranes can exhibit is water vapor permeability (WVP). This can be achieved by various methods, including the incorporation of micro/nano-fillers into the polymer matrix. In this way, a micro/nano-pore network can be formed, allowing water vapor to permeate through the membrane. At the same time, the membrane can be stretched uni- or bi-axially, creating aligned or cross-linked micropores in the composite, respectively, which can also increase the WVP. Nowadays, in industry, stretched films reinforced with CaCO3 develop micro-porosity sufficient to give them breathability characteristics. Carbon-based nanomaterials, such as graphene oxide (GO), are tentatively expected to effectively improve the WVP of the corresponding composite polymer films. The various functional oxidizing groups present in the GO structure enhance its ability to attract and channel water molecules, exploiting the uniquely large surface area of graphene, which allows rapid transport of water molecules. Polypropylene (PP) is widely used in various industrial applications due to its desirable properties, including good chemical resistance, excellent thermal stability, low cost, and easy processability. The specific properties of PP are highly influenced by its crystalline behavior, which is determined by its processing conditions. The development of the β-crystalline phase in PP, in combination with stretching, is anticipated to improve the microporosity of the polymer matrix, thereby enhancing its WVP. The aim of the present study is to create breathable PP composite membranes using carbon-based nanomaterials, such as graphene oxide (GO), reduced graphene oxide (rGO), and graphene nanoplatelets (GNPs). Unlike traditional methods that rely on the drawing process to enhance the WVP of PP, this study intends to develop a low-cost approach using melt mixing with β-nucleating agents and carbon fillers to create highly breathable PP composite membranes. The study aims to investigate how the concentration of these additives affects the water vapor transport properties of the resulting PP films/membranes. The presence of β-nucleating agents and carbon fillers is expected to enhance β-phase growth in PP, while an alternation between the β- and α-phases is expected to lead to improved microporosity and WVP. Our ambition is to develop highly breathable PP composite films with superior performance and at a lower cost compared to the benchmark. Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call «Special Actions "AQUACULTURE"-"INDUSTRIAL MATERIALS"-"OPEN INNOVATION IN CULTURE"» (project code: Τ6YBP-00337).
Keywords: carbon based nanomaterials, nanocomposites, nucleating agent, polypropylene, water vapor permeability
Procedia PDF Downloads 86
36 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods rely heavily on subjective symptom assessment and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, the application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, which is particularly important given the current circumstances, where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification: it analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision-tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate action and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
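The two-branch design, a tree ensemble over symptoms plus a ResNet50-based image model, can be sketched as follows. The symptom data, label rule, class count, and probability-averaging fusion are illustrative assumptions; the abstract does not specify how the two outputs are combined.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# Branch 1: random forest over binary symptom indicators (cough, fever, dyspnea, ...)
rng = np.random.default_rng(7)
symptoms = rng.integers(0, 2, size=(500, 12))
labels = symptoms[:, 0] & symptoms[:, 2]            # hypothetical labeling rule
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(symptoms, labels)

# Branch 2: ResNet50 backbone (ImageNet weights download on first use), new 2-class head
N_CLASSES = 2
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
cnn.fc = nn.Linear(cnn.fc.in_features, N_CLASSES)   # replace the ImageNet head
cnn.eval()

# Fusion (assumed rule): average the two probability estimates
xray = torch.randn(1, 3, 224, 224)                  # stand-in for a preprocessed X-ray
with torch.no_grad():
    p_img = torch.softmax(cnn(xray), dim=1).numpy()[0]
p_sym = rf.predict_proba(symptoms[:1])[0]
print("Fused P(disease):", (p_img[1] + p_sym[1]) / 2)
```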
Procedia PDF Downloads 73
35 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India
Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony
Abstract:
The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of the vital parameters of mothers and neonates offers a potential opportunity to improve access to care, as well as its quality, in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a 'real-time' communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. The requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa for the evaluation of safety and efficacy, in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies: 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, versus 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was within ±0.1 °C of the reference standard over the temperature range of 25-40 °C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperature measured in the adults by the sensor was about 2 °C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13 °C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings; their mean skin temperature measured by the sensor was 0.6 °C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band, and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperature is technically feasible and that this innovation appears promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns
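The paired comparisons reported above (e.g., t = 13.8, p < 0.001 for sensor vs. axillary readings in adults) follow from a standard paired t-test; a sketch on synthetic data chosen to mimic the reported means is given below. The spreads and the random seed are assumptions, not the trial data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)

# Synthetic paired readings mimicking the adult arm-band comparison:
# skin sensor around 34.1 C vs. axillary digital around 36.1 C, n = 403 pairs
digital = rng.normal(36.1, 0.4, 403)
sensor = digital - 2.0 + rng.normal(0, 0.3, 403)

t, p = ttest_rel(sensor, digital)
print(f"mean difference = {np.mean(sensor - digital):.2f} C, t = {t:.1f}, p = {p:.2e}")
```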
Procedia PDF Downloads 208
34 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks
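The variance advantage of low-discrepancy sequences and of importance sampling can be demonstrated on a toy spectral integral: a sharply peaked 'absorption line' integrand over a normalized wavelength variable. This is a minimal sketch, not the authors' PMC code; the integrand, the Sobol sampler choice, and the Gaussian proposal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand over u in [0, 1): sharply peaked, like a strong absorption line,
# so naive uniform sampling has high variance
f = lambda u: np.exp(-((u - 0.3) / 0.02) ** 2)
exact = 0.02 * np.sqrt(np.pi)   # analytic value; boundary truncation is negligible

n = 4096   # power of two, as Sobol sequences prefer
u_mc = np.random.default_rng(0).random(n)                      # pseudo-random (standard PMC)
u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random(n)[:, 0]  # low-discrepancy (QMC)

print("MC estimate: ", f(u_mc).mean(), " error:", abs(f(u_mc).mean() - exact))
print("QMC estimate:", f(u_qmc).mean(), " error:", abs(f(u_qmc).mean() - exact))

# Importance sampling: draw u from a Gaussian proposal centered on the line and
# weight by 1/p(u); effectively unbiased here since proposal mass outside [0, 1) is tiny
prop = np.random.default_rng(1).normal(0.3, 0.05, n)
pdf = np.exp(-((prop - 0.3) / 0.05) ** 2 / 2) / (0.05 * np.sqrt(2 * np.pi))
mask = (prop >= 0) & (prop < 1)
is_est = np.mean(np.where(mask, f(prop) / pdf, 0.0))
print("IS estimate: ", is_est, " error:", abs(is_est - exact))
```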
Procedia PDF Downloads 223
33 Resilience in the Face of Environmental Extremes through Networking and Resource Mobilization
Authors: Abdullah Al Mohiuddin
Abstract:
Bangladesh is one of the poorest countries in the world and ranks low on almost all measures of economic development, leaving the population extremely vulnerable to natural disasters and climate events. 20% of GDP comes from agriculture, but more than 60% of the population relies on agriculture as their main source of income, making the entire economy vulnerable to climate change and natural disasters. High population density exacerbates exposure to and the effects of climate events and increases levels of vulnerability, as does the country's poor institutional development. The sectors most vulnerable to climate change impacts in Bangladesh are agriculture, coastal zones, water resources, forestry, fishery, health, biomass, and energy. High temperatures, heavy rainfall, high humidity, and fairly marked seasonal variations characterize the climate of Bangladesh: a mild winter, a hot, humid summer, and a warm, humid rainy monsoon. Much of the country is flooded during the summer monsoon. The Department of Environment (DOE) under the Ministry of Environment and Forestry (MoEF) is the focal point for the United Nations Framework Convention on Climate Change (UNFCCC) and coordinates climate-related activities in the country. Recently, a Climate Change Cell (CCC) has been established to address several issues, including adaptation to climate change. The climate change focus started with the National Environmental Management Action Plan (NEMAP), which was prepared in 1995 to initiate the process of addressing environmental and climate change issues as long-term environmental problems for Bangladesh. Bangladesh was one of the first countries to finalise a National Adaptation Plan of Action (NAPA) addressing climate change issues. The NAPA was completed in 2005 and is the first official initiative for mainstreaming adaptation into national policies and actions to cope with climate change and vulnerability. The NAPA suggests a number of adaptation strategies, for example: - Providing drinking water to coastal communities to fight the enhanced salinity caused by sea level rise, - Integrating climate change in the planning and design of infrastructure, - Including climate change issues in education, - Supporting the adaptation of agricultural systems to new weather extremes, - Mainstreaming CCA into policies and programmes in different sectors, e.g. disaster management, water and health, - Dissemination of CCA information and awareness raising on enhanced climate disasters, especially in vulnerable communities. Bangladesh has geared up its environmental conservation efforts to save the world's poorest countries from the adverse effects of global warming. It is now turning towards green economy policies to save its degrading ecosystem. Bangladesh is a developing country that constantly contends with natural disasters. At the same time, we also strive to establish an ecological environment by promoting the green economy and green energy through youth networking. ANTAR is coordinating a large youth network in the southern part of Bangladesh in which 30 youth groups are involved. The green economy can be explained as economic development based on sustainability, generating growth and improvement in people's lives while significantly reducing environmental risks and ecological scarcities. The green economy in Bangladesh promotes three bottom lines: sustaining economic, environmental, and social well-being.
Keywords: resilience, networking, mobilizing, resource
Procedia PDF Downloads 31032 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization
Authors: Sarah M. Schoellhammer, Stephen Gibb
Abstract:
Bureaucracy, in particular its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today’s knowledge economy. Centralization and formalization have consistently been found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on systematically planning, controlling and monitoring the development of new or improved solutions for customers, even innovation management as a discipline is to a significant extent based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity and initiative, however, can be more hindered than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage-gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the greatest influence is allocated by the collective to the persons perceived as most competent on a given issue. Theoretical arguments that the non-hierarchical concept supports innovation better than bureaucracy have been supported by empirical research. These prior studies either focus on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that is, innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies in order to identify alternative solutions practiced under conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated. In a pragmatic mixed-methods approach, media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences to approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. With employees still hesitant to take responsibility and make decisions, hierarchical influence remains prominent. Described as open-minded and collaborative, culture and leadership were found largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, responsible practitioners are recommended not to limit changes to reducing the central elements of the bureaucratic organization, formalization and centralization.
The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility on the part of employees as well as common innovation-supportive norms and values. These allow diverse competencies, opinions, and activities to be integrated and, thus, innovation efforts to be guided.Keywords: bureaucracy, heterarchy, innovation management, values
Procedia PDF Downloads 18731 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings
Authors: Mukhtar Maigari
Abstract:
The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to manage Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), which encompasses physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, and CO2 levels, conducted using equipment including data loggers to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then delves into the development of a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control and automation, with real-time data handling capabilities. The framework leads to the development and testing of a BIM-based prototype tool. This prototype leverages software such as Autodesk Revit with its visual programming tool Dynamo, together with an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications as well; it can serve as a tool for other sustainability assessments, like energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating in practice how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions. 
It portrays the potential of integrating advanced technologies like BIM in the pursuit of improved environmental conditions in educational institutions.Keywords: BIM, POE, IEQ, HE-buildings
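To make the data path above concrete, here is a minimal Python sketch of the sensing side, assuming an Arduino node streams comma-separated temperature, humidity and CO2 readings over a serial port; the port name, message format and comfort thresholds are hypothetical, and the actual prototype routes such readings into Revit through Dynamo rather than printing alerts.

```python
# Illustrative sketch only: the paper's prototype uses Revit/Dynamo with an
# Arduino sensor network; port name, CSV format, and thresholds are assumptions.
import serial  # pyserial

# Assumed comfort bands for the monitored IEQ parameters (hypothetical values).
THRESHOLDS = {"temp_c": (20.0, 26.0), "humidity_pct": (30.0, 60.0), "co2_ppm": (0.0, 1000.0)}

def in_band(name, value):
    lo, hi = THRESHOLDS[name]
    return lo <= value <= hi

def monitor(port="/dev/ttyUSB0", baud=9600):
    # Each serial line is assumed to be "temp,humidity,co2", e.g. "22.5,45,650".
    with serial.Serial(port, baud, timeout=5) as link:
        while True:
            fields = link.readline().decode(errors="ignore").strip().split(",")
            if len(fields) != 3:
                continue  # skip malformed or empty lines
            reading = dict(zip(THRESHOLDS, map(float, fields)))
            for name, value in reading.items():
                if not in_band(name, value):
                    print(f"IEQ alert: {name}={value} outside comfort band")
```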
Procedia PDF Downloads 4830 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired
Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo
Abstract:
Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on the use of traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIP’s ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules: the sensing and the actuation module, which are fitted to a chest mount and belt and communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized into three different levels of proximity, ranging from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data is used to estimate the distance between the stairs and the user. The information gathered from this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), changing both in amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted in order to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS); this is considered an adequate processing speed, taking into account that the walking speed of VIP is 1.439 m/s. In order to test the HMI, we conducted a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and locate them within the tactile display. Both were calculated as the mean across all subjects. Results show that the focus group achieved message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling users’ awareness of their surrounding space.Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems
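The proximity discretization and tactone scheme described above can be illustrated with a short Python sketch; the 2 m band boundaries and the amplitude/frequency pairs are assumptions for illustration, not the authors' calibrated values.

```python
# Hypothetical sketch of the distance-to-tactone mapping described above;
# the three proximity bands and motor parameters are illustrative only.
def proximity_level(distance_m):
    """Discretize the 0-6 m range into three proximity levels (2 m bands assumed)."""
    if distance_m < 2.0:
        return 2  # near: strongest feedback
    if distance_m < 4.0:
        return 1  # mid
    return 0      # far

def tactone(level):
    """Return an assumed (amplitude 0-1, frequency Hz) pair for a vibration motor."""
    params = {2: (1.0, 250), 1: (0.6, 175), 0: (0.3, 100)}
    return params[level]

# Example: six ultrasonic sensor readings feeding the 3x2 tactile display.
sensor_distances = [0.8, 3.5, 5.9, 2.2, 4.4, 1.1]
for motor_index, d in enumerate(sensor_distances):
    amp, freq = tactone(proximity_level(d))
    print(f"motor {motor_index}: amplitude={amp}, frequency={freq} Hz")
```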
Procedia PDF Downloads 8029 Renewable Energy Micro-Grid Control Using Microcontroller in LabVIEW
Authors: Meena Agrawal, Chaitanya P. Agrawal
Abstract:
Power systems are transforming and becoming smarter with innovations in technologies to address simultaneously sustainable energy needs, rising environmental concerns, economic benefits and quality requirements. The advantages provided by the interconnection of renewable energy resources are becoming more viable and dependable with smart controlling technologies. The limitations of most renewable resources, namely their diversity and intermittency, which cause problems in power quality, grid stability, reliability and security, are being addressed by these efforts. The need for optimal energy management by intelligent micro-grids at the distribution end of the power system has been recognized as the way to accommodate sustainable renewable Distributed Energy Resources on a large scale across the power grid. All over the world, smart grids are now emerging as the foremost concern of infrastructure upgrade programs. The hardware setup includes an NI cRIO 9022 Compact Reconfigurable Input/Output controller board connected to the PC on a LAN router, with three hardware modules. The Real-Time Embedded Controller is a reconfigurable controller device consisting of an embedded real-time processor for communication and processing, a reconfigurable chassis housing the user-programmable FPGA, eight hot-swappable I/O modules, and graphical LabVIEW system design software. It has been employed for signal analysis, control, acquisition and logging of the renewable sources with LabVIEW Real-Time applications. The cRIO chassis controls the timing for the modules and handles communication with the PC over USB, Ethernet, or 802.11 Wi-Fi buses. It combines modular I/O, real-time processing, and NI LabVIEW programmability. In the presented setup, five channels of the Analog Input Module NI 9205 have been used for input analog voltage signals from renewable energy sources, and four channels of the NI 9227 have been used for input analog current signals of the renewable sources. For switching actions based on the programming logic developed in software, a 4-channel module of electrically isolated single-pole single-throw electromechanical relays, each with an LED indicating the state of its channel, has been used for isolating the renewable sources on fault occurrence, as decided by the logic in the program. The ENET 9163 Ethernet Carrier, an Ethernet-based data acquisition interface connected to the LAN router for data acquisition from a remote source over Ethernet, also has the NI 9229 module installed. The LabVIEW platform has been employed for efficient data acquisition, monitoring and control. The control logic used in the program to operate the fault-relay switching hardware is portrayed as a flowchart. A communication system has been successfully developed amongst the sources and loads connected on different computers using the Hypertext Transfer Protocol (HTTP) or the Ethernet local area network TCP/IP protocol. There are two main I/O interfacing clients controlling the switching of the renewable energy sources over the internet or an intranet. The paper presents experimental results of the described setup for intelligent control of the micro-grid for renewable energy sources, with data acquisition and control hardware based on a microcontroller and a visual program developed in LabVIEW.Keywords: data acquisition and control, LabVIEW, microcontroller cRIO, Smart Micro-Grid
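The fault-relay logic sketched as a flowchart in the paper runs in LabVIEW on the cRIO; the following Python sketch only mirrors the idea of tripping an isolation relay when a source's measured voltage or current leaves a safe band, with the limits and source names invented for illustration.

```python
# The actual control logic runs in LabVIEW on the cRIO; this sketch only mirrors
# the flowchart idea: isolate a source whose measured voltage or current leaves
# an assumed safe band. All limits below are illustrative.
SAFE_VOLTAGE = (200.0, 250.0)   # volts, assumed band
MAX_CURRENT = 15.0              # amps, assumed limit

def relay_command(voltage, current):
    """Return True to keep the source connected, False to isolate it."""
    v_ok = SAFE_VOLTAGE[0] <= voltage <= SAFE_VOLTAGE[1]
    i_ok = 0.0 <= current <= MAX_CURRENT
    return v_ok and i_ok

# One decision cycle over four hypothetical sources (channels of NI 9205/9227).
measurements = {"solar": (231.0, 6.2), "wind": (188.0, 4.1),
                "micro_hydro": (240.5, 16.3), "biogas": (224.0, 8.0)}
for source, (v, i) in measurements.items():
    state = "CLOSE relay" if relay_command(v, i) else "OPEN relay (fault)"
    print(f"{source}: {state}")
```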
Procedia PDF Downloads 33328 Sustainable Urban Regeneration: The New Vocabulary and the Timeless Grammar of the Urban Tissue
Authors: Ruth Shapira
Abstract:
Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unsolved conflicts regarding values, capital, and the wellbeing of the built and un-built urban space. There is an out-of-control change in the scale of the urban form and in the rhythm of urban life, which has seen no significant progress in the last 2-3 decades despite the ever-growing urban population. It is the objective of this paper to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat-Ono, 36,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some cures that we have developed to bridge over, and humbly suggest a practice that may bring about a sustainable new urban environment based on timeless values of the past, an approach that can be generic for similar cases. Basic Methodologies: The object, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: De-composition, Re-composition, the Centering process and, finally, Controlled Structural Disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the OBJECT, leading to a conclusion based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for the contemporary urban society to cluster by, and thus a new urban vocabulary based on the old structure of times passed. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town is still deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level with recent master plans and urban regeneration policies, mostly encouraging individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by pressure from developers, a dramatic change in building scale and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono’s future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that would offer a regulatory basis for the accelerated and sporadic development, providing for the public good and preserving the characteristics of the place, consisting of a toolbox of design guidelines with the ability to reorganize space along the time axis in a sustainable way. In conclusion: The system of rules that we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. 
It takes time and perseverance, but it seems to be the way to provide a healthy and sustainable framework for the accelerated urbanization of our chaotic present.Keywords: sustainable urban design, intensification, emergent urban patterns, sustainable housing, compact urban neighborhoods, sustainable regeneration, restoration, complexity, uncertainty, need for change, implications of legislation on local planning
Procedia PDF Downloads 38827 The Routes of Human Suffering: How Point-Source and Destination-Source Mapping Can Help Victim Services Providers and Law Enforcement Agencies Effectively Combat Human Trafficking
Authors: Benjamin Thomas Greer, Grace Cotulla, Mandy Johnson
Abstract:
Human trafficking is one of the fastest growing international crimes and human rights violations in the world. The United States Department of State (State Department) estimates that some 800,000 to 900,000 people are trafficked across sovereign borders annually, with approximately 14,000 to 17,500 of these people coming into the United States. Today’s slavery is conducted by unscrupulous individuals who are often connected to organized criminal enterprises and transnational gangs, extracting huge monetary sums. According to the International Labour Organization (ILO), human traffickers collect approximately $32 billion worldwide annually. Surpassed only by narcotics dealing, trafficking of humans is tied with illegal arms sales as the second largest criminal industry in the world and is the fastest growing field in the 21st century. Perpetrators of this heinous crime abound. They are not limited to single or “sole practitioners” of human trafficking but often include Transnational Criminal Organizations (TCOs), domestic street gangs, labor contractors, and otherwise seemingly ordinary citizens. Monetary gain is being elevated over territorial disputes, and street gangs are increasingly operating in collaboration with TCOs to further disguise their criminal activity, utilizing their vast networks in an attempt to avoid detection. Traffickers rely on a network of clandestine routes to sell their commodities with impunity. As law enforcement agencies seek to retard the expansion of transnational criminal organizations’ entry into human trafficking, it is imperative that they develop reliable mapping of known exploitative routes. In a recent report given to the Mexican Congress, the Procuraduría General de la República (PGR) disclosed that, from 2008 to 2010, it had identified at least 47 unique criminal networking routes used to traffic victims, and that Mexico’s domestic victims are estimated at 800,000 adults and 20,000 children annually. Designing a reliable mapping system is a crucial step toward an effective law enforcement response and a successful victim support system. Creating this mapping analytic is exceedingly difficult. Traffickers are constantly changing the way they traffic and exploit their victims. They swiftly adapt to local environmental factors and react remarkably well to market demands, exploiting limitations in the prevailing laws. This article will highlight how human trafficking has become one of the fastest growing and most high-profile human rights violations in the world today; compile current efforts to map and illustrate trafficking routes; and demonstrate how proprietary analytical point-source and destination-source mapping can help local law enforcement, governmental agencies and victim services providers effectively respond to the type and nature of trafficking in their specific geographical locale. Trafficking transcends state and international borders. It demands effective and consistent cooperation between local, state, and federal authorities. Each region of the world has different impact factors which create distinct challenges for law enforcement and victim services. Our mapping system lays the groundwork for a targeted anti-trafficking response.Keywords: human trafficking, mapping, routes, law enforcement intelligence
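As a toy illustration of what point-source and destination-source aggregation means in practice, the following Python sketch tallies fabricated origin-destination incident records; the authors' proprietary analytic is far richer, so this is only a sketch of the underlying counting idea.

```python
# Minimal illustration of point-source/destination-source aggregation; the
# incident records below are fabricated placeholders, not real data.
from collections import Counter

incidents = [("CityA", "CityB"), ("CityA", "CityB"), ("CityC", "CityB"),
             ("CityA", "CityD")]

route_counts = Counter(incidents)                      # how often each route appears
origin_counts = Counter(o for o, _ in incidents)       # point-source view
destination_counts = Counter(d for _, d in incidents)  # destination-source view

print("Busiest route:", route_counts.most_common(1))
print("Top origin:", origin_counts.most_common(1))
print("Top destination:", destination_counts.most_common(1))
```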
Procedia PDF Downloads 38126 Case Study about Women Driving in Saudi Arabia Announced in 2018: Netnographic and Data Mining Study
Authors: Majdah Alnefaie
Abstract:
The ‘netnographic study’ and data mining have been used to monitor public interaction on Social Media Sites (SMSs) in order to understand what motivational factors influenced Saudi intentions regarding allowing women to drive in Saudi Arabia in 2018. The netnographic study monitored the public’s textual and visual communications on Twitter, Snapchat, and YouTube. SMS users’ communication is also known as electronic word of mouth (eWOM). Netnography methodology is still in its initial stages, as it depends on manual extraction, reading and classification of SMS users’ text. Data mining, on the other hand, comes from a computer and physical sciences background, so it is much harder to extract meaning from unstructured qualitative data. In addition, recent data mining software does not support Arabic text, especially the local slang in Saudi Arabia. Therefore, collaborations between social and computer scientists, such as the ‘netnographic study’ and data mining, will enhance the efficiency of this study’s methodology, leading to a comprehensive research outcome. The eWOM communications between individuals on SMSs can promote a sense that sharing their preferences and experiences regarding politics and social government regulations is a part of their daily life, highlighting the importance of SMSs in promoting participation in political and social matters. Therefore, public interactions on SMSs are important tools for comprehending people’s intentions regarding new government regulations in the country. This study aims to answer the question, "What factors influenced Saudi Arabians’ intentions regarding Saudi females’ car-driving in 2018?". The study utilized a qualitative method known as the netnographic study. The study used RStudio to collect and analyse 27,000 Saudi users’ comments from 25 May until 25 June 2018. The study developed a data collection model that supports importing and analysing Arabic text in the local slang. The data collected in this study were clustered based on the type of social network, gender and the study’s main factors. Social network analysis was employed to collect comments from SMS accounts owned by government organizations, celebrities, vloggers, social activists and news outlets. The comments were collected from both male and female SMS users. The sentiment analysis shows that the total number of positive comments on Saudi female car driving was higher than that of negative comments. The data provided the most important factors that influenced Saudi Arabians’ intentions regarding Saudi females’ car driving, including culture and environment, freedom of choice, equal opportunities, and security and safety. The most interesting finding indicated that women driving would play a role in increasing individual freedom of choice. Saudi women will be able to drive cars to fulfil their daily life and family needs without being stressed due to the lack of transportation. The study outcome will help the Saudi government improve women’s quality of life by increasing their ability to find more jobs and studies, increasing income through decreased spending on transport such as taxis, and providing more freedom of choice in women’s daily life needs. The study underlines the importance of using marketing research to measure public opinion on new government regulations in the country. 
The study also explains its limitations and offers suggestions for future research.Keywords: netnographic study, data mining, social media, Saudi Arabia, female driving
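The clustering of comments by platform, gender and study factor was carried out in R; purely as an illustration of the idea, here is a small Python sketch with placeholder records and a toy sentiment lexicon standing in for the Arabic-slang handling the authors built.

```python
# Illustrative only: the study used R. Records, lexicon, and factor labels are
# placeholders for the kind of platform/gender/factor sentiment tally described.
from collections import defaultdict

POSITIVE = {"support", "freedom", "opportunity"}   # toy stand-ins for Arabic slang
NEGATIVE = {"against", "unsafe"}

comments = [
    {"platform": "Twitter", "gender": "female", "factor": "freedom of choice",
     "text": "full support for this freedom"},
    {"platform": "YouTube", "gender": "male", "factor": "security and safety",
     "text": "roads feel unsafe"},
]

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tally = defaultdict(int)
for c in comments:
    tally[(c["platform"], c["gender"], c["factor"], sentiment(c["text"]))] += 1

for key, n in tally.items():
    print(key, n)
```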
Procedia PDF Downloads 15325 Urban Ecosystem Health and Urban Agriculture
Authors: Mahbuba Kaneez Hasna
Abstract:
Introductory statement outlining the background: Little has been written about the political ecology of urban gardening, such as the network of knowledge generation, technologies of food production and distribution, food consumption practices, and the regulation of agricultural activities. For urban food gardens to be sustained as a long-term food security enterprise, we will need to better understand the anthropological, ecological, political, and institutional factors influencing their development, management, and ongoing viability. Significance of the study: Dhaka is one of the fastest growing cities. There are currently no studies for Bangladesh on how urban slum dwellers cope with the changing urban environment in the city, how they overcome challenges, and how they cope with the urban ecological cycle of food and vegetable production. It is also essential to understand the importance of their access to confined spaces in the slums where they apply their indigenous knowledge. These relationships in nature are important factors in community and conservation ecology. Until now, there has been no significant published academic work on the relationships between urban and environmental anthropology, urban planning, geography, ecology, and social anthropology with a focus on urban agriculture and how this contributes to the moral economies, indigenous knowledge, and government policies that improve the lives and livelihoods of slum dwellers surrounding parks and open spaces in Dhaka, Bangladesh. Methodology: The study applied participant observation, semi-structured questionnaire-based interviews, and focus group discussions to collect social data. Interviews were conducted with the urban agriculture practitioners, slum dwellers who carry out urban agriculture activities. Some of the interviews were conducted with non-government organisations (NGOs) and local and state government officials, using semi-structured interviews. Using these methods, the study developed a clearer understanding of how green space cultivation, local economic self-reliance, and urban gardening are producing distinctive urban ecologies in Dhaka, and of their policy implications for urban sustainability. Major findings of the study: The research provided in-depth knowledge of the challenges that slum dwellers encounter in establishing and maintaining urban gardens, such as the economic development of the city, conflicting political agendas, and environmental constraints in the areas within which gardening activities take place. The research investigated (i) how slum dwellers transfer gardening practices from rural areas to open spaces in the city; (ii) how men’s and women’s ethno-botanical knowledge contributes to urban biodiversity; and (iii) how slum dwellers navigate complex constellations of land use policy, competing political agendas, and conflicting land and water tenures to meet the livelihood functions provided by their gardens. Concluding statement: A lack of infrastructure facilities such as water supply and sanitation, micro-drains and waste disposal areas, and poor access to basic health care services increase the misery of people in the slum areas. 
A lack of environmental health awareness information for farmers, concerning risks such as the use of chemical pesticides in gardens, grazing animals in contaminated fields, or cropping and planting trees or vegetables in contaminated dumping grounds, can pose high health risks to humans and their environment.Keywords: gender, urban agriculture, ecosystem health, urban slum systems
Procedia PDF Downloads 8224 Wellbeing Effects from Family Literacy Education: An Ecological Study
Authors: Jane Furness, Neville Robertson, Judy Hunter, Darrin Hodgetts, Linda Nikora
Abstract:
Background and significance: This paper describes the first use of community psychology theories to investigate family-focused literacy education programmes, enabling a wide range of wellbeing effects of such programmes to be identified for the first time. Evaluations of family literacy programmes usually focus on the economic advantage of gains in literacy skills. By identifying other effects on aspects of participants’ lives that are important to them, and how they occur, understanding of how such programmes contribute to wellbeing and social justice is augmented. Drawn from community psychology, an ecological systems-based, culturally adaptive framework for personal, relational and collective wellbeing illuminated outcomes of family literacy programmes that enhanced wellbeing and quality of life for adult participants, their families and their communities. All programmes, irrespective of their institutional location, could be similarly scrutinized. Methodology: The study traced the experiences of nineteen adult participants in four family-focused literacy programmes located in geographically and culturally different communities throughout New Zealand. A critical social constructionist paradigm framed this interpretive study. Participants were mainly Māori, Pacific Islands, or European New Zealanders. Seventy-nine repeated conversational interviews were conducted over 18 months with the adult participants, programme staff and people who knew the participants well. Twelve participant observations of programme sessions were conducted, and programme documentation was reviewed. Latent theoretical thematic analysis of the data drew on broad perspectives of literacy and on ecological systems theory, network theory and holistic, integrative theories of wellbeing. Steps taken to co-construct meaning with participants included the repeated conversational interviews and participant checking of interview transcripts and section drafts. The researcher (this paper’s first author) followed methodological guidelines developed by indigenous peoples for non-indigenous researchers. Findings: The study found that the four family literacy programmes, differing in structure, content, aims and foci, nevertheless shared common principles and practices that reflected programme staff’s overarching concern for people’s wellbeing along with their desire to enhance literacy abilities. A human rights- and strengths-based view of people, grounded in respect for diverse culturally based values and practices, was evident in staff’s expression of their values and beliefs and in their practices. This enacted stance influenced the outcomes of programme participation for the adult participants, their families and their communities. Alongside the literacy and learning gains identified, participants experienced positive social and relational events and changes, affirmation and strengthening of their culturally based values, and affirmation and building of positive identity. Systemically, the interconnectedness of programme effects with participants’ personal histories and circumstances, the flow-on of effects to other aspects of people’s lives and to their families and communities, and the personalised character of the pathways people journeyed towards enhanced wellbeing were identified. 
Concluding statement: This paper demonstrates the critical contribution of community psychology to a fuller understanding of family-focused educational programme outcomes than has been previously attainable, the meaning of these broader outcomes to people in their lives, and their role in wellbeing and social justice.Keywords: community psychology, ecological theory, family literacy education, flow on effects, holistic wellbeing
Procedia PDF Downloads 25423 Optimal Pressure Control and Burst Detection for Sustainable Water Management
Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana
Abstract:
Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing Pressure Reducing Valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark water distribution system (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. To enhance precision, burst event localization is achieved through meticulous analysis of time differentials in the arrival of negative pressure waveforms across distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm’s success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and their concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring
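To illustrate the detection and localization ideas above, here is a minimal Python sketch of a one-sided CUSUM test on a pressure trace and of the standard arrival-time-difference localization formula; the drift, threshold, wave speed and signal values are illustrative, not the tuned parameters from the Kapra study.

```python
# Hedged sketch of the CUSUM idea on a pressure signal; drift and threshold
# values are illustrative, not the study's tuned parameters.
def cusum_burst(pressures, target, drift=0.5, threshold=5.0):
    """Return the sample index where the downward CUSUM crosses the threshold."""
    s_low = 0.0
    for k, p in enumerate(pressures):
        s_low = min(0.0, s_low + (p - target) + drift)  # accumulate pressure drops
        if s_low < -threshold:
            return k  # burst detected at sample k
    return None

# Localization from arrival-time difference of the negative pressure wave at two
# sensors bounding a pipe of length L (m), with wave speed a (m/s):
#   distance from sensor 1 = (L + a * (t1 - t2)) / 2
def locate_burst(L, a, t1, t2):
    return (L + a * (t1 - t2)) / 2.0

signal = [50.0, 50.1, 49.9, 50.0, 43.0, 42.5, 42.0]   # synthetic pressure trace
print("burst at sample:", cusum_burst(signal, target=50.0))
print("burst location (m):", locate_burst(L=300.0, a=400.0, t1=0.20, t2=0.35))
```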
Procedia PDF Downloads 8722 MusicTherapy for Actors: An Exploratory Study Applied to Students from University Theatre Faculty
Authors: Adriana De Serio, Adrian Korek
Abstract:
Aims: This experiential research work presents a Group-MusicTherapy-Theatre-Plan (MusThePlan) that the authors have carried out to support actors. MusicTherapy gives rise to individual psychophysical feedback and influences the emotional centres of the brain and the subconscious. Therefore, the authors underline the effectiveness of the preventive, educational, and training goals of the MusThePlan in leading theatre students and actors to deal with anxiety, to overcome psychophysical weaknesses, shyness and emotional stress in stage performances, to increase flexibility and awareness of one’s identity and resources for positive self-development and psychophysical health, and to develop and strengthen social bonds, increasing a network of subjects working for social inclusion and the reduction of stigma. Materials-Methods: Thirty students from the University Theatre Faculty participated in weekly MusicTherapy sessions for two months; each session lasted 120 minutes. MusThePlan: Each session began with a free group rhythmic-sonorous-musical production using body percussion, voice and singing, and instruments to stimulate communication. Then, a synchronized, structured bodily-rhythmic-sonorous-musical production also involved acting, dances, movements of the hands and arms, hearing and other sensorial perceptions, and speech, to balance motor skills and muscular tone. Each student could be the director-leader of the group, indicating a story to inspire the group’s musical production. The third step involved the students in rhythmic speech and singing drills and in vocal exercises focusing on musical pitch to improve intonation, and on diction to improve articulation and increase intelligibility. At the end of each MusicTherapy session and of the two months, the MusicTherapy Assessment Document was drawn up by analysis of the observation protocols and two indices devised by the authors: the Patient-Environment-Music-Index (time t0 - tn), to estimate the evolution of behaviour, and the Somatic Pattern Index, to monitor the subject’s eye, mouth and limb motility and perspiration before, during and after the MusicTherapy sessions. Results: After the first month, the students (non-musicians) had learned to play percussion instruments and formed a musical band that played classical and modern music on percussion instruments with the musictherapist/pianist/conductor in a public concert. At the end of the second month, the students performed a public musical theatre show, acting, dancing, singing, and playing percussion instruments. The students highlighted the importance of the playful aspects of the group musical production in achieving emotional contact and harmony within the group. The students said they had improved kinetic and vocal skills and all the skills useful for acting, and had nourished their bodily and emotional balance. Conclusions: The MusThePlan makes use of specific MusicTherapy methodological models, techniques, and strategies useful for actors. The MusThePlan can destroy the individual "mask" and can be useful when verbal language is unable to undermine the defence mechanisms of the subject. The MusThePlan improves the actor’s psychophysical activation, motivation, gratification, knowledge of one’s own possibilities, and quality of life. Therefore, the MusThePlan could be useful for carrying out targeted interventions for actors with characteristics of repeatability, objectivity, and predictability of results. 
Furthermore, it would be useful to plan a university course/master’s in “MusicTherapy for the Theatre”.Keywords: musictherapy, sonorous-musical energy, quality of life, theatre
Procedia PDF Downloads 7621 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exacerbated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a ‘Bottom Stop’ for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
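As a rough sketch of the Decision Support System Layer's training regime, the following Python snippet trains a small neural network with 5-fold cross-validation on synthetic data; the six predictors and labels are invented stand-ins, and the LIME/SHAP agents are only referenced in comments rather than implemented.

```python
# Illustrative sketch of k-fold cross-validation for the scoring model; the
# features and labels are synthetic stand-ins, not real credit data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # six assumed predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic repay/default label

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
scores = cross_val_score(model, X, y, cv=5)      # 5-fold cross-validation
print("fold accuracies:", scores.round(3))

# In the proposed framework, a SHAP agent would then attribute each prediction
# to individual features (e.g., via shap.Explainer), and a LIME agent would
# offer local, counterfactual-style explanations for individual consumers.
```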
Procedia PDF Downloads 3420 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
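A minimal sketch of the predictive-modelling component alone, assuming historical sensor features and a throughput target; the feature names and data below are invented, and the full system described above additionally fuses vision and text modalities.

```python
# Sketch of throughput forecasting from historical process features; all
# feature names and values are invented placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 800
X = np.column_stack([
    rng.normal(100, 10, n),   # assumed ore feed rate (t/h)
    rng.normal(75, 5, n),     # assumed mill power draw (%)
    rng.normal(0.6, 0.1, n),  # assumed ore grade (%)
])
throughput = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 5, n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, throughput, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```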
Procedia PDF Downloads 6819 4-Channel CWDM Optical Transceiver Applying Silicon Photonics Ge-Photodiode and MZ-Modulator
Authors: Do-Won Kim, Andy Eu Jin Lim, Raja Muthusamy Kumarasamy, Vishal Vinayak, Jacky Wang Yu-Shun, Jason Liow Tsung Yang, Patrick Lo Guo Qiang
Abstract:
In this study, we demonstrate a 4-channel coarse wavelength division multiplexing (CWDM) optical transceiver based on silicon photonics integrated circuits (PICs) of a waveguide Ge-photodiode (Ge-PD) and a Mach-Zehnder (MZ) modulator. The 4-channel arrayed PICs of the Ge-PD and MZ-modulator are verified to operate at 25 Gbps/ch, achieving 4x25 Gbps of total data rate. Four bare dies of single-channel commercial electronic ICs (EICs), a trans-impedance amplifier (TIA) for the Ge-PD and a driver IC for the MZ-modulator, are packaged with the PIC on a printed circuit board (PCB) in a chip-on-board (COB) manner. Each single-channel EIC is electrically connected to one channel of the 4-channel PICs by wire bonds to trace. The PICs have a 4-channel multiplexer for the MZ-modulator and a 4-channel demultiplexer for the Ge-PD. The 4-channel multiplexer/demultiplexer have echelle gratings for the 4 CWDM optical signals, whose center wavelengths are 1511, 1531, 1553, and 1573 nm. The insertion loss is around 4 dB with over 15 dB of extinction ratio. The dimensions of the 4-channel Ge-PD are 3.6x1.4x0.3 mm, and its responsivity is 1 A/W with a dark current of less than 20 nA. Its measured 3 dB bandwidth is around 20 GHz. The dimensions of the 4-channel MZ-modulator are 3.6x4.8x0.3 mm, and its 3 dB bandwidth is around 11 GHz at -2 V of reverse biasing voltage. It has a VπL of 2.4 V·cm, given 6 V for a π shift in the 4 mm-long modulator. A 5x5 um inverse-tapered mode size converter with less than 2 dB of coupling loss is used for coupling the lensed fiber, which has a 5 um mode field diameter. The PCB for COB packaging and signal transmission is designed to have 6 layers in a hybrid layer structure. 0.25 mm-thick Rogers Duroid RT5880 is used as the first core dielectric layer for high-speed performance over 25 Gbps. It has 0.017 mm-thick copper layers, a dielectric constant of 2.2, and a dissipation factor of 0.0009 at 10 GHz. The dimensions of both single-ended and differential microstrip transmission lines are calculated using the full-wave electromagnetic (EM) field simulator HFSS, which is the most widely used in the RF industry. The lines showed a 3 dB bandwidth at around 15 GHz in S-parameter measurements using a network analyzer. The wire bond length for the transmission line and ground connection from the EIC is kept under 300 µm to minimize parasitic effects on the system. Single-layered capacitors (SLCs) of 100 pF and 1000 pF are connected as close as possible to the EICs to stabilize the DC biasing voltage by decoupling. The signal transmission performance is under measurement at 25 Gbps per channel, targeting 100 Gbps by 4 ch x 25 Gbps. This work can be applied to active optical cables (AOC) and quad small form-factor pluggable (QSFP) modules for high-speed optical interconnections. Demand is quite large in data centers targeting 100 Gbps, 400 Gbps, and 1 Tbps. As the demand for high-speed AOC and QSFP for application to intra/inter data centers increases, this silicon photonics based high-speed 4-channel CWDM scheme can have advantages not only in data throughput but also in cost effectiveness, since it reduces fiber cost dramatically through WDM.Keywords: active optical cable(AOC), 4-channel coarse wavelength division multiplexing (CWDM), communication system, data center, ge-photodiode, Mach Zehnder (MZ) modulator, optical interconnections, optical transceiver, photonics integrated circuits (PIC), quad small form-factor pluggable (QSFP), silicon photonics
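The headline figures quoted above can be sanity-checked with a few lines of Python using only the abstract's own numbers: the modulator's VπL and the aggregate data rate.

```python
# Quick consistency check of figures quoted above, using no assumptions beyond
# the abstract's own numbers.
v_pi = 6.0          # volts needed for a pi phase shift
length_cm = 0.4     # 4 mm modulator length expressed in cm
print("Vpi*L =", v_pi * length_cm, "V*cm")          # 2.4 V*cm, as stated

channels, rate_gbps = 4, 25
print("aggregate =", channels * rate_gbps, "Gbps")  # 100 Gbps total
```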
Procedia PDF Downloads 418