Search results for: real
739 Characterization of Particle Charge from Aerosol Generation Process: Impact on Infrared Signatures and Material Reactivity
Authors: Erin M. Durke, Monica L. McEntee, Meilu He, Suresh Dhaniyala
Abstract:
Aerosols provide some of the most important and abundant surfaces in the atmosphere. They can influence weather, the absorption and reflection of light, and the reactivity of atmospheric constituents. A notable feature of aerosol particles is the presence of a surface charge, a characteristic imparted by the aerosolization process. The existence of charge can complicate the interrogation of aerosol particles, so many researchers remove or neutralize the charge before characterization. However, the charge is present in real-world samples and likely affects the physical and chemical properties of an aerosolized material. In our studies, we aerosolized different materials to characterize the charge imparted by the aerosolization process and to determine its impact on the aerosolized materials’ properties. The metal oxides TiO₂ and SiO₂ were aerosolized expulsively and then characterized, using several different techniques, in an effort to determine the surface charge imparted on the particles during aerosolization. Particle charge distribution measurements were conducted with a custom scanning mobility particle sizer. The charge distribution measurements indicated that expulsive generation of 0.2 µm SiO₂ particles produced aerosols carrying upwards of 30 charges per particle. Determination of the degree of surface charging led to the use of non-traditional techniques to explore the impact of the additional surface charge on the overall reactivity of the metal oxides, specifically TiO₂. TiO₂ was aerosolized, again expulsively, onto a gold-coated tungsten mesh, which was then evaluated with transmission infrared spectroscopy in an ultra-high vacuum environment. The TiO₂ aerosols were exposed to O₂, H₂, and CO in turn.
Exposure to O₂ resulted in a decrease in the overall baseline of the aerosol spectrum, suggesting that O₂ removed some of the surface charge imparted during aerosolization. Upon exposure to H₂, there was no observable rise in the baseline of the IR spectrum; such a rise is typically seen for TiO₂ as electrons populate the shallow trapped states and are subsequently promoted into the conduction band. This result suggests that the additional charge imparted via aerosolization already fills the trapped states, so no rise is seen upon exposure to H₂. Dosing the TiO₂ aerosols with CO showed no adsorption of CO on the surface, even at lower temperatures (~100 K), indicating that the additional charge on the aerosol surface prevents CO molecules from adsorbing to the TiO₂ surface. Together, these exposures suggest that the additional charge imparted via aerosolization alters the interaction with each probe gas.
Keywords: aerosols, charge, reactivity, infrared
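For context, the ~30 elementary charges reported for 0.2 µm particles can be compared against the equilibrium (Boltzmann) charge distribution that a neutralized aerosol of the same size would carry. The sketch below is not the authors' SMPS inversion (which is not described in the abstract); it only evaluates the standard Boltzmann distribution, with the temperature and diameter as illustrative assumptions:

```python
import math

def boltzmann_charge_fraction(d_m, T=298.0, n_max=100):
    """Equilibrium (Boltzmann) charge distribution for a particle of diameter d_m (meters)."""
    KE = 8.988e9        # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
    e = 1.602e-19       # elementary charge, C
    kB = 1.381e-23      # Boltzmann constant, J/K
    a = KE * e**2 / (d_m * kB * T)
    # Unnormalized weight for carrying n elementary charges
    weights = {n: math.exp(-a * n * n) for n in range(-n_max, n_max + 1)}
    Z = sum(weights.values())
    return {n: w / Z for n, w in weights.items()}

dist = boltzmann_charge_fraction(0.2e-6)   # 0.2 um SiO2 particle
p_30plus = sum(p for n, p in dist.items() if abs(n) >= 30)
print(f"equilibrium P(|n| >= 30): {p_30plus:.2e}")
```

At equilibrium the probability of 30 or more charges on a 0.2 µm particle is vanishingly small, which underlines how far the expulsively generated aerosols are from a neutralized state.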
Procedia PDF Downloads 123
738 Layouting Phase II of New Priok Using Adaptive Port Planning Frameworks
Authors: Mustarakh Gelfi, Tiedo Vellinga, Poonam Taneja, Delon Hamonangan
Abstract:
The development of New Priok/Kalibaru as an expansion terminal of the old port is being carried out by IPC (Indonesia Port Corporation) together with its subsidiary, the port developer PT Pengembangan Pelabuhan Indonesia. Of the two phases proposed in the master plan, Phase I has taken shape, and Container Terminal I has been in operation since 2016. In principle, the development was planned in two phases: Phase I (2013-2018), consisting of 3 container terminals and 2 product terminals, and Phase II (2018-2023), consisting of 4 container terminals. In fact, the master plan has had to be changed due to major uncertainties that escaped prediction. This study focuses on the design scenario of Phase II (2035 onwards) to deal with future uncertainty. The outcome is a robust design of Phase II of the Kalibaru Terminal that takes future changes into account. Flexibility has to be a major goal in a large infrastructure project like New Priok in order to manage future uncertainty. The phasing of the project needs to be adapted and revisited frequently before it becomes irrelevant to future challenges. One framework developed by experts in port planning is Adaptive Port Planning (APP) with scenario-based planning. The idea behind the APP framework is the adaptation that might be needed at any moment as an answer to a challenge. It is a continuous procedure that aims to increase the lifespan of waterborne transport infrastructure by increasing flexibility in the planning, contracting and design phases. Other methods used in this study are brainstorming with the port authority, desk study, interviews and a site visit to the actual project. The result of the study is expected to give the port authority of Tanjung Priok insight into the future outlook and how it will impact the design of the port, along with guidelines for designing in an uncertain environment.
Flexibility solutions can be divided into: 1 - Physical solutions, covering all items related to hard infrastructure in the project; common measures of this type are modularity, standardization, multi-functionality, shorter or longer design lifetimes, reusability, etc. 2 - Non-physical solutions, usually related to the planning processes, decision making and management of the project. To conclude, the APP framework seems robust enough to deal with the problem of designing Phase II of the New Priok project over such a long period.
Keywords: Indonesia port, port's design, port planning, scenario-based planning
Procedia PDF Downloads 240
737 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM and whose sampling is therefore superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the point cloud product (PCpro). The present work describes the data capture on the ground and the post-processing tasks up to obtaining the point cloud that will be used as a reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50x50 m patch obtained by registering scans from 4 different stations. The area studied was the Spanish region of Navarra, which covers an area of 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle present when the scanner is in the direct position is avoided.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
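The cloud-to-cloud comparison mentioned above can be sketched as a nearest-neighbour distance computation from each product point to the reference cloud. The point clouds below are synthetic stand-ins (not the Navarra data), and the brute-force search is only illustrative; production code would use a KD-tree for millions of points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 'reference' patch (PCref) and a noisy 'product' cloud (PCpro)
pc_ref = rng.uniform(0, 50, size=(2000, 3))            # points inside a 50 m extent
pc_pro = pc_ref[:500] + rng.normal(0, 0.05, (500, 3))  # ~5 cm noise per axis

def cloud_to_cloud_distances(cloud_a, cloud_b):
    """For each point in cloud_a, distance to its nearest neighbour in cloud_b."""
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]   # pairwise difference vectors
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

d = cloud_to_cloud_distances(pc_pro, pc_ref)
print(f"mean C2C distance: {d.mean():.3f} m, RMS: {np.sqrt((d ** 2).mean()):.3f} m")
```

Summary statistics of these distances (mean, RMS, percentiles) are the kind of surface-based quality measure the project uses in place of punctual vertical checks.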
Procedia PDF Downloads 100
736 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment
Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska
Abstract:
The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages with various work trades. In 2016, the project received financing from the National Center for Research and Development under research grant PBS3/A2/19/2015. The data, calculations and analyses discussed in this paper result from the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project is the assessment of workers' visual concentration on sight zones, as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses are a tool that allows analyzing, in real time, where eyesight is concentrated, and consequently building a map of a worker's eyesight concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers took part in the experiment, including monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also be useful in more advanced analyses. Using this technology, we verified not only the main focus of workers' eyes during work on or next to scaffolding, but also which changes in the surrounding environment during a shift influenced their concentration. The study showed that workers' eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffolding were not better recognized by workers in their direct workplaces.
We noticed that the red curbs were the only well-recognized part on a very few scaffolds. Surprisingly, across a number of samples we did not record any significant concentration of gaze on those curbs. Conclusion: We found the eye-tracking method useful for the construction of the SURAM model in the risk perception and worker behavior sub-modules. We also found that the worker's initial stress and the visual work conditions seem to be more predictive for assessing a developing risky situation or an accident than other parameters relating to the work environment.
Keywords: accident assessment model, eye tracking, occupational safety, scaffolding
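The work-related gaze share reported above (45.75% of shift time) is the kind of figure produced by an area-of-interest (AOI) dwell analysis: counting the fraction of gaze samples that fall inside predefined work-related zones. A minimal sketch with made-up zones and gaze samples; the actual SMI analysis pipeline is not described in the abstract:

```python
# Hypothetical AOI (area-of-interest) dwell analysis. Zone rectangles and gaze
# samples are invented for illustration, in scene-camera pixel coordinates.
work_zones = [  # (x_min, y_min, x_max, y_max)
    (100, 100, 300, 300),
    (350, 120, 500, 280),
    (520, 150, 640, 320),
]

def in_any_zone(x, y, zones):
    """True if the gaze sample (x, y) falls inside any work-related zone."""
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in zones)

gaze_samples = [(150, 200), (400, 200), (600, 200), (50, 50), (700, 400)]
on_task = sum(in_any_zone(x, y, work_zones) for x, y in gaze_samples)
share = on_task / len(gaze_samples)
print(f"work-related gaze share: {share:.1%}")
```

With real data the samples would be the full shift recording, and the share would be computed per worker before aggregating.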
Procedia PDF Downloads 199
735 Nanoliposomes in Photothermal Therapy: Advancements and Applications
Authors: Mehrnaz Mostafavi
Abstract:
Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and their capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, like gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their efficacy as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes. These advancements include strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems, integrating imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments.
In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.
Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy
Procedia PDF Downloads 112
734 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia
Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis
Abstract:
The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analysis of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use for a given setting. For example, the human papilloma virus (HPV) DNA, mRNA, and cytology can be used for the cervical intraepithelial neoplasia grade 2+ (CIN2+) diagnosis. But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set for the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles that describe a DTA-NMA method for three or more index tests. Since the joint classification of the results from one index against the results of another index test amongst those with the target condition and amongst those without the target condition are rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion. Relevant unpublished material was also included. Ten relevant studies were finally included to evaluate their methodology. DTA-NMA methods that have been presented in the literature together with their advantages and disadvantages are described. 
In addition, using 37 studies for cervical cancer obtained from a published Cochrane review as a case study, an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+ is presented. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project "Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests" (MIS 5047640).
Keywords: colposcopy, diagnostic test, HPV, network meta-analysis
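The DTA-NMA methods considered here require only the 2x2 table of each index test against the reference standard. The per-study sensitivity and specificity that feed into any such synthesis are computed as follows; the counts are made up for illustration and are not from the Cochrane review:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of an index test vs. the reference standard."""
    sensitivity = tp / (tp + fn)   # true positives among those with CIN2+
    specificity = tn / (tn + fp)   # true negatives among those without CIN2+
    return sensitivity, specificity

# Illustrative (made-up) 2x2 table for one index test against a CIN2+ reference
sens, spec = sens_spec(tp=85, fp=20, fn=15, tn=180)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A DTA-NMA model would then pool these paired quantities (typically on the logit scale) across studies and tests while respecting the network structure.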
Procedia PDF Downloads 139
733 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor for success. Many companies invest huge amounts in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, the industry, and mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve due to such positive changes: employees become more skilled and experienced through such innovations and inventions. To study how intangible capital can be increased, the author reviewed several books and case studies; different charts and tables are also referred to. Case studies are particularly important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and give solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital: the research push strategy (technology-push approach), the market pull strategy (market-pull approach), and the open innovation strategy (open-innovation approach). Research push strategy: In this strategy, research is undertaken and innovation is achieved on its own. After the invention, the inventing company protects it and finds buyers for it; in this way, the invention is pushed into the market. Research and development are undertaken first, and the outcome of this research is then commercialized. Market pull strategy: In this strategy, commercial opportunities are identified first, and research is concentrated in that particular area. Research is undertaken to solve a particular problem, which makes this type of invention easier to commercialize, because the problem is identified first and research and development activities are carried out in that direction.
Open innovation strategy: In this type of research, more than one company enters into a research agreement, and the benefits of the outcome are shared by the participating companies. Internal and external ideas and technologies are involved; these ideas are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of employees of both companies can increase, and the benefit of commercializing the invention is also shared by both companies. Conclusion: In modern days, not only tangible assets but also intangible assets can be commercialized. The benefits of such an invention can be shared by more than one company, competition can become more meaningful, and pay packages of employees can improve. Adopting such strategies to benefit employees, competitors, and stakeholders is a need of the time.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
732 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modeling was applied using machine learning algorithms to analyze motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism.
These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
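Gait descriptors of the kind reported above (cycle time, stride length, variability) can be derived from per-stride event times. A minimal sketch with fabricated data that mirrors the reported direction of effects (25% longer cycle, 15% shorter stride in the ASD group); it is not the study's actual pipeline:

```python
import numpy as np

def gait_metrics(heel_strike_times, stride_lengths):
    """Mean gait cycle time (s), cycle-time variability (CV), and mean stride length (m)."""
    cycles = np.diff(heel_strike_times)            # time between consecutive heel strikes
    cv = cycles.std(ddof=1) / cycles.mean()        # coefficient of variation
    return cycles.mean(), cv, np.mean(stride_lengths)

# Fabricated heel-strike times and stride lengths for two groups
t_ctrl = np.arange(0, 11) * 1.00                   # 1.00 s cycles, control group
t_asd = np.arange(0, 11) * 1.25                    # 25% longer cycles, ASD group
m_ctrl, _, s_ctrl = gait_metrics(t_ctrl, [1.30] * 10)
m_asd, _, s_asd = gait_metrics(t_asd, [1.105] * 10)
print(f"cycle time: ASD {m_asd:.2f}s vs control {m_ctrl:.2f}s (+{m_asd / m_ctrl - 1:.0%})")
```

In the study these descriptors would be computed from the wearable sensors' event detections before being fed to the machine-learning models.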
Procedia PDF Downloads 23
731 Visitor Management in the National Parks: Recreational Carrying Capacity Assessment of Çıralı Coast, Turkey
Authors: Tendü H. Göktuğ, Gönül T. İçemer, Bülent Deniz
Abstract:
National parks, which are rich in natural and cultural resource values and are protected in the context of sustainable development, are among the most important recreation areas, with demand increasing with each passing day. Increasing or unplanned recreational use negatively affects both the resource values and visitor satisfaction. The intent of national park management is to protect the natural and cultural resource values and, at the same time, to provide visitors with a quality recreational experience. In this context, current studies to improve tourism and recreation planning and visitor management have focused on recreational carrying capacity analysis. The aim of this study is to analyze the recreational carrying capacity of the Çıralı Coast in the Bey Mountains Coastal National Park, to compare the analysis results with the current pattern of use, and to develop alternative management strategies. In the first phase of the study, the annual and daily visitation, the geographic, bio-physical and managerial characteristics of the park, the types of recreational use, and the recreational areas were analyzed. In addition, ecological observations were carried out in order to determine recreation-based pressures on the ecosystems. On-site questionnaires were administered to a sample of 284 respondents in August 2015 and August 2016 to collect data concerning demographics and visit characteristics. In the second phase of the study, the coastal area was separated into four different usage zones, and the methodology proposed by Cifuentes (1992) was used for the capacity analyses. This method enables the calculation of physical, real and effective carrying capacities by combining environmental, ecological, climatic and managerial parameters in a formulation. The three estimated levels of carrying capacity were compared to the current numbers of the national park's visitors.
The study determined that current recreational use in the north of the beach has caused ecological pressures, and that visitor numbers in the south of the beach far exceed the estimated capacity. Based on these results, management strategies were defined and appropriate management tools were developed in accordance with these strategies. The authors are grateful for the financial support of this project by The Scientific and Technological Research Council of Turkey (No: 114O344).
Keywords: Çıralı Coast, national parks, recreational carrying capacity, visitor management
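The Cifuentes (1992) method referenced above chains three capacity levels: physical carrying capacity (PCC) from the usable area, the space required per visitor, and a daily rotation factor; real carrying capacity (RCC) after multiplicative correction factors for limiting conditions; and effective carrying capacity (ECC) after a management-capacity factor. A sketch of that chain with purely illustrative numbers, not the study's actual inputs or correction factors:

```python
def physical_cc(area_m2, area_per_visitor_m2, open_hours, visit_hours):
    """PCC = (usable area / area per visitor) * daily rotation factor."""
    rotation_factor = open_hours / visit_hours
    return (area_m2 / area_per_visitor_m2) * rotation_factor

def real_cc(pcc, correction_factors):
    """RCC: apply each limiting correction factor Cf (fraction in [0, 1]) multiplicatively."""
    rcc = pcc
    for cf in correction_factors:
        rcc *= (1.0 - cf)
    return rcc

def effective_cc(rcc, management_capacity):
    """ECC = RCC scaled by the fraction of ideal management capacity available."""
    return rcc * management_capacity

# Illustrative numbers for one hypothetical beach zone
pcc = physical_cc(area_m2=10_000, area_per_visitor_m2=5, open_hours=12, visit_hours=4)
rcc = real_cc(pcc, correction_factors=[0.30, 0.10])   # e.g. rainfall, erosion closures
ecc = effective_cc(rcc, management_capacity=0.6)
print(f"PCC={pcc:.0f}, RCC={rcc:.0f}, ECC={ecc:.0f} visitors/day")
```

Comparing observed visitor counts per zone against the three computed levels is what allows a zone to be flagged as over capacity, as reported for the south of the beach.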
Procedia PDF Downloads 274
730 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Building in Pristina, Kosovo
Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi
Abstract:
The territory of Kosovo lies in one of the most seismic-prone regions in Europe. Earthquakes are therefore not rare in Kosovo, and when they have occurred, the consequences have been rather destructive. The importance of assessing the seismic resistance of existing masonry structures has drawn strong and growing interest in recent years. Engineering assessments, including vulnerability, building loss and risk assessment, are also of particular interest, because this rapidly developing field relates to the great impact of earthquakes on socioeconomic life in seismic-prone areas such as Kosovo and Pristina. Such work for the city of Pristina may serve as a real basis for possible interventions in historic buildings such as museums, mosques and old residential buildings, in order to adequately strengthen and/or repair them and thereby reduce the seismic risk to within acceptable limits. The procedures for the vulnerability assessment of building structures have concentrated on the structural system, its capacity, the shape of the layout, and response parameters. These parameters provide the expected performance of very important existing building structures in terms of vulnerability and overall behavior during earthquake excitations. The structural systems of existing historical buildings in Pristina, Kosovo, are predominantly unreinforced brick or stone masonry with very high risk potential from the earthquakes expected in the region. Therefore, statistical analysis based on the observed damage, deformations, cracks, deflections and critical building elements provides more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary evaluation methodology for assessing the seismic vulnerability of the respective structures. One of the main objectives is also to identify the buildings that are highly vulnerable to damage caused by inadequate seismic performance.
Hence, the damage scores obtained from the derived vulnerability functions will be used to categorize the evaluated buildings as “stable”, “intermediate”, and “unstable”. The vulnerability functions are generated based on the basic damage-inducing parameters, namely the number of stories (S), lateral stiffness (LS), the capacity curve of the total building structure (CCBS), interstory drift (IS) and overhang ratio (OR).
Keywords: vulnerability, ductility, seismic microzone, energy efficiency
Procedia PDF Downloads 407
729 MicroRNA Drivers of Resistance to Androgen Deprivation Therapy in Prostate Cancer
Authors: Philippa Saunders, Claire Fletcher
Abstract:
INTRODUCTION: Prostate cancer is the most prevalent malignancy affecting Western males. It is initially an androgen-dependent disease: androgens bind to the androgen receptor and drive the expression of genes that promote proliferation and evasion of apoptosis. Despite reduced androgen dependence in advanced prostate cancer, androgen receptor signaling remains a key driver of growth. Androgen deprivation therapy (ADT) is, therefore, a first-line treatment approach and works well initially, but resistance inevitably develops. Abiraterone and Enzalutamide are drugs widely used in ADT and are androgen synthesis and androgen receptor signaling inhibitors, respectively. The shortage of other treatment options means acquired resistance to these drugs is a major clinical problem. MicroRNAs (miRs) are important mediators of post-transcriptional gene regulation and show altered expression in cancer. Several have been linked to the development of resistance to ADT. Manipulation of such miRs may be a pathway to breakthrough treatments for advanced prostate cancer. This study aimed to validate ADT resistance-implicated miRs and their clinically relevant targets. MATERIAL AND METHOD: Small RNA-sequencing of Abiraterone- and Enzalutamide-resistant C42 prostate cancer cells identified subsets of miRs dysregulated as compared to parental cells. Real-Time Quantitative Reverse Transcription PCR (qRT-PCR) was used to validate altered expression of candidate ADT resistance-implicated miRs 195-5p, 497-5p and 29a-5p in ADT-resistant and -responsive prostate cancer cell lines, patient-derived xenografts (PDXs) and primary prostate cancer explants. RESULTS AND DISCUSSION: This study suggests a possible role for miR-497-5p in the development of ADT resistance in prostate cancer. MiR-497-5p expression was increased in ADT-resistant versus ADT-responsive prostate cancer cells. 
Importantly, miR-497-5p expression was also increased in Enzalutamide-treated, castrated (ADT-mimicking) PDXs versus intact PDXs. MiR-195-5p was also elevated in ADT-resistant versus -responsive prostate cancer cells, while there was a drop in miR-29a-5p expression. Candidate clinically relevant targets of miR-497-5p in prostate cancer were identified by mining AGO-PAR-CLIP-seq data sets and may include AVL9 and FZD6. CONCLUSION: In summary, this study identified microRNAs that are implicated in prostate cancer resistance to androgen deprivation therapy and could represent novel therapeutic targets for advanced disease.
Keywords: microRNA, androgen deprivation therapy, Enzalutamide, abiraterone, patient-derived xenograft
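qRT-PCR expression changes such as the miR-497-5p increase reported above are conventionally quantified with the 2^-ΔΔCt method. A minimal sketch with made-up Ct values and a hypothetical reference RNA; the abstract does not state which normalizer the authors used:

```python
def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Fold change by the 2^-ddCt method: target vs. reference RNA, sample vs. control."""
    d_ct_sample = ct_target_s - ct_ref_s       # normalize target to reference in the sample
    d_ct_control = ct_target_c - ct_ref_c      # same normalization in the control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Made-up Ct values: a candidate miR in ADT-resistant vs. parental cells,
# normalized to a hypothetical reference RNA
fold = relative_expression(ct_target_s=24.0, ct_ref_s=18.0,
                           ct_target_c=26.0, ct_ref_c=18.0)
print(f"fold change: {fold:.1f}x")
```

A fold change above 1 corresponds to increased expression in the resistant cells, matching the direction reported for miR-497-5p and miR-195-5p.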
Procedia PDF Downloads 143
728 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting as well as in long pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of cut faces respectively the hole shape) that is formed approaches a so-called asymptotic shape such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there now is a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation especially requires much less in the way of resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders – the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. 
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
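The interactive process-map idea can be sketched in a few lines: evaluate a cheap reduced model over a parameter grid once, then let a slider callback read the precomputed map in real time. The reduced ablation model and parameter names below are purely illustrative assumptions, not the authors' model.

```python
import numpy as np

def reduced_ablation_depth(power, speed):
    """Hypothetical reduced model: ablation depth grows with laser power
    and shrinks with feed speed (illustrative placeholder only)."""
    return power / (speed + 1.0)

# Precompute a process map over a 2-D slice of the parameter space.
powers = np.linspace(100.0, 1000.0, 50)   # W (assumed range)
speeds = np.linspace(1.0, 20.0, 50)       # m/min (assumed range)
P, S = np.meshgrid(powers, speeds)
depth_map = reduced_ablation_depth(P, S)

def on_slider_change(i_power, i_speed):
    """A GUI slider callback only has to index the precomputed map,
    which is what lets the display update in real time."""
    return depth_map[i_speed, i_power]
```

Because the expensive evaluation happens once, up front, the per-slider-move cost is a single array lookup, which is what makes running such a tool directly at the laser machine feasible.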
Procedia PDF Downloads 214
727 Comics as an Intermediary for Media Literacy Education
Authors: Ryan C. Zlomek
Abstract:
The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started developing comics-supported curricula. In the mid-1950s, this type of research was cut short by the work of psychiatrist Frederic Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to establish the exact medium being examined, and the different conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education. 
Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.
Keywords: comics, graphic novels, mass communication, media literacy, metacognition
Procedia PDF Downloads 298
726 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitations of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to the expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart based on the proposed algorithm is also presented. 
This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
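As a simplified, hypothetical illustration of the key idea (not the paper's full formulation): for a fixed center, the smallest radius that encloses exactly a user-specified number of normal observations is just the corresponding order statistic of the distances. The actual method also optimizes the center, which is what requires mixed integer programming.

```python
import numpy as np

def fixed_center_boundary(X, n_inside):
    """Smallest sphere, centered at the data mean, enclosing exactly
    n_inside observations (a simplification: the full MIP also
    optimizes the center, and kernels allow non-spherical shapes)."""
    center = X.mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    radius = np.sort(d)[n_inside - 1]   # n_inside-th smallest distance
    return center, radius, d <= radius

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))           # Phase I "normal" observations
center, radius, inside = fixed_center_boundary(X, 95)
```

Setting `n_inside = 95` out of 100 makes the empirical Type I error rate exactly 5%, which is precisely the control that the integer constraint provides over convex SVDD.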
Procedia PDF Downloads 174
725 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding in building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and collection of safety-hazard images. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: efficient tower crane hazard inspection within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify various tower crane hazards, except for those requiring auditory detection. The limitations of this study primarily involve two aspects. First, during the initial inspection, manual drone piloting is required to mark tower crane points; automated flight inspections then follow and reuse the marked route. Second, images captured by the drone necessitate manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images. 
Subsequent research efforts will focus on AI training and recognition of safety hazard images, as well as the automatic generation of inspection reports and corrective management based on recognition results. The ongoing development in this area is currently in progress, and outcomes will be released at an appropriate time.
Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
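The quoted coverage rates are ratios of identified to total hazards of each severity class; a trivial sketch (the hazard counts below are hypothetical, chosen only to reproduce the 76.3% figure):

```python
def coverage_rate(identified, total):
    """Share of hazards of one severity class that the UAV inspection found."""
    return identified / total

# Hypothetical example: 29 of 38 major hazards fully identified.
major_rate = coverage_rate(29, 38)   # ~0.763, i.e. 76.3%
```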
Procedia PDF Downloads 42
724 Hydrological Benefits Sharing Concepts in Constructing Friendship Dams on Transboundary Tigris River Between Iraq and Turkey
Authors: Thair Mahmood Altaiee
Abstract:
Because of increasing populations and the growing water requirements drawn from transboundary water resources by riparian countries, in addition to improper management of these resources, conflicts over water are likely to occur. It is therefore necessary to seek solutions that mitigate the likelihood and severity of such undesired conflicts. One solution to these crises may be for riparian countries to share the management of their transboundary water resources and the resulting benefits. Effective cooperation on a transboundary river is any action by the riparian countries that leads to improved management of the river to their mutual acceptance. In principle, friendship dams constructed by riparian countries may play an important role in preventing conflicts, like the Turkish-Syrian friendship dam on the Asi (Orontes) river, the Iranian-Turkmenistan dam on the Hariroud river, the Bulgarian-Turkish dam on the Tundzha river, the Brazil-Paraguay dam on the Parana river, and the Aras dam between Iran and Azerbaijan. The objective of this study is to shed light on the hydrological aspects of cooperation in constructing dams on transboundary rivers, which may be considered an option for preventing conflicts over water between riparian countries. The various kinds of benefits and external impacts associated with cooperation in dam construction on transboundary rivers are presented and analyzed with real examples. The hydrological benefit sharing arising from cooperation in dam construction, the types of benefit-sharing mechanisms applicable to dams, and how they vary are discussed. The study considers the applicability of cooperation to dams on shared rivers through selected case studies of friendship dams around the world, to illustrate the relevance of the cooperation concepts and the feasibility of such proposed cooperation between Turkey and Iraq on the Tigris river. 
It is found that the opportunities for deriving benefit from cooperation depend mainly on the hydrological boundaries and the location of the dam relative to them. The desire to cooperate on dam construction on transboundary rivers exists when locating a dam upstream increases aggregate net benefits. The case studies show that various benefit-sharing mechanisms arising from cooperation in constructing friendship dams on the borders of riparian countries are possible, for example, when the downstream state (Iraq) convinces the upstream state (Turkey) to jointly build a dam on the Tigris river across the Iraqi-Turkish border, sharing the cost and the net benefit derived from the dam. These initial findings may provide guidance for riparian states engaged in, and donors facilitating, negotiations on dam projects on transboundary rivers.
Keywords: friendship dams, transboundary rivers, water cooperation, benefit sharing
Procedia PDF Downloads 141
723 Sequential Mixed Methods Study to Examine the Potentiality of Blackboard-Based Collaborative Writing as a Solution Tool for Saudi Undergraduate EFL Students’ Writing Difficulties
Authors: Norah Alosayl
Abstract:
English is considered the most important foreign language in the Kingdom of Saudi Arabia (KSA) because of its usefulness as a global language compared to Arabic. As students’ desire to improve their English language skills has grown, English writing has been identified as the most difficult problem for Saudi students in their language learning. Although English in Saudi Arabia is taught beginning in the seventh grade, many students have problems at the university level, especially in writing, due to a gap between what is taught in secondary and high schools and university expectations: pupils generally study English at school from a single book with only a few vocabulary and grammar exercises, and there are no specific writing lessons. Moreover, from personal teaching experience at King Saud bin Abdulaziz University, students face real problems with their writing. This paper revolves around blackboard-based collaborative writing to help first-year undergraduate Saudi EFL students, enrolled in two sections of ENGL 101 in the first semester of 2021 at King Saud bin Abdulaziz University, practice in small groups the skill they find most difficult in their writing. Therefore, a sequential mixed methods design is suitable. The first phase of the study aims to identify the most difficult skill experienced by students, from an official writing exam evaluated by their teachers using an official rubric of King Saud bin Abdulaziz University. In the second phase, the study will investigate the benefits of social interaction on the process of learning writing. Students will be provided with five collaborative writing tasks via the discussion feature on Blackboard to practice the skill they found difficult in writing. The tasks will be designed based on social constructivist theory and pedagogic frameworks. The interaction will take place between peers and their teachers. 
The frequency of students’ participation and the quality of their interaction will be observed through manual counting and screenshots. This will help the researcher understand how actively students work on the tasks from the amount of their participation, and will also distinguish the type of interaction (on task, about task, or off task). Semi-structured interviews will be conducted with students to understand their perceptions of the blackboard-based collaborative writing tasks, and questionnaires will be distributed to identify students’ attitudes toward the tasks.
Keywords: writing difficulties, blackboard-based collaborative writing, process of learning writing, interaction, participation
Procedia PDF Downloads 191
722 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People
Authors: James Boylan, Denny Meyer, Won Sun Chen
Abstract:
Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared to 1.3 million a decade earlier; overall, road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that reported for drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system able to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk. It is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the Australian government creating the National Telematics Framework, there has been increased government focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to install in their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning, after the first month, and after the second month of the study. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS. 
Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76; higher scores indicate worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire
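The reported reliability (alpha = .823) is a Cronbach's alpha; a minimal, generic sketch of how it is computed from the (respondents x items) matrix of Likert scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) Likert-score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly consistent items give alpha = 1; values around .8, as reported here, conventionally indicate good internal consistency.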
Procedia PDF Downloads 106
721 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties
Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret
Abstract:
Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and estimation of taxa relative abundance. Small subunit (SSU) rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is realized only when it is used full-length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers, and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false-positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study of coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted two species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and three uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process. 
RiboTaxa is user-friendly and rapid, allows description of microbiota structure from any environment, and its results can be easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.
Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker
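The relative-abundance description such a profiler reports is simply each taxon's share of the assigned reads; a minimal sketch (the taxon names and counts below are illustrative, not RiboTaxa output):

```python
def relative_abundance(read_counts):
    """Relative abundance of each taxon from its assigned read count."""
    total = sum(read_counts.values())
    return {taxon: n / total for taxon, n in read_counts.items()}

# Hypothetical SSU rRNA read assignments for one sample.
profile = relative_abundance({
    "Endozoicomonas sp. A": 600,
    "Endozoicomonas sp. B": 300,
    "other": 100,
})
```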
Procedia PDF Downloads 120
720 Optimized Deep Learning-Based Facial Emotion Recognition System
Authors: Erick C. Valverde, Wansu Lim
Abstract:
Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress), as well as better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, unique human expressions, and the variety of human facial features due to age differences. These issues generally limit the ability of a FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inability to extract the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. 
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the benefits of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
Keywords: deep learning, face detection, facial emotion recognition, network optimization methods
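Generic versions of the two optimizations named above, magnitude pruning and uniform int8 quantization, can be sketched on a plain weight array. This is an assumption-laden illustration of the general techniques, not the paper's exact compiler pipeline.

```python
import numpy as np

def prune(weights, sparsity):
    """Magnitude pruning: zero out the smallest-|w| fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric per-tensor quantization to int8: return codes and scale."""
    scale = np.abs(weights).max() / 127.0
    codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return codes, scale

w = np.array([[0.01, -0.5], [0.9, -0.02]])
w_pruned = prune(w, 0.5)                 # zeros can be skipped at inference
codes, scale = quantize_int8(w_pruned)   # 4 bytes per weight -> 1 byte
```

Pruning reduces the multiply-accumulate work (zeroed weights can be skipped), while int8 quantization cuts memory to a quarter of float32, matching the stated goals of lower computational cost and reduced memory usage.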
Procedia PDF Downloads 118
719 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into groups based on similarity, so that objects have higher similarity within the same group and lower similarity across groups. Clustering can thus be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, evolutionary multi-objective optimization algorithms have been investigated for optimizing multiple clustering objectives. In this paper, a Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. First, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and then extended into a multi-objective form. Second, two learning strategies based on the two learning archives are proposed to guide the bacterial swarm in a better direction. On the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index; on the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh impetus to the objects in the two learning archives. 
When the objects in these two archives do not change for two consecutive iterations, randomly re-initializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on several evaluation indexes and datasets. To further verify the effectiveness and feasibility of the strategies designed in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
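The two clustering objectives and the weighted personal-best selection can be sketched as follows; the specific distance measures are assumptions chosen for illustration, since the abstract does not define the indices precisely.

```python
import numpy as np

def clustering_objectives(X, labels, centers):
    """Two objectives: mean intra-cluster distance (to minimize) and the
    smallest inter-center distance (to maximize)."""
    intra = np.mean([np.linalg.norm(x - centers[l]) for x, l in zip(X, labels)])
    k = len(centers)
    inter = min(np.linalg.norm(centers[i] - centers[j])
                for i in range(k) for j in range(i + 1, k))
    return intra, inter

def weighted_personal_best(archive, w_intra=0.5, w_inter=0.5):
    """Index of the archive entry with the best sum of weighted objectives
    (lower intra is better, higher inter is better)."""
    scores = [w_intra * intra - w_inter * inter for intra, inter in archive]
    return int(np.argmin(scores))
```

Each bacterium's chemotaxis step would then move it toward the global best (picked by convergence and diversity indices) and its personal best (picked by the weighted sum above).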
Procedia PDF Downloads 139
718 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game depends on thorough and deep analysis of the ongoing match. On the other hand, large gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of soccer matches around the world, distinguished from other work by its focus on particular seasons, teams, and partial analytics. Our contributions are presented on the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which enabled us to record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format versus time, including half-time. Local binary patterns (LBP) are then employed to extract features from the image. Our analyses reveal strikingly interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use gradient boosted trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks properties such as bettors’ and punters’ behavior and its statistical data before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. 
Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
Keywords: soccer, analytics, machine learning, database
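The LBP operator named above is standard; a minimal sketch of the classic 3x3 variant that could be applied to such a statistics-versus-time image (its use on odds images is the paper's idea, the operator below is the textbook one):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch: threshold the 8 neighbours against
    the center pixel, reading clockwise from the top-left corner."""
    center = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(reversed(bits)))

def lbp_image(img):
    """LBP code at every interior pixel of a 2-D array; histograms of these
    codes are the texture features fed to the classifier."""
    h, w = img.shape
    return np.array([[lbp_code(img[r - 1:r + 2, c - 1:c + 2])
                      for c in range(1, w - 1)]
                     for r in range(1, h - 1)])
```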
Procedia PDF Downloads 238
717 Investigation on Perception, Awareness and Health Impact of Air Pollution in Rural and Urban Area in Mymensingh Regions of Bangladesh
Authors: M. Azharul Islam, M. Russel Sarker, M. Shahadat Hossen
Abstract:
Air pollution is one of the major environmental problems that has gained importance all over the world; it is a problem for all of us. The present study was conducted to explore people’s level of perception and awareness of air pollution in selected areas of Mymensingh in Bangladesh. The health impacts of air pollution were also studied through personal interviews and a structured questionnaire. The relationship of independent variables (age, educational qualification, family size, residence, and communication exposure) with the respondents’ level of perception and awareness of air pollution (the dependent variable) was studied to achieve the objectives of the study. About 600 respondents were selected randomly from six sites for data collection during the period July 2016 to June 2017. Pearson’s product-moment correlation coefficients were computed to examine the relationships between the concerned variables. The results revealed that about half (46.67 percent) of the respondents had a medium level of perception and awareness of air pollution in their areas, while 31.67 percent had a low and 21.67 percent a high level. In the rural study sites, 43.33 percent of respondents had low, 50 percent medium, and only 6.67 percent high perception and awareness of air pollution. In the urban areas, 20 percent of respondents had low, 43.33 percent medium, and 36.67 percent a high level of awareness and perception of air pollution. The majority of respondents (93.33 percent) lacked proper awareness of air pollution in the rural areas, compared with 63.33 percent in the urban areas. Of the five independent variables, three (educational qualification, residence status, and communication exposure) had positive and significant relationships. 
The age of respondents had a negative and significant relationship with their awareness of air pollution, while family size had no significant relationship with their perception and awareness of air pollution. Thousands of people live in urban areas where urban smog, particle pollution, and toxic pollutants pose serious health concerns, yet most respondents at the urban sites were not familiar with the real causes of air pollution. Respondents reported high levels of exposure to air-pollutant effects, such as irritation of the eyes, coughing, tightness of the chest, and many other health difficulties. Respondents in both rural and urban areas suffered greatly from such health problems, and the incidence of certain difficulties is increasing day by day. In this study, most respondents lacked knowledge of the causes of these health difficulties owing to their low perception level. Proper attempts should be made to raise literacy levels and communication exposure in order to increase perception and awareness of air pollution among respondents in the study areas, with extra care taken to increase perception and awareness of air pollution in rural areas.
Keywords: air pollution, awareness, health impacts, perception of people
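The Pearson product-moment coefficient used in the analysis above can be sketched as follows (a generic implementation, equivalent to what a statistics package computes):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two variables."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
```

Values near +1 or -1 indicate strong positive or negative relationships, e.g. the negative age-awareness relationship reported above.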
Procedia PDF Downloads 234
716 Mapping Actors in Sao Paulo's Urban Development Policies: Interests at Stake in the Challenge to Sustainability
Authors: A. G. Back
Abstract:
In the context of global climate change, extreme weather events are becoming more intense and frequent, challenging the adaptability of urban space. Urban planning is therefore a relevant instrument for addressing, in a systemic manner, the various sectoral policies capable of linking the urban agenda to the reduction of social and environmental risks. The 2014 Master Plan of the Municipality of Sao Paulo presents innovations capable of promoting the transition to sustainability in urban space. Among these innovations, the following stand out: i) promotion of density along mass-transit axes, mixing commercial, residential, service, and leisure uses (principles related to the compact city); ii) reduction of vulnerabilities through housing policies, including regular sources of funds for social housing and land reserves in urbanized areas; iii) reservation of green areas in the city to create parks, together with environmental regulations for new buildings aimed at reducing the urban heat island effect and improving urban drainage. However, long-term implementation involves distributive conflicts and may change with different political, economic, and social contexts over time. Thus, the central objective of this paper is to identify which factors limit or support the implementation of these policies: that is, to map the challenges and interests of converging and/or diverging urban actors in the sustainable urban development agenda and the resources they mobilize to support or limit these actions in the city of Sao Paulo. Recent proposals to amend the urban zoning law undermine the implementation of the Master Plan guidelines. In this context, three interest groups with different views of the city come into dispute: the real estate market, upper-middle-class neighborhood associations ('not in my backyard' movements), and social housing rights movements.
This paper surveys the different interests and visions of these groups, taking into account their convergence, or lack thereof, with the principles of sustainable urban development. This approach seeks to fill a gap in the international literature on the causes that underpin or hinder the continued implementation of policies aimed at the transition to urban sustainability in the medium and long term.
Keywords: adaptation, ecosystem-based adaptation, interest groups, urban planning, urban transition to sustainability
Procedia PDF Downloads 121
715 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in component geometry. Several hundred features sometimes need to be measured, especially for functional and safety-relevant components, and due to the large number of features and the accuracy requirements, they can only be measured offline. The risk of producing components outside the tolerances is minimized, but not eliminated, by the statistical evaluation of process capability and by control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, yet remain reactive and, in some cases, considerably delayed. Given the considerable progress in condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to derive forecasts for component geometry independently and thus eliminate the risk of defective products actively and preventively. The reliability of such forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, since otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such measuring programs alone takes up to 20 minutes; in practice, this entails the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative.
Over a period of two months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all six car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across four different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of >0.8 was assumed for all correlations.
Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
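The screening step described above, dropping any feature whose pairwise correlation with an already-retained feature exceeds the 0.8 threshold, can be sketched as follows. The feature names and measurement values are invented for illustration and are not the paper's car-seat-rail data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reduce_features(features, threshold=0.8):
    """Keep a feature only if |r| with every already-kept feature stays below threshold."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson_r(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Invented measurement series for four hypothetical geometric features
features = {
    "width_a":  [10.0, 10.1, 10.2, 10.3, 10.4],
    "width_b":  [20.1, 20.3, 20.4, 20.6, 20.8],   # strongly tracks width_a -> redundant
    "height_c": [5.0, 4.8, 5.1, 4.9, 5.2],
    "angle_d":  [1.1, 0.9, 1.0, 1.2, 0.9],
}
print(reduce_features(features))   # width_b is dropped as redundant
```

Only the retained features would then need inline measurement; the dropped ones can be predicted from their correlated partners.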
Procedia PDF Downloads 48
714 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of the real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task; solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One practical example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains: the first is formed by the dimensions that determine the location of the grooves for the installation of the blades and the dimensions of the blade roots; the second is formed by the dimensions of the airfoil shroud platform. The interdependence of the first and second chains arises through power circuits formed by the middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is timely because of the need to improve the assembly technology of this unit.
The task at hand contains geometric and mathematical components; therefore, its solution can follow this algorithm: 1) research and analysis of production errors in geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of parts, taking into account actual or generalized distributions of errors in geometric parameters; 4) calculation of the model in a CAE system, loading various combinations of part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the software package Matlab. Within the framework of the study, measurement data on the components of the turbine wheel (blades and disks) are available, from which it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
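The article implements its statistical calculations in Matlab; as a rough illustration of the pre-simulation idea in steps 3-5, the sketch below runs a Monte Carlo simulation of a single, much-simplified linear dimensional chain. The nominal dimensions, standard deviations, and tolerance limits are assumed values, not the turbine-wheel measurement data.

```python
import random
import statistics

random.seed(42)

def simulate_gap(n_trials=10_000):
    """Monte Carlo sketch of a simple dimensional chain: the closing link (gap)
    is a groove width minus a blade-root width, each drawn from an assumed
    normal error distribution (hypothetical values, not measured data)."""
    gaps = []
    for _ in range(n_trials):
        groove = random.gauss(12.00, 0.02)   # groove width, mm (assumed)
        root = random.gauss(11.90, 0.02)     # blade-root width, mm (assumed)
        gaps.append(groove - root)
    return gaps

gaps = simulate_gap()
mean_gap = statistics.mean(gaps)
# Assumed tolerance band for the closing gap: 0.05 mm to 0.15 mm
out_of_tol = sum(1 for g in gaps if not (0.05 <= g <= 0.15)) / len(gaps)
print(f"mean gap = {mean_gap:.3f} mm, fraction out of tolerance = {out_of_tol:.3f}")
```

A real pre-simulation would replace the assumed distributions with the measured error distributions of the blades and disk, and the closing-link function with the full interacting-chain geometry resolved in the CAD/CAE models.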
Procedia PDF Downloads 373
713 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
Authors: A. Schuchter, M. Promegger
Abstract:
The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a cool feature for teachers: it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. At the same time, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects built around drone-building, to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation), and coding (computational thinking, block-based programming, JavaScript, and Python) and makes use of collaborative shared 3D modeling with clara.io, through which students create mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups helps the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an 'open book') with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought.
Therefore, students will learn to formulate goals, solve problems, and create a ready-to-use product with the help of 'reverse engineering', cross-referencing, and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.
Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning
Procedia PDF Downloads 120
712 From Achilles to Chris Kyle: Militarized Masculinity and Hollywood in the Post-9/11 Era
Authors: Mary M. Park
Abstract:
Hollywood has a long history of showcasing the United States military to civilian audiences, and portrayals of soldiers in films have had a definite impact on civilian perceptions of the US military. The growing gap between the civilian population and the military in the US has allowed certain stereotypes of military personnel to proliferate, especially in the area of militarized masculinity, often to the detriment of the psychological and spiritual wellbeing of military personnel. Examining Hollywood's portrayal of soldiers can enhance our understanding of how civilians may be influenced in their perception of military personnel. Moreover, it can provide clues as to how male military personnel may themselves be influenced by Hollywood films as they form their own military identity. The post-9/11 era has seen numerous high-budget films lionizing a particular type of soldier, the 'warrior-hero', who adheres to a traditional form of hegemonic masculinity and exhibits traits such as physical strength, bravery, stoicism, and an eagerness to fight. This paper examines how the portrayal of the 'warrior-hero' perpetuates a negative stereotype of soldiers as a blend of superheroes and emotionless robots, and therefore as inherently different from civilians. It examines the portrayal of militarized masculinity in three of the most successful war films of the post-9/11 era: Black Hawk Down (2001), The Hurt Locker (2008), and American Sniper (2014). The characters and experiences of the soldiers depicted in these films are contrasted with the lived experiences of soldiers during the Iraq and Afghanistan wars. Further, there is an analysis of popular films depicting ancient warriors, such as Troy (2004) and 300 (2007), which were released during the early years of the War on Terror.
This paper draws on leading scholars' concept of hegemonic militarized masculinity and on feminist international relations theories of militarized masculinity. It uses veteran testimonies collected from a range of public sources, as well as previous studies on the link between traditional masculinity and war-related mental illness. The paper concludes that the seemingly exclusive portrayal of soldiers as 'warrior-heroes' in post-9/11 films is misleading and damaging to civil-military relations, and that the reality of most soldiers' experiences is neglected in Hollywood films. As civilians often believe they are being shown true depictions of the US military in Hollywood films, especially in films that portray real events, it is important to identify the differences between the idealized fictional 'warrior-heroes' and the reality of the soldiers on the ground in the War on Terror.
Keywords: civil-military relations, gender studies, militarized masculinity, social psychology
Procedia PDF Downloads 123
711 A Study of Life Expectancy in an Urban Set up of North-Eastern India under Dynamic Consideration Incorporating Cause Specific Mortality
Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia
Abstract:
Background: The period life table is based entirely on the assumption that the mortality patterns of the population in a given period will persist throughout their lives. However, it has been observed that mortality rates continue to decline. If the rates of change of the probabilities of death are incorporated into a life table, we obtain a dynamic life table. Although mortality has been declining in all parts of India, one may be interested to know whether these declines have appeared more strongly in urban areas of underdeveloped regions like North-Eastern India. An attempt has therefore been made to study the mortality pattern and life expectancy under the dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, there is a possibility that their constituent cause-specific probabilities will also change. Since cardiovascular disease (CVD) is the leading cause of death in Guwahati, an attempt has also been made to formulate the dynamic cause-specific death ratio and probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011 based on the rates of change of the probabilities of death over the previous 10 and 25 years (i.e., since 2001 and 1986), and to compute the corresponding dynamic cause-specific death ratios and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati, drawing on data from the Office of the Birth and Death, Guwahati Municipal Corporation, for the years 1986, 2001, and 2011. Population data are taken from the 2001 and 2011 censuses of India; the population for 1986 has been estimated.
Also, the cause-of-death ratio and probabilities of death due to CVD are computed for the aforementioned years and then extended to the dynamic set-up for the year 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is found to be higher than the corresponding period-table values by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the 10-year (25-year) horizon. Life expectancies under the dynamic consideration in all other age groups are also higher than the usual life expectancies, plausibly due to the gradual decline in the probabilities of death over 1986-2011. Further, a continuous decline has also been observed in the death ratio due to CVD, along with the cause-specific probabilities of death for both sexes. As a consequence, the dynamic probability of death due to CVD is found to be lower than under the usual procedure. Conclusion: Since incorporating changing mortality rates into the period life table for Guwahati resulted in higher life expectancies and lower probabilities of death due to CVD, this would arguably better reflect the real mortality situation prevailing in the city.
Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy
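As a toy illustration of the period-versus-dynamic contrast (not Denton and Spencer's actual method), the sketch below computes a period life expectancy from probabilities of death q_x and a crude "dynamic" variant in which each q_x is discounted by an assumed annual rate of mortality decline. The q_x values and the decline rate are invented, not Guwahati data.

```python
def life_expectancy(qx):
    """Life expectancy at birth from single-interval probabilities of death q_x.
    Simplified: deaths occur mid-interval (a_x = 0.5) and the last q_x must be 1.0."""
    lx, person_years = 1.0, 0.0   # radix scaled to 1
    for q in qx:
        dx = lx * q                      # deaths in the interval
        person_years += lx - 0.5 * dx    # person-years lived, L_x
        lx -= dx
    return person_years

def dynamic_life_expectancy(qx, annual_decline):
    """Loose sketch of the dynamic idea: a person reaching age x faces mortality
    reduced by an observed rate of decline, q_x * (1 - r)^x."""
    adjusted = [min(1.0, q * (1 - annual_decline) ** x) for x, q in enumerate(qx)]
    adjusted[-1] = 1.0                   # close the table
    return life_expectancy(adjusted)

# Toy 4-interval table (invented values)
qx = [0.1, 0.2, 0.5, 1.0]
print(life_expectancy(qx))                 # period LEB
print(dynamic_life_expectancy(qx, 0.02))   # dynamic LEB, slightly higher
```

With any positive rate of decline the dynamic LEB exceeds the period LEB, which mirrors the direction of the findings reported above.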
Procedia PDF Downloads 232
710 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology, which allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. To mathematically approximate the tail of the empirical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the exponential distribution, the latter a special case of the Generalised Pareto, are frequently used as alternatives. Despite the practical cases where it is applied, the Generalised Pareto is not universally recognized as the adequate solution for modeling exceedances over a threshold u; references that treat it as a secondary solution for significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of applying statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull, and the exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other aspects of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, in the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
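A minimal sketch of the POT workflow using the exponential excess model, the simplest of the three candidates compared, since its maximum-likelihood scale is just the mean excess. The wave record below is synthetic with an exponential tail built in, not the Irish buoy data; the threshold u = 2.5 m is likewise an assumption for illustration.

```python
import random
import statistics

random.seed(7)

# Synthetic "significant wave height" record (m): invented, with an
# exponential tail so that the fit below is well specified.
hs = [1.5 + random.expovariate(2.0) for _ in range(5000)]

def pot_exponential_fit(data, u):
    """Peak-Over-Threshold with an exponential model for the excesses.
    Keeps only exceedances over threshold u and fits the excess y = x - u
    by maximum likelihood (for the exponential, the MLE scale is mean(y))."""
    excesses = [x - u for x in data if x > u]
    scale = statistics.mean(excesses)    # MLE of the exponential scale
    zeta = len(excesses) / len(data)     # empirical exceedance probability
    return scale, zeta, len(excesses)

scale, zeta, n_exc = pot_exponential_fit(hs, u=2.5)
print(f"scale = {scale:.2f} m, P(Hs > u) = {zeta:.3f}, exceedances = {n_exc}")
```

Fitting the Generalised Pareto or the two-parameter Weibull to the same excesses (e.g. with scipy.stats) and comparing goodness of fit across several thresholds u would reproduce the kind of comparison the study describes.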
Procedia PDF Downloads 272