48 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluating the goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results, avoiding expensive refitting. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities (a short code sketch follows this abstract). In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models, conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed.
Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
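To make the weighting scheme above concrete, here is a minimal Python sketch of the IS-LOO and TIS-LOO estimators, assuming a matrix of pointwise log-likelihoods evaluated over posterior draws. It illustrates the general method only and is not the study's implementation.

```python
import numpy as np
from scipy.special import logsumexp

def loo_elpd(log_lik, truncate=False):
    """Approximate the expected log pointwise predictive density (elpd_loo)
    from an (S posterior draws x N observations) log-likelihood matrix.

    The raw importance weight for draw s and observation i is the
    reciprocal of the predictive density: log w = -log p(y_i | theta_s).
    """
    S = log_lik.shape[0]
    log_w = -log_lik                     # raw importance weights (log scale)
    if truncate:                         # TIS: cap weights at sqrt(S) * mean weight
        log_cap = logsumexp(log_w, axis=0) - 0.5 * np.log(S)
        log_w = np.minimum(log_w, log_cap)
    # Pointwise LOO density = weighted average of the predictive densities
    elpd_i = logsumexp(log_w + log_lik, axis=0) - logsumexp(log_w, axis=0)
    return elpd_i.sum()

# Example with a fake log-likelihood matrix (2000 draws, 50 observations)
rng = np.random.default_rng(1)
log_lik = rng.normal(-1.0, 0.3, size=(2000, 50))
print(loo_elpd(log_lik), loo_elpd(log_lik, truncate=True))
```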
Procedia PDF Downloads 393
47 Lifting Body Concepts for Unmanned Fixed-Wing Transport Aircraft
Authors: Anand R. Nair, Markus Trenker
Abstract:
Lifting body concepts were conceived as early as 1917 and patented by Roy Scroggs. The idea was to use the fuselage as a lift-producing body with no or small wings. Many such designs were developed and even flight-tested between the 1920s and 1970s, but the concept was not pursued further for commercial flight because, at lower airspeeds, such a configuration could not produce sufficient lift for the entire aircraft. The concept presented in this contribution combines the lifting body design with a fixed wing to maximise the lift produced by the aircraft. Conventional aircraft fuselages are designed to be aerodynamically efficient, which is to minimise drag; however, these fuselages produce very little or negligible lift. For the design of an unmanned fixed-wing transport aircraft, many of the restrictions present for commercial aircraft in terms of fuselage design can be excluded, such as windows for the passengers/pilots, cabin-environment systems, emergency exits, and pressurization systems. This gives new flexibility to design unconventionally shaped fuselages that contribute to the lift of the aircraft. The two lifting body concepts presented in this contribution target different applications. For a fast cargo delivery drone, the fuselage is based on a scaled airfoil shape with a cargo capacity of 500 kg for euro pallets. The aircraft has a span of 14 m and a range of 1500 km at a cruising speed of 90 m/s. The aircraft could also easily be adapted to accommodate a pilot and passengers with modifications to the internal structures, but pressurization is not included, as the service ceiling envisioned for this type of aircraft is limited to 10,000 ft. The second concept to be investigated, called a multi-purpose drone, incorporates a different type of lifting body and is a much more versatile aircraft, as it will have VTOL capability. The aircraft will have a wingspan of approximately 6 m and flight speeds of 60 m/s within the same service ceiling as the fast cargo delivery drone. The multi-purpose drone can easily be adapted for various applications such as firefighting, agriculture, surveillance, and even passenger transport. Lifting body designs are not a new concept, but their effectiveness for cargo transportation has not been widely investigated. Due to their enhanced lift-producing capability, lifting body designs enable a reduction of the wing area and the overall weight of the aircraft. This will, in turn, reduce the thrust requirement and ultimately the fuel consumption. The various designs proposed in this contribution are based on the general aviation category of aircraft and focus on unmanned operation. These unmanned fixed-wing transport drones will feature appropriate cargo loading/unloading concepts which can accommodate large cargo for efficient time management and ease of operation. The various designs will be compared in performance to their conventional counterparts to understand their benefits and shortcomings in terms of design, performance, complexity, and ease of operation. The majority of the performance analysis will be carried out to industry-relevant standards using computational fluid dynamics software packages.
Keywords: lifting body concept, computational fluid dynamics, unmanned fixed-wing aircraft, cargo drone
Procedia PDF Downloads 246
46 Cuba's Supply Chains Development Model: Qualitative and Quantitative Impact on Final Consumers
Authors: Teresita Lopez Joy, Jose A. Acevedo Suarez, Martha I. Gomez Acosta, Ana Julia Acevedo Urquiaga
Abstract:
Current trends in business competitiveness indicate the need to manage businesses as supply chains and not in isolation. Strategies aimed at maximum satisfaction of customers in a network, based on inter-company cooperation, contribute to obtaining successful joint results. In the Cuban economic context, the development of productive linkages to achieve integrated management of supply chains is considered a key aspect. In order to achieve this jump, it is necessary to develop acting capabilities in the entities that make up the chains, through a systematic procedure that leads to a management model in consonance with the environment. The objective of the research is to design a model and procedure for the development of integrated management of supply chains in economic entities. The results obtained are the Model and the Procedure for the Development of Supply Chains Integrated Management (MP-SCIM). The Model is based on the development of logistics in the network actors, joint work between companies, collaborative planning, and the monitoring of a main indicator defined according to the end customers. The application Procedure starts from a well-founded need for development in a supply chain and focuses on training entrepreneurs as doers. Characterization and diagnosis are done in order to later define the design of the network and the relationships between the companies. Feedback is taken into account as a method of updating the conditions and of focusing the objectives according to the final customers. The MP-SCIM is the result of systematic work with a supply chain approach in companies that have consolidated as coordinators of their networks. The cases of the edible oil chain and of explosives for the construction sector reflect the most remarkable advances, since these have applied the approach for more than 5 years and maintain it as a general strategy of successful development. The edible oil trading company experienced a jump in sales. In 2006, the company started the analysis in order to define the supply chain, apply diagnosis techniques, define problems, and implement solutions. The involvement of the management and the progressive formation of performance capacities in the personnel allowed the application of tools appropriate to the context. The company that coordinates the explosives chain for the construction sector shows adequate training, with independence and timeliness in the face of different situations and variations in its business environment. The appropriation of tools and techniques for the analysis and implementation of proposals is a characteristic feature of this case. The coordinating entity applies integrated supply chain management to its decisions, based on the timely training of the action capabilities necessary for each situation. Other cases of study and application that validate these tools are also detailed in this paper, and they highlight the results of generalization in the quantitative and qualitative improvement according to the final clients. These cases are: teaching literature in universities, agricultural products of local scope, and medicine supply chains.
Keywords: integrated management, logistic system, supply chain management, tactical-operative planning
Procedia PDF Downloads 154
45 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry
Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal
Abstract:
The automotive industry is one of the most important industries in the world, concerning not only the economy but also the world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time and at a competitive price in order to achieve customer satisfaction. Two of the techniques most recommended by the specific standards of the automotive industry for product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the problems identified according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, realized using the results from FMEA, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the FMEA analysis and Control Plan reports required at product launch, and to improve the knowledge available to development teams for future projects. The solution allows the design team to enter the data required for FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used as a means of finding solutions to multi-criteria optimization problems; in our case, the three specific FMEA risk factors are considered together with production cost. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated with the other parameters entered in the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture chassis for buses. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, using Open Source tools in its implementation.
Keywords: automotive industry, FMEA, control plan, automotive technology
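As a rough illustration of the multi-criteria search described above, the following plain-Python genetic algorithm trades an RPN-like risk score off against production cost when selecting corrective actions. The action encoding, base RPN, and trade-off weights are hypothetical and not taken from the paper.

```python
import random

# Hypothetical corrective actions: each reduces the total RPN
# (severity x occurrence x detection) but adds production cost.
ACTIONS = [(120, 900), (80, 300), (200, 2500), (60, 150), (150, 1100)]
BASE_RPN = 600
W_RPN, W_COST = 1.0, 0.1            # assumed trade-off weights

def fitness(ind):
    """Higher is better: penalise residual risk and added cost."""
    rpn_drop = sum(a[0] for a, bit in zip(ACTIONS, ind) if bit)
    cost = sum(a[1] for a, bit in zip(ACTIONS, ind) if bit)
    return -(W_RPN * max(BASE_RPN - rpn_drop, 0) + W_COST * cost)

def evolve(pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of parents
        parents = [max(random.sample(pop, 3), key=fitness)
                   for _ in range(pop_size)]
        # One-point crossover followed by bit-flip mutation
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, len(ACTIONS))
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                pop.append([int(bit ^ (random.random() < p_mut))
                            for bit in child])
    return max(pop, key=fitness)

best = evolve()
print("selected actions:", best, "fitness:", round(fitness(best), 1))
```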
Procedia PDF Downloads 406
44 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling (a sketch of this pipeline follows the abstract). Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
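A minimal sketch of the preprocessing steps named above (DICOM loading, 1 mm isotropic resampling, HU clipping, normalization, zero-centering, and resizing to 128 × 128 × 60), assuming SimpleITK and SciPy are available. The function name and the interpolation-based resizing strategy are illustrative assumptions, not the authors' code.

```python
import numpy as np
import SimpleITK as sitk
from scipy.ndimage import zoom

def preprocess_ct(dicom_dir):
    """Load a DICOM series, resample to 1 mm isotropic spacing, clip
    intensities to (-1000, 400) HU, normalize, and resize to 60x128x128."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    img = reader.Execute()

    # Resample to 1 mm x 1 mm x 1 mm
    new_spacing = (1.0, 1.0, 1.0)
    new_size = [int(round(sz * sp)) for sz, sp in
                zip(img.GetSize(), img.GetSpacing())]
    img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), new_spacing, img.GetDirection(),
                        -1000, img.GetPixelID())

    vol = sitk.GetArrayFromImage(img).astype(np.float32)   # (z, y, x)
    vol = np.clip(vol, -1000, 400)                         # HU window
    vol = (vol - vol.min()) / (vol.max() - vol.min())      # normalize
    vol -= vol.mean()                                      # zero-center
    # Interpolate to a uniform 60 x 128 x 128 volume
    factors = (60 / vol.shape[0], 128 / vol.shape[1], 128 / vol.shape[2])
    return zoom(vol, factors, order=1)
```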
Procedia PDF Downloads 88
43 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium
Authors: Binbin Chen, Dennis Y. C. Leung
Abstract:
Aluminium (Al)-air cells hold a high volumetric capacity density of 8.05 Ah cm⁻³ (a Faraday's-law check of this figure is given after the abstract), a benefit of the trivalence of Al ions. Additional benefits of Al-air cells are low price and environmental friendliness. Furthermore, the Al energy conversion process is characterized by 100% recyclability in theory. Along with a large base of raw material reserves, Al attracts considerable attention as a promising material to be integrated within the global energy system. However, despite early successful applications in military services, several problems prevent Al-air cells from coming into wide civilian use. The most serious issue is the parasitic corrosion of Al when it is in contact with the electrolyte. To overcome this problem, super-pure Al alloyed with various traces of metal elements is used to increase the corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during the production process. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives increase the internal ohmic resistance and hamper the cell performance. So far, these methods have not provided satisfactory solutions to the problems within Al-air cells. The operation of alkaline Al-air cells faces other, more minor problems. One of them is the formation of aluminium hydroxide in the electrolyte, which decreases the ionic conductivity of the electrolyte. Another is the carbonation process within the gas diffusion layer of the cathode, which blocks the porosity of the gas diffusion. Both of these hinder the performance of cells. The present work addresses the above problems by building an Al-air cell operation system consisting of four components. A top electrolyte tank containing fresh electrolyte is located at a high level so that it can drive the electrolyte flow by gravity. A mechanically rechargeable Al-air cell is fabricated from low-cost materials, including low-grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with an elaborate channel is designed to separate out the hydrogen generated by corrosion, which is collected by a gas collection device. In the first section of the research work, we investigated the performance of the mechanically rechargeable Al-air cell with a constant flow rate of electrolyte, to ensure the repeatability of the experiments. Then the whole system was assembled, and the feasibility of its operation was demonstrated. During the experiments, pure hydrogen was collected by the collection device; this holds potential for various applications, and collecting this by-product achieves a high utilization efficiency of aluminium. Considering both the electricity and the hydrogen generated, an overall utilization efficiency of around 90% or even higher is achieved under different working voltages. The fluidic electrolyte removes the aluminium hydroxide precipitate and solves the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from abundant secondary Al. The system could also be applied to other metal-air cells and is suitable for emergency power supply, power plants, and other applications. The low-cost feature implies great potential for commercialization. Further optimization, such as scaling up and optimization of fabrication, will help to refine the technology into practical market offerings.
Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge
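As a quick check of the capacity figure quoted above (using standard constants, not data from the paper), the theoretical volumetric capacity of aluminium follows from Faraday's law with n = 3 electrons per Al atom, F ≈ 96485 C mol⁻¹ ≈ 26.80 Ah mol⁻¹, density ρ = 2.70 g cm⁻³, and molar mass M = 26.98 g mol⁻¹:

```latex
q_V = \frac{n F \rho}{M}
    = \frac{3 \times 26.80\,\mathrm{Ah\,mol^{-1}} \times 2.70\,\mathrm{g\,cm^{-3}}}{26.98\,\mathrm{g\,mol^{-1}}}
    \approx 8.05\,\mathrm{Ah\,cm^{-3}}
```

which reproduces the 8.05 Ah cm⁻³ value stated in the abstract.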
Procedia PDF Downloads 284
42 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices
Authors: Kaustav Mukherjee
Abstract:
In this paper, we propose a Neural Style Transfer solution in the form of a lightweight Separable Convolution Kernel Based GAN Architecture (SC-GAN), which is very useful for designing filters for mobile phone cameras and edge devices that convert any image to a 2D animated comic style reminiscent of movies like He-Man, Superman, and The Jungle Book. This helps 2D animation artists create new characters from images of real people without endless hours of manual labour drawing each and every pose of a cartoon; it can even be used to create scenes from real-life images. This greatly reduces the turnaround time to make 2D animated movies and decreases cost in terms of manpower and time. In addition, being extremely lightweight, it can be used in camera filters capable of taking comic-style shots using a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 and NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters, trained with just 25 extra epochs on a dataset of fewer than 1000 images, which breaks the myth that all GANs need mammoth amounts of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a pointwise convolution with a 1-by-1 kernel to bring the network back to the required channel number, as sketched below. This reduces the number of parameters substantially and makes the model extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, 'Optimization for Training Deep Models', p. 320), which allows the network to use the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss
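A minimal PyTorch sketch of the depthwise-separable block described above, with a parameter-count comparison against a standard convolution; the channel widths are illustrative and not taken from the paper's generator.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel, groups=in_channels)
    followed by a 1x1 pointwise conv that mixes channels back to the
    desired output width."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard convolution
std = nn.Conv2d(3, 64, 3, padding=1)
sep = DepthwiseSeparableConv(3, 64, 3)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), "vs", count(sep))   # 1792 vs 286
```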
Procedia PDF Downloads 133
41 Blood Chemo-Profiling in Workers Exposed to Occupational Pyrethroid Pesticides to Identify Associated Diseases
Authors: O. O. Sufyani, M. E. Oraiby, S. A. Qumaiy, A. I. Alaamri, Z. M. Eisa, A. M. Hakami, M. A. Attafi, O. M. Alhassan, W. M. Elsideeg, E. M. Noureldin, Y. A. Hobani, Y. Q. Majrabi, I. A. Khardali, A. B. Maashi, A. A. Al Mane, A. H. Hakami, I. M. Alkhyat, A. A. Sahly, I. M. Attafi
Abstract:
According to the Food and Agriculture Organization (FAO) Pesticides Use Database, pesticide use in agriculture in Saudi Arabia more than doubled from 4539 tons in 2009 to 10496 tons in 2019. Among pesticides, pyrethroids are commonly used in Saudi Arabia. Pesticides may increase susceptibility to a variety of diseases, particularly among pesticide workers, due to their extensive use, indiscriminate use, and long-term exposure. Therefore, analyzing blood chemo-profiles and evaluating the detected substances as biomarkers of pyrethroid pesticide exposure may assist in identifying and predicting the adverse effects of exposure, which may be used for both preventative and risk assessment purposes. The purpose of this study was to (a) analyze chemo-profiles by gas chromatography-mass spectrometry (GC-MS), (b) identify the most commonly detected chemicals in an exposure-time-dependent manner using a Venn diagram, and (c) identify the associated diseases among pesticide workers using analyzer tools on the Comparative Toxicogenomics Database (CTD) website. 250 healthy male volunteers (20-60 years old) who deal with pesticides in the Jazan region of Saudi Arabia (exposure intervals: 1-2, 4-6, 6-8, more than 8 years) were included in the study. A questionnaire was used to collect demographic information, the duration of pesticide exposure, and the existence of chronic conditions. Blood samples were collected for biochemistry analysis and extracted by solid-phase extraction for GC-MS analysis. Biochemistry analysis revealed no significant changes in response to the exposure period; however, an inverse association between the albumin level and the exposure interval was observed. The blood chemo-profiles were differentially expressed in an exposure-time-dependent manner. This analysis identified the common chemical set associated with each group and the associated significant occupational diseases. While some of these chemicals are associated with a variety of diseases, the distinguishing feature of these chemically associated disorders is their applicability to prevention measures. The most interesting finding was the identification of several chemicals (erucic acid, pelargonic acid, alpha-linolenic acid, dibutyl phthalate, diisobutyl phthalate, dodecanol, myristic acid, pyrene, and 8,11,14-eicosatrienoic acid) associated with pneumoconiosis, asbestosis, asthma, silicosis, and berylliosis. The chemical-disease association study also found that cancer, digestive system disease, nervous system disease, and metabolic disease were the most often recognized disease categories in the common chemical set. A hierarchical clustering approach was used to compare the expression patterns and exposure intervals of the commonly found chemicals (a sketch of this step follows the abstract). More study is needed to validate these chemicals as early markers of pyrethroid-insecticide-related occupational disease, which might assist in evaluating and reducing risk. The current study contributes valuable data and recommendations to public health.
Keywords: occupational, toxicology, chemo-profiling, pesticide, pyrethroid, GC-MS
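A minimal SciPy sketch of the hierarchical clustering step mentioned above; the abundance values and the cluster count are hypothetical stand-ins for the study's GC-MS measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: rows = detected chemicals, columns = exposure
# intervals (1-2, 4-6, 6-8, >8 years); values = mean relative abundance.
chemicals = ["erucic acid", "pelargonic acid", "dibutyl phthalate",
             "myristic acid", "pyrene"]
abundance = np.array([[0.8, 1.1, 1.6, 2.0],
                      [1.2, 1.0, 0.9, 0.7],
                      [0.5, 0.9, 1.4, 1.9],
                      [1.0, 1.1, 1.0, 1.2],
                      [0.3, 0.8, 1.5, 2.2]])

# Ward linkage on the exposure profiles groups chemicals whose
# abundance changes similarly with exposure time.
Z = linkage(abundance, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(chemicals, labels):
    print(f"{name}: cluster {lab}")
```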
Procedia PDF Downloads 103
40 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users' preferences. Instead of presenting plain information, classifying different aspects of browsing, like Bookmarks, History, and the Download Manager, into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; such solutions have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve accurate classification under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and improving privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories (a toy sketch of this step follows the abstract). A Naive Bayes based engine is chosen for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment. The solution can also partition the model into multiple chunks, which in turn facilitates lower memory usage than loading a complete model. Classification of webpages done through the integrated engine is faster, more relevant, and more energy-efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended for suggesting dynamic tags and for using the classification in differential use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
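As a rough illustration of the classification step, here is a minimal scikit-learn Naive Bayes sketch over bag-of-words text features. The actual solution is a custom lightweight engine operating on DOM-tree features with model partitioning, so this is an analogy under stated assumptions, not the described implementation; the training snippets are hypothetical.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training snippets standing in for text extracted from a
# browser's DOM tree; the real solution uses rendered page content.
pages = [
    "course lecture exam university homework syllabus",
    "score match league goal tournament player",
    "sale discount cart checkout shipping price",
    "flight hotel itinerary destination booking beach",
]
categories = ["education", "sports", "shopping", "travel"]

# Multinomial Naive Bayes: small memory footprint, cheap updates,
# which is why it suits on-device classification.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(pages, categories)

print(model.predict(["cheap hotel booking near the beach"])[0])  # travel
```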
Procedia PDF Downloads 164
39 Pupils' and Teachers' Perceptions and Experiences of Welsh Language Instruction
Authors: Mirain Rhys, Kevin Smith
Abstract:
In 2017, the Welsh Government introduced an ambitious new strategy to increase the number of Welsh speakers in Wales to 1 million by 2050. The Welsh education system is a vitally important feature of this strategy. All children attending state schools in Wales learn Welsh as a second language until the age of 16 and are assessed at General Certificate of Secondary Education (GCSE) level. In 2013, a review of Welsh second language instruction in Key Stages 3 and 4 was completed. The report identified considerable gaps in teachers' preparation and training for teaching Welsh; poor Welsh language ethos at many schools; and a general lack of resources to support the instruction of Welsh. Recommendations were made across a number of dimensions including curriculum content, pedagogical practice, and teacher assessment, training, and resources. With a new national curriculum currently in development, this study builds on this review and provides unprecedented detail on pupils' and teachers' perceptions of Welsh language instruction. The current research built on data taken from an existing capacity-building research project on Welsh education, the Wales multi-cohort study (WMS). Quantitative data taken from WMS surveys with over 1200 pupils in schools in Wales indicated that Welsh language lessons were the least enjoyable subject among pupils. The current research aimed to unpick pupil experiences in order to add to the policy development context. To achieve this, forty-four pupils and four teachers in three schools from the larger WMS sample participated in focus groups. Participants from years 9, 11 and 13 who had indicated positive, negative and neutral attitudes towards the Welsh language in a previous WMS survey were selected. Questions were based on previous research exploring issues including, but not limited to, pedagogy, policy, assessment, engagement and (teacher) training. A thematic analysis of the focus group recordings revealed that the majority of participants held positive views around keeping the language alive but did not want to take on responsibility for its maintenance. These views were almost entirely based on their experiences of learning Welsh at school, especially in relation to their perceived lack of choice and opinions around particular lesson strategies and assessment. Analysis of teacher interviews highlighted a distinct lack of resources (materials and staff alike) compared to modern foreign languages, which had a negative impact on student motivation and attitudes. Both staff and students indicated a need for more practical, oral language instruction which could lead to Welsh being used outside the classroom. The data corroborate many of the review's previous findings, but what makes this research distinctive is the way in which pupils poignantly address generally misguided aims for Welsh language instruction, poor pedagogical practice and a general disconnect between Welsh instruction and its daily use in their lives. These findings emphasize the complexity of incorporating the educational sector in strategies for Welsh language maintenance and the complications arising from pedagogical training, support, and resources, as well as teacher and pupil perceptions of, and attitudes towards, teaching and learning Welsh.
Keywords: bilingual education, language maintenance, language revitalisation, minority languages, Wales
Procedia PDF Downloads 112
38 A Qualitative Investigation into Street Art in an Indonesian City
Authors: Michelle Mansfield
Abstract:
Introduction: This paper uses the work of Deleuze and Guattari to consider the street art practice of youth in the Indonesian city of Yogyakarta, a hub of arts and culture in Central Java. Around the world, young people have taken to city streets to populate the new informal exhibition spaces outside the galleries of official art institutions. However, rarely is the focus outside the urban metropolises of the 'Global North'. This paper looks at these practices in a 'Global South' Asian context. Space and place are concepts central to understanding youth cultural expression as it emerges on the streets. Deleuze and Guattari's notion of assemblage enriches understanding of this complex spatial and creative relationship. Yogyakarta street art combines global patterns and motifs with local meanings, symbolism, and language to express local youth voices that convey a unique sense of place on the world stage. Street art has developed as a global urban youth art movement and is theorised as a way in which marginalised young people reclaim urban space for themselves. Methodologies: This study utilised a variety of qualitative methodologies to collect and analyse data. The project took a multi-method approach to data collection, incorporating the qualitative social research methods of ethnography, nongkrong (deep hanging out), participatory action research, online research, in-depth interviews, and focus group discussions. Both interviews and focus groups employed photo-elicitation methodology to stimulate rich data gathering. To analyse the collected data, rhizoanalytic approaches incorporating discourse analysis and visual analysis were utilised. Street art practice is a fluid and shifting phenomenon, adding to the complexity of the inquiry sites. A qualitative approach to data collection and analysis was the most appropriate way to map the components of the street art assemblage and to draw out the complexities of this youth cultural practice in Yogyakarta. Major Findings: The rhizoanalytic approach devised for this study proved a useful way of examining the street art assemblage. It illustrated the ways in which the street art assemblage is constructed, especially how inspiration, materials, creative techniques, audiences, and spaces operate in the creation of artworks. The study also exposed the generational tensions between senior arts practitioners, the established art world, and the young artists. Conclusion: In summary, within the spatial processes of the city, street art is inextricably linked with its audience, its striving artistic community, and everyday life in the smooth, rather than the striated, worlds of the state and the official art world. In this way, the anarchic rhizomatic art practice of nomadic urban street crews can be described not only as 'becoming-artist' but as constituting 'nomos', a way of arranging elements which is not dependent on a structured, hierarchical organisational practice. The site, streets, crews, neighbourhood, and passers-by can all be examined with the concept of assemblage. The assemblage effectively brings into focus the complexity, dynamism, and flows of desire that are a feature of street art practice by young people in Yogyakarta.
Keywords: assemblage, Indonesia, street art, youth
Procedia PDF Downloads 183
37 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries, such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study in four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way through to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly, while the latter usually uses the radio communication inherent in the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures (a labeling sketch follows the abstract). By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights. We show the tools we use, including those for data injection, streaming data processing, and machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
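A minimal pandas sketch of the run-to-failure labeling step described above: remaining useful life is counted down to each engine's last recorded cycle, and a binary "fails soon" label is derived. The 30-cycle horizon and the column names are assumptions, not taken from the study.

```python
import pandas as pd

# Hypothetical run-to-failure records: one row per engine per cycle,
# in the style of the public aircraft-engine degradation datasets.
df = pd.DataFrame({
    "engine_id": [1, 1, 1, 1, 2, 2, 2],
    "cycle":     [1, 2, 3, 4, 1, 2, 3],
    "sensor_1":  [520.1, 519.8, 519.0, 518.2, 521.3, 520.9, 520.0],
})

# Remaining useful life = cycles left until that engine's last record
last_cycle = df.groupby("engine_id")["cycle"].transform("max")
df["rul"] = last_cycle - df["cycle"]

# Binary classification label: does the engine fail within HORIZON cycles?
HORIZON = 30
df["fails_soon"] = (df["rul"] <= HORIZON).astype(int)

# Rows from the same engine are inter-dependent, so split train/test
# by engine_id rather than by row to avoid leakage.
train_ids = [1]
train = df[df["engine_id"].isin(train_ids)]
test = df[~df["engine_id"].isin(train_ids)]
print(df)
```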
Procedia PDF Downloads 387
36 Functional Outcome of Speech, Voice and Swallowing Following Excision of Glomus Jugulare Tumor
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Glomus jugulare tumors arise within the jugular foramen and are commonly seen in females, particularly on the left side. Surgical excision of the tumor may cause lower cranial nerve deficits. Cranial nerve involvement produces hoarseness of voice, slurred speech, and dysphagia, along with other physical symptoms, thereby affecting the quality of life of individuals. Though oncological clearance is mainly emphasized while treating these individuals, little importance is given to their communication, voice, and swallowing problems, which play a crucial part in daily functioning. Objective: To examine the voice, speech, and swallowing outcomes of subjects following excision of a glomus jugulare tumor. Methods: Two female subjects, aged 56 and 62 years, who presented with complaints of change in voice, inability to swallow, and reduced clarity of speech following surgery for a left glomus jugulare tumor, were the participants of the study. Their surgical information revealed multiple cranial nerve palsies involving the left facial nerve, the left superior and recurrent branches of the vagus nerve, the left pharyngeal nerve, the left soft palate, and the left hypoglossal and vestibular nerves. Functional outcomes of voice, speech, and swallowing were evaluated by perceptual and objective assessment procedures. Assessment included the examination of oral structures and functions, assessment of dysarthria by the Frenchay Dysarthria Assessment, cranial nerve functions, and swallowing functions. MDVP and Dr. Speech software were used to evaluate the acoustic parameters of voice and voice quality, respectively. Results: The study revealed that both subjects, subsequent to excision of the glomus jugulare tumor, showed a varied picture of affected oral structures and functions, articulation, voice, and swallowing functions. The cranial nerve assessment showed impairment of the vagus, hypoglossal, facial, and glossopharyngeal nerves. Voice examination indicated vocal cord paralysis associated with breathy voice quality, weak voluntary cough, reduced pitch and loudness range, and poor respiratory support. Perturbation parameters such as jitter and shimmer were affected, along with the s/z ratio, indicative of vocal fold pathology. Reduced maximum phonation duration (MPD) of vowels indicated disturbed coordination between the respiratory and laryngeal systems. Hypernasality was found to be a prominent feature, which reduced speech intelligibility. Imprecise articulation was seen in both subjects, as the hypoglossal nerve was affected following surgery. Injury to the vagus, hypoglossal, glossopharyngeal, and facial nerves disturbed the function of swallowing. All phases of swallowing were affected. Aspiration was observed before and during the swallow, confirming oropharyngeal dysphagia. All subsystems were affected as per the Frenchay Dysarthria Assessment, signifying a diagnosis of flaccid dysarthria. Conclusion: There is observable communication and swallowing difficulty following excision of a glomus jugulare tumor. Even with complete resection, extensive rehabilitation may be necessary due to significant lower cranial nerve dysfunction. The findings of the present study stress the need for the involvement of a speech and swallowing therapist in pre-operative counseling and the assessment of functional outcomes.
Keywords: functional outcome, glomus jugulare tumor excision, multiple cranial nerve impairment, speech and swallowing
Procedia PDF Downloads 252
35 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database of the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents in three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions (a minimal sketch of this workflow follows the abstract). The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest influence on predicting the technical impact of patents, with a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is the most critical input variable, but it has a negative effect on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is the most important factor and has a positive effect on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents; future research could consider more comprehensive data sources from a global perspective. While the study takes various factors into account, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
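A minimal sketch of the LightGBM-plus-SHAP workflow described above, run on synthetic stand-in features; the feature set, sizes, and target here are hypothetical, not the paper's data.

```python
import lightgbm as lgb
import numpy as np
import shap
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical early indicators for 1,000 patents (stand-ins for the
# paper's features) and a synthetic citation-count target.
X = np.column_stack([
    rng.poisson(3, 1000),    # number of owners
    rng.poisson(10, 1000),   # backward citations
    rng.poisson(5, 1000),    # independent claims
    rng.random(1000),        # novelty score
])
y = 2.0 * X[:, 3] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)

# SHAP values quantify each feature's contribution to each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```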
Procedia PDF Downloads 50
34 Green Building for Positive Energy Districts in European Cities
Authors: Paola Clerici Maestosi
Abstract:
Positive Energy District (PED) is a rather recent concept whose aim is to contribute to the main objectives of the Energy Union strategy. It is based on an integrated multi-sectoral approach in response to Europe's most complex challenges: a PED integrates energy efficiency, renewable energy production, and energy flexibility at the city level. The core idea behind Positive Energy Districts (PEDs) is to establish an urban area that can generate more energy than it consumes. Additionally, it should be flexible enough to adapt to changes in the energy market. This is crucial because a PED's goal is not just to achieve an annual surplus of net energy but also to help reduce the impact on the interconnected centralized energy networks. It achieves this by providing options to increase on-site load matching and self-consumption, employing technologies for short- and long-term energy storage, and offering energy flexibility through smart control. Thus, it seems that PEDs can encompass all types of buildings in the city environment. Given this, what is the added value of having green buildings as a constitutive part of PEDs? The paper will present a systematic literature review identifying the role of green building in Positive Energy Districts, answering the following questions: (RQ1) What is the state of the art of PED implementation? (RQ2) What is the penetration of green building in selected PED case studies? The methodological approach is based on a broad holistic study of bibliographic sources according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR); the data are further analysed, mapped, and text-mined through VOSviewer. The main contribution of the research is a cognitive framework on Positive Energy Districts in Europe and a selection of case studies where green building supported the transition to PED. The inclusion of green buildings within Positive Energy Districts (PEDs) adds significant value for several reasons. Firstly, green buildings are designed and constructed with a focus on environmental sustainability, incorporating energy-efficient technologies, materials, and design principles. As integral components of PEDs, these structures contribute directly to the district's overall ability to generate more energy than it consumes. Secondly, green buildings typically incorporate renewable energy sources, such as solar panels or wind turbines, further boosting the district's capacity for energy generation. This aligns with the PED objective of achieving a surplus of net energy. Moreover, green buildings often feature advanced systems for on-site energy management, load matching, and self-consumption. This enhances the PED's capability to respond to variations in the energy market, making the district more agile and flexible in optimizing energy use. Additionally, the environmental considerations embedded in green buildings align with the broader sustainability goals of PEDs. By reducing the ecological footprint of individual structures, PEDs with green buildings contribute to minimizing the overall impact on centralized energy networks and promote a more sustainable urban environment.
In summary, the incorporation of green buildings within PEDs not only aligns with the district's energy objectives but also enhances environmental sustainability, energy efficiency, and the overall resilience of the urban environment.
Keywords: positive energy district, renewable energy production, energy flexibility, energy efficiency
Procedia PDF Downloads 49
33 Blood Thicker Than Water: A Case Report on Familial Ovarian Cancer
Authors: Joanna Marie A. Paulino-Morente, Vaneza Valentina L. Penolio, Grace Sabado
Abstract:
Ovarian cancer is extremely hard to diagnose in its early stages; those afflicted are typically asymptomatic at the time of diagnosis and present in the late stages of the disease, with metastasis to other organs. Ovarian cancers often occur sporadically, with only 5% associated with hereditary mutations. Mutations in the BRCA1 and BRCA2 tumor suppressor genes have been found to be responsible for the majority of hereditary ovarian cancers. One type of ovarian tumor is Malignant Mixed Mullerian Tumor (MMMT), which is a very rare and aggressive type, accounting for only 1% of all ovarian cancers. Reported here is the case of a 43-year-old G3P3 (3003) who came to our institution due to a 2-month history of difficulty of breathing. The family history revealed that her eldest and younger sisters both died of ovarian malignancy, the younger sister having a histopathology report of endometrioid ovarian carcinoma, left ovary, stage IIIb. She still has 2 asymptomatic sisters. Physical examination pointed to a pleural effusion of the right lung and the presence of bilateral ovarian new growths, with a Sassone score of 13. The admitting diagnosis was G3P3 (3003), ovarian new growth, bilateral, malignant; pleural effusion secondary to malignancy. BRCA testing was requested to establish a hereditary mutation; however, the patient had no funds. Once the patient was stabilized, TAHBSO with surgical staging was performed. Intraoperatively, the pelvic cavity was occupied by firm, irregularly shaped ovaries, with a colorectal metastasis. Microscopic sections from both ovaries and the colorectal metastasis showed pleomorphic tumor cells lined by cuboidal to columnar epithelium exhibiting glandular complexity, displaying nuclear atypia and an increased nuclear-cytoplasmic ratio, infiltrating the stroma, consistent with the features of Malignant Mixed Mullerian Tumor, since MMMT is composed histologically of malignant epithelial and sarcomatous elements. In conclusion, discussed here are the clinicopathological features of a patient with primary ovarian Malignant Mixed Mullerian Tumor, a rare malignancy comprising only 1% of all ovarian neoplasms. Also, in understanding hereditary ovarian cancer syndromes and their relation to this patient, it cannot be overemphasized that a comprehensive family history is fundamental for early diagnosis. The familial association of the disease, given that the patient has two sisters who were diagnosed with an advanced stage of ovarian cancer and succumbed to the disease at a much earlier age than is reported in the general population, points to a possible hereditary syndrome, which occurs in only 5% of ovarian neoplasms. In a low-resource setting in a third-world country, the following are recommended for monitoring and/or screening women who are at high risk of developing ovarian cancer, such as the remaining sisters of the patient: 1) physical examination focusing on the breast, abdomen, and rectal area every 6 months; 2) transvaginal sonography every 6 months; 3) mammography annually; 4) CA125 for postmenopausal women; 5) genetic testing for BRCA1 and BRCA2, reserved for those who are financially capable.
Keywords: BRCA, hereditary breast-ovarian cancer syndrome, malignant mixed mullerian tumor, ovarian cancer
Procedia PDF Downloads 289
32 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research
Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde
Abstract:
Introduction: In-vitro testing of arthroplasty materials is of paramount importance for ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is the pin-on-plate tribometer, an early-stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby progressing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with more extreme kinematic conditions; therefore, a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as to increase the understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was of paramount importance, as this allows an approximation of the loads present in a gait cycle to be applied and the fatigue properties of materials to be tested. Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve-station device split into three sets of four stations, providing an increased testing capacity compared to most current pin-on-plate tribometers. The loading of the pins is generated using a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds contact pressures currently achievable in the literature and opens new research avenues, such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched ON/OFF mid-cycle, another feature not currently reported elsewhere, which allows investigation into intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using ultra-high-molecular-weight polyethylene pins and 316L stainless steel plates (polished to Ra < 0.05 µm), at operating pressures of 2-6 MPa and a frequency of 1 Hz, allowing validation of the machine against results previously reported in the literature.
The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing
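As a rough plausibility check on the loading figures above, the axial pin force follows directly from contact pressure times pin face area (1 MPa = 1 N/mm²). The short sketch below uses only values quoted in the abstract; the function and variable names are our own illustration, not part of the rig's control software.

```python
# Axial force a pneumatic actuator must deliver to reach a target contact
# pressure on a cylindrical pin face: F (N) = p (N/mm^2) x A (mm^2).

def pin_force(contact_pressure_mpa: float, face_area_mm2: float) -> float:
    """Axial pin load in newtons for a given contact pressure and face area."""
    return contact_pressure_mpa * face_area_mm2

FACE_AREA_MM2 = 3.2  # round pin face quoted in the abstract

# Peak capability of the new rig vs. the standard gait-cycle validation range
print(pin_force(201.0, FACE_AREA_MM2))  # ~643 N at the 201 MPa maximum
print(pin_force(2.0, FACE_AREA_MM2))    # 6.4 N at the 2 MPa lower bound
print(pin_force(6.0, FACE_AREA_MM2))    # 19.2 N at the 6 MPa upper bound
```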
31 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom-built for every product and remain unusable for other variants. A majority of these tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, AutoTest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework that can integrate a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is certainly one focused on maximizing reusability and continuous integration, and on leveraging artifacts across the complete development cycle, across phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, was designed, which can be deployed in embedded system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; and (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature.
Applying the embedded test framework accelerator throughout the design and development phases enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
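The reusability principle described above can be illustrated with a minimal sketch: test cases are written against an abstract board interface, so only a thin platform-specific layer changes between hardware variants. All class and method names below are hypothetical illustrations, not the actual ETF API.

```python
# Minimal sketch: a reusable peripheral test decoupled from board specifics.
from abc import ABC, abstractmethod


class Board(ABC):
    """Platform-specific HW initialization/configuration, isolated from tests."""

    @abstractmethod
    def uart_write(self, data: bytes) -> None: ...

    @abstractmethod
    def uart_read(self, n: int) -> bytes: ...


class LoopbackBoard(Board):
    """Stand-in for a real board whose UART TX is wired back to RX."""

    def __init__(self) -> None:
        self._buf = b""

    def uart_write(self, data: bytes) -> None:
        self._buf += data

    def uart_read(self, n: int) -> bytes:
        out, self._buf = self._buf[:n], self._buf[n:]
        return out


def uart_loopback_test(board: Board) -> bool:
    """Reusable test case: runs unchanged on any Board implementation."""
    probe = b"\x55\xaa\x0f"
    board.uart_write(probe)
    return board.uart_read(len(probe)) == probe


if __name__ == "__main__":
    print("UART loopback:", "PASS" if uart_loopback_test(LoopbackBoard()) else "FAIL")
```

Porting to a new platform then means implementing the thin `Board` layer once, while the test library itself stays untouched.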
30 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications, such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot’s height and the narrow field of view (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots, taken from the view angle of a small-scale robot. The dataset of images was divided into 70% for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough transform algorithm. Thirdly, the positions of the entrance point and the center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center once this became visible again to the robot. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point; the left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform, with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
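The final control step lends itself to a compact sketch: a simple proportional law steers the chassis toward the reference point, and differential-drive inverse kinematics converts the chassis command into wheel speeds. This is an illustrative sketch under assumed gains and geometry, not the authors' MATLAB implementation.

```python
import math


def chassis_command(x, y, theta, x_ref, y_ref, k_v=0.5, k_w=1.5):
    """P-control toward the reference (entrance point or slot center).

    Returns (v, omega): linear and angular velocity commands for the chassis.
    Gains k_v, k_w are assumed illustrative values.
    """
    dx, dy = x_ref - x, y_ref - y
    v = k_v * math.hypot(dx, dy)                         # speed ~ distance error
    heading_err = math.atan2(dy, dx) - theta
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    omega = k_w * heading_err                            # turn toward the target
    return v, omega


def wheel_speeds(v, omega, wheel_radius, track_width):
    """Differential-drive inverse kinematics.

    Returns (left, right) wheel angular speeds in rad/s for a chassis
    linear velocity v (m/s) and angular velocity omega (rad/s).
    """
    w_left = (v - omega * track_width / 2.0) / wheel_radius
    w_right = (v + omega * track_width / 2.0) / wheel_radius
    return w_left, w_right


if __name__ == "__main__":
    # Robot 0.5 m behind the slot entrance, facing it head-on (assumed geometry)
    v, omega = chassis_command(x=0.0, y=0.0, theta=0.0, x_ref=0.5, y_ref=0.0)
    print(wheel_speeds(v, omega, wheel_radius=0.03, track_width=0.14))
```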
29 Developing a Performance Measurement System for Arts-Based Initiatives: Action Research on Italian Corporate Museums
Authors: Eleonora Carloni, Michela Arnaboldi
Abstract:
In academia, the investigation of the relationship between cultural heritage and corporations is ubiquitous across several fields of study. In practice, corporations increasingly integrate arts and cultural heritage into their strategies for disparate benefits: to foster customers’ purchase intentions with authentic and aesthetic experiences, to improve their reputation with local communities, and to motivate employees through creative thinking. These artistic interventions take diverse forms, from sponsorships to arts-based training centers for employees, but scholars agree that the fullest expression of this cultural trend is the corporate museum, growing in both number and relevance. Corporate museums are museum-like settings hosting artworks related to the corporation’s history and interests. In academia, they have been described as strategic assets and associated with diverse uses for the corporation’s benefit, from places for the preservation of cultural heritage to tools for public relations and cultural flagship stores. Previous studies have thus extensively, but in a fragmented way, examined the diverse benefits that opening a corporate museum brings to corporations, with no comprehensive approach and little discussion of how to evaluate and report corporate museums’ performance. Stepping forward, the present study aims to investigate: 1) what key performance measures corporate museums need to report to the associated corporations; and 2) how these key performance measures are reported to the concerned corporations. This direction of study is not only suggested as a future direction in academia but also has a solid basis in practice, answering the need of corporate museum directors to account for the museum’s activities to the concerned corporation. Coherently, at the empirical level, the study relies on the action research method, whose distinctive feature is to develop practical knowledge through a participatory process. This paper indeed draws on the experience of a collaborative project between the researchers and a set of corporate museums in Italy, aimed at co-developing a performance measurement system. The project involved two steps: a first step, in which the researchers derived potential performance measures from the literature along with exploratory interviews; and a second step, in which the researchers supported the pool of corporate museum directors in co-developing a set of key performance indicators for reporting. Preliminary empirical findings show that while scholars insist on corporate museums’ capability to develop networking relations, directors insist on the role of museums as internal suppliers of knowledge for innovation goals. Moreover, directors stress the museums’ cultural mission and outcomes as potential benefits for the corporation, remarking that both cultural and business measures should be included in the final tool. In addition, they pay close attention to wording in humanistic terms while struggling to express all measures in economic terms. The paper aims to contribute to the literature on corporate museums and, more broadly, on arts-based initiatives in two directions. Firstly, it elaborates key performance measures, with related indicators, for reporting on cultural initiatives for corporations.
Secondly, it provides evidence of the challenges and practices involved in reporting on these initiatives, given the tensions arising from the co-existence of diverse perspectives, namely the arts and business worlds.
Keywords: arts-based initiative, corporate museum, hybrid organization, performance measurement
28 Guard@Lis: Birdwatching Augmented Reality Mobile Application
Authors: Jose A. C. Venancio, Alexandrino J. M. Goncalves, Anabela Marto, Nuno C. S. Rodrigues, Rita M. T. Ascenso
Abstract:
Nowadays, it is common to find people who want to get away from the everyday routine, seeking well-being and pleasant emotions. Trying to disconnect from their usual places of work and residence, they pursue different places, such as tourist destinations, aiming to have unexpected experiences. To make this exploration process easier, cities and tourism agencies seek new opportunities and solutions, creating routes with diverse cultural landmarks, including natural landscapes and historic buildings. These offerings frequently also aim at the preservation of local patrimony. In nature and wildlife, birdwatching is an activity that has been growing, both in cities and in the countryside. This activity seeks to find, observe, and identify the diversity of birds living permanently or temporarily in these places, and it is usually supported by birdwatching guides. Leiria (Portugal) is a well-known city with several historical and natural landmarks, like the Lis river and the castle where King D. Dinis lived in the 13th century. Along the Lis river, a conservation process was carried out and a pedestrian route was created (Polis project). This is considered an excellent spot for birdwatching, especially for the gray heron (Ardea cinerea) and the kingfisher (Alcedo atthis). There is also a route through the city, from the riverside to the castle, which hosts a characteristic variety of species, such as the barn swallow (Hirundo rustica), known for passing through during different seasons of the year. Birdwatching is sometimes a difficult task, since it is not always possible to see all the bird species that inhabit a given place. For this reason, we identified the need for a technological solution to ease this activity. This project aims to encourage people to learn about the various bird species living along the Lis river and to promote the preservation of nature in a conscious way. The work is being conducted in collaboration with the Leiria Municipal Council and the Environmental Interpretation Centre. It intends to show the majesty of the Lis river, a place visited daily by many people, such as children and families, who use it for didactic and recreational activities. We are developing a multi-platform mobile application (Guard@Lis) that allows bird species to be observed along a given route, using representative digital 3D models through the integration of augmented reality technologies. Guard@Lis displays a route with points of interest for birdwatching and a list of species for each point of interest, along with scientific information, images, and sounds for every species. For some birds, to ensure their observation, the user can watch them in loco, in their real and natural environment, on a mobile device by means of augmented reality, giving the sensation of the presence of these birds even when they cannot be seen in that place at that moment. The augmented reality feature is being developed with the Vuforia SDK, using a hybrid approach to the recognition and tracking processes that combines markers and geolocation techniques. The application proposes routes and notifies users with alerts about the possibility of viewing augmented reality bird models. The final Guard@Lis prototype will be tested by volunteers in situ.
Keywords: augmented reality, birdwatching route, mobile application, nature tourism, watch birds using augmented reality
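The geolocation part of the hybrid recognition-and-tracking approach reduces to a proximity test against georeferenced points of interest. A minimal sketch of such a test is given below; the coordinates, names, and alert radius are invented placeholders rather than the app's actual data.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


POIS = [  # hypothetical points of interest along the Lis route
    ("Gray heron spot", 39.7460, -8.8070),
    ("Kingfisher spot", 39.7430, -8.8055),
]


def nearby_pois(lat, lon, radius_m=50.0):
    """POIs within radius_m of the user -- candidates for an AR viewing alert."""
    return [name for name, plat, plon in POIS
            if haversine_m(lat, lon, plat, plon) <= radius_m]
```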
27 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to decarbonising their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems such as the TNEP. In particular, the use of AI together with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with column generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using column generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and solving them therefore requires significant time and resources. Hence, in this proposal, we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the column generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique, and its integration into the column generation algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as on other power system models.
Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
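The classifier idea can be illustrated independently of the TNEP model itself: features extracted from the relaxed subproblem solution (for instance, the fractional value of each binary variable) are used to predict the corresponding integer value. The sketch below uses synthetic data and a logistic regression as a stand-in; it is not the authors' exact pipeline, and the feature choices are assumptions for illustration only.

```python
# Sketch: predict subproblem binary variables from LP-relaxation features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
relaxed_value = rng.uniform(0.0, 1.0, n)   # fractional LP value of the binary var
cost_signal = rng.normal(0.0, 1.0, n)      # e.g. a scaled reduced-cost feature
X = np.column_stack([relaxed_value, cost_signal])
# Synthetic ground truth: the integer solution mostly follows the relaxation
y = ((relaxed_value + 0.1 * cost_signal) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

In the paper's scheme, a prediction like `clf.predict` would fix the binaries of a column generation subproblem, with a safeguard so that solution optimality is still guaranteed.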
26 Provotyping Futures Through Design
Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli
Abstract:
Design practices throughout history offer a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains a cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence, and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies, which challenge designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design’s main tasks are to develop options, to increase the variety of choices, and to cultivate its role as scout, jester, and agent provocateur for the public, so that design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline because, although it embodies a radical research tool, it raises various challenges when the design process goes further in translating such a vision into an artefact, whether tangible or intangible, through which it should deliver that bit of future into everyday experience. Today, designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus rises from research strands like speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. The contribution positions itself at the intersection of those approaches, aiming to discuss design’s tools of inquiry through which it is possible to grasp the agency of imagined futures in the present time. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into the designed artifacts par excellence, prototypes, and their conceptual other, provotypes: with both being unfinished and multifaceted, the first are effective in reiterating solutions to problems already framed, while the second prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on some provotypes throughout history that challenged markets and, above all, social and cultural structures, the contribution’s final aim is to understand the knowledge produced by provotypes, understood as design spaces where design’s humanistic side might help develop a deeper sensibility about uncertainty and, most of all, the unfinished nature of societal artifacts, whose experimentation leaves marks and traces to build up f(r)ictions as vital sparks of plurality and collective life.
Keywords: speculative design, provotypes, design knowledge, political theory
25 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work well in some situations, such as fractured zones. Cutting-edge technologies have been introduced to solve and optimize these issues. Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the most widely used of these new digital methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and for collecting mechanical-geological data on rocks, such as temperature, movement, pressure, or stress levels. Other benefits of IoT technologies include assessing structural integrity, especially for cap rocks within hydrocarbon systems, and rock mass behavior, in support of further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), or improving safety risk management (SRM) and potential hazard identification (PHI). Edge computing techniques can process, aggregate, and analyze data collected by IoT devices in real time, providing detailed insights into the behavior of rocks under various conditions (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. This useful state-of-the-art technology can therefore support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; it must be mentioned that this feature is very important for environmental goals. More often than not, rock mechanical studies draw on different kinds of data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable means for sharing extensive rock mechanics information, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have enabled real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. By employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing timely and informed actions based on real-time data insights. The successful implementation of IoT, CCt, and ECt has led to safer, optimized processes and environmentally conscious approaches in underground geological endeavors.
Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations
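A minimal sketch of the edge-processing idea follows: an edge node keeps a rolling window of raw IoT sensor samples, raises a local alert on a stress spike, and forwards only compact summaries to the cloud. The threshold, window size, and field names are hypothetical choices for illustration.

```python
from collections import deque
from statistics import mean
from typing import Optional

WINDOW_SIZE = 60       # raw samples kept at the edge node
STRESS_LIMIT = 45.0    # MPa, hypothetical local alert threshold

window: deque = deque(maxlen=WINDOW_SIZE)


def on_sample(stress_mpa: float) -> Optional[dict]:
    """Ingest one IoT reading; return a cloud-bound summary when the window fills."""
    window.append(stress_mpa)
    if stress_mpa > STRESS_LIMIT:
        # Immediate local action, no round trip to the cloud required
        print(f"EDGE ALERT: stress {stress_mpa:.1f} MPa exceeds {STRESS_LIMIT} MPa")
    if len(window) == WINDOW_SIZE:
        summary = {"mean": mean(window), "max": max(window), "n": WINDOW_SIZE}
        window.clear()
        return summary  # only this compact aggregate travels to the cloud
    return None
```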
24 India's Geothermal Energy Landscape and Role of Geophysical Methods in Unravelling Untapped Reserves
Authors: Satya Narayan
Abstract:
India, a rapidly growing economy with a burgeoning population, grapples with the dual challenge of meeting rising energy demands and reducing its carbon footprint. Geothermal energy, an often overlooked and underutilized renewable source, holds immense potential for addressing this challenge. Geothermal resources offer a valuable, consistent, and sustainable energy source and may contribute significantly to India's energy mix. This paper discusses the importance of geothermal exploration in India, emphasizing its role in achieving sustainable energy production while mitigating environmental impacts. It also describes the methodology employed to assess geothermal resource feasibility, including geophysical surveys and borehole drilling. The results and discussion sections highlight promising geothermal sites across India, illuminating the nation's vast geothermal potential. Geophysical surveying detects potential geothermal reservoirs, characterizes subsurface structures, maps temperature gradients, monitors fluid flow, and estimates key reservoir parameters. Globally, geothermal energy falls into high- and low-enthalpy categories, with India mainly having low-enthalpy resources, especially hot springs. The northwestern Himalayan region hosts high-temperature geothermal resources due to geological factors. Promising sites, like the Puga Valley and Chhumthang, among others, feature hot springs suitable for various applications. The Son-Narmada-Tapti lineament intersects regions rich in geological history, contributing to geothermal resources. Southern India, including the Godavari Valley, has thermal springs suitable for power generation. The Andaman-Nicobar region, linked to subduction and volcanic activity, holds high-temperature geothermal potential. Geophysical surveys, utilizing gravity, magnetic, seismic, magnetotelluric, and electrical resistivity techniques, offer vital information on subsurface conditions essential for detecting, evaluating, and exploiting geothermal resources. The gravity and magnetic methods map the depth of the (high-temperature) mantle boundary and can subsequently be used to accurately determine the Curie depth. Electrical methods indicate the presence of subsurface fluids. Seismic surveys create detailed subsurface images, revealing faults and fractures and establishing possible connections to aquifers. Borehole drilling is crucial for assessing geothermal parameters at different depths. Detailed geochemical analysis and geophysical surveys in Dholera, Gujarat, reveal untapped geothermal potential in India, aligning with renewable energy goals. In conclusion, geophysical surveys and borehole drilling play a pivotal role in economically viable geothermal site selection and feasibility assessments. With ongoing exploration and innovative technology, these surveys effectively minimize drilling risks, optimize borehole placement, aid in environmental impact evaluations, and facilitate remote resource exploration. Their cost-effectiveness informs decisions regarding geothermal resource location and extent, ultimately promoting sustainable energy and reducing India's reliance on conventional fossil fuels.
Keywords: geothermal resources, geophysical methods, exploration, exploitation
23 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV), or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance handled by laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 µm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it needs neither a complex micro-channel network nor passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be fully generated in well under a second, making this a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as well as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
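For readers unfamiliar with the underlying physics, the classical Taylor-Aris result for a circular capillary states that the effective axial dispersion coefficient is D_eff = D_m + a²U²/(48·D_m), with a the channel radius, U the mean flow velocity, and D_m the molecular diffusivity. The sketch below evaluates it taking 200 µm as the channel diameter described above; the velocity and diffusivity are assumed illustrative values, not measured ones.

```python
def taylor_aris_deff(a_m: float, u_m_s: float, d_m2_s: float) -> float:
    """Effective axial dispersion coefficient (m^2/s) in a circular tube:
    D_eff = D_m + a^2 * U^2 / (48 * D_m)."""
    return d_m2_s + (a_m ** 2) * (u_m_s ** 2) / (48.0 * d_m2_s)


a = 100e-6    # radius, m (200 um diameter channel assumed)
u = 1e-2      # assumed mean flow velocity, m/s
d = 1e-9      # assumed molecular diffusivity, m^2/s (typical small ion)

print(f"D_eff = {taylor_aris_deff(a, u, d):.3e} m^2/s")
```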
22 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates with a back-to-back power electronics converter typically rated at 30% over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% of the synchronous speed for the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature much lower total harmonic distortion than hysteresis counterparts with variable switching rates, and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research into sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented overcomes this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio, on which the underlying rotor angular velocity and position MRAS observer relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
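The speed relationship underpinning the quoted 2:1 range can be sketched from the general brushless doubly fed machine equation, in which the rotor speed is set by the sum of the two winding frequencies divided by the total number of pole pairs. The pole-pair and frequency values below are assumed for illustration and are not taken from the paper.

```python
def bdfrg_speed_rpm(f_p: float, f_s: float, p_p: int, p_s: int) -> float:
    """Rotor speed (rpm) of a brushless doubly fed machine.

    f_p, f_s -- primary (grid) and secondary (converter) frequencies, Hz;
                f_s may be negative (phase-sequence reversal).
    p_p, p_s -- primary and secondary winding pole pairs; the rotor carries
                p_p + p_s pole pairs.
    """
    return 60.0 * (f_p + f_s) / (p_p + p_s)


# Assumed example: 50 Hz grid, 8/4-pole windings (p_p=4, p_s=2). Sweeping the
# converter over roughly +/-17 Hz spans about a 2:1 speed range around the
# 500 rpm "synchronous" point (f_s = 0), consistent with a fractional converter.
for f_s in (-17.0, 0.0, 17.0):
    print(f"f_s = {f_s:+5.1f} Hz  ->  {bdfrg_speed_rpm(50.0, f_s, 4, 2):5.0f} rpm")
```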
21 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution for harbor and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design; for instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. With this concern in mind, the study focuses in particular on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and with CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that hydrodynamic pressure contributes about 5% of the total load applied on the sheet pile, owing to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This relies on pseudo-static analysis, since a dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of the relevant factors here is the selection of the seismic reduction factor. A large number of studies discuss its importance, as well as its many uncertainties. Moreover, current European standards make no clear statement on this point and recommend using a reduction factor equal to 1, which leads to conservative requirements compared with more advanced methods. Given this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies by Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would help specify recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improving its design methodologies and approaches.
Consequently, designs could offer better seismic solutions thanks to advanced methods such as those presented in this study.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
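The Westergaard formulation referred to above gives the hydrodynamic pressure on a rigid vertical wall as p(z) = (7/8)·k_h·γ_w·√(H·z), with a resultant force of (7/12)·k_h·γ_w·H² per unit wall length. The sketch below evaluates it with the instantaneous seismic coefficient, illustrating the study's point that the hydrodynamic load fluctuates through the record instead of staying constant; the water depth and k_h values are assumed.

```python
import math


def westergaard_pressure(k_h: float, h: float, z: float, gamma_w: float = 9.81) -> float:
    """Westergaard hydrodynamic pressure (kPa) at depth z on a rigid vertical wall.

    k_h     -- horizontal seismic coefficient (instantaneous a_h / g)
    h       -- total water depth (m); z is measured downward from the surface
    gamma_w -- unit weight of water (kN/m^3)
    """
    return 0.875 * k_h * gamma_w * math.sqrt(h * z)


def westergaard_resultant(k_h: float, h: float, gamma_w: float = 9.81) -> float:
    """Total hydrodynamic force per unit wall length (kN/m): (7/12) k_h gamma_w H^2."""
    return (7.0 / 12.0) * k_h * gamma_w * h ** 2


H = 12.0  # assumed water depth at the quay, m

# Pressure profile at one instant of the record (k_h = 0.15 assumed):
for z in (3.0, 6.0, 12.0):
    print(f"z = {z:4.1f} m  p = {westergaard_pressure(0.15, H, z):6.2f} kPa")

# Resultant force at a few instants: zero crossing, moderate and peak shaking
for k_h in (0.0, 0.05, 0.15):
    print(f"k_h = {k_h:.2f}  F = {westergaard_resultant(k_h, H):6.1f} kN/m")
```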
20 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases
Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar
Abstract:
Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from impairment of the nucleotide excision repair pathway, which recognizes and repairs ultraviolet radiation (UVR) induced DNA damage. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancer, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas with frequent consanguinity. Being extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India over a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or onset of cutaneous malignancies at an early age (first decade), against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age at presentation was 1.2 years (range 7 months - 3 years), and three children presented during infancy. The male to female ratio was 5:3, and all were born of consanguineous marriages. The patients presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time, there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varied degrees of ocular involvement, and three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, and clouding and scarring of the cornea with complete or partial loss of vision, as well as ophthalmic malignancies. Half of the cases (n=4) had premalignant (actinic keratosis) and malignant lesions of the skin and eye in early childhood, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC). One patient had the simultaneous occurrence of multiple malignancies (SCC, BCC, and melanoma). Subnormal intelligence was the only neurological feature noticed; none had sensorineural hearing loss, microcephaly, neuroregression, or neurodeficit. All patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists, and psychiatrists. Conclusion: Although to date there is no complete cure for XP and the disease is ultimately fatal, increased awareness, early diagnosis followed by persistent, vigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients’ quality of life and life expectancy. Further research is required to formulate optimal management of XP, specifically the role and possibilities of gene therapy.
Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa
19 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System
Authors: M. Kadela
Abstract:
The load caused by traffic movement should be transferred harmlessly through the road construction as follows: onto the stiff upper layers of the structure (e.g., the abrading and binding asphalt layers), through the layers of the principal and secondary substructure, and onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction in the system “road construction – subsoil” should in this case be one of the basic requirements for assessing the internal forces of the structure and its durability. Analyses of road constructions are based on elements of mechanics, which allow computational models to be created, and on experimental results incorporated in the criteria of fatigue life analyses. This approach is a fundamental feature of the commonly used mechanistic methods, which allow arbitrarily complex numerical computational models to be used in evaluations of the fatigue life of structures. Considering the behavior of the system “road construction – subsoil”, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, a relatively small deformation grows in the initial phase; this increase then disappears, and the deformation becomes completely reversible. The reliability of the calculation model depends on the appropriate use (for a given type of analysis) of constitutive relationships. Phenomena occurring in the initial stage of the system “road construction – subsoil” are unfortunately difficult to interpret in the modeling process. The classic interpretation of material behavior in the elastic-plastic model (e-p) is that the elastic phase of the work (e) passes into the elastic-plastic phase (e-p) as the load increases (or as deformation grows in the damaged structure). The paper presents the essence of the calibration process of the cooperating subsystem in the calculation model of the system “road construction – subsoil”, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on the deformation and stress response. The proper comparative base for assessing the reliability of the created models should, however, be the actual, monitored system “road construction – subsoil”. The paper also presents the behavior of subsoil under cyclic load transmitted by the pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, forming part of the test road near Katowice, Poland. A different behavior of the homogeneous subsoil under the pavement is observed in different seasons of the year: the pavement works as a flexible structure in summer and as a rigid plate in winter.
Although the observed character of the subsoil response is the same regardless of the applied load and loaded area, this response can be divided into a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013), financed by The National Centre for Research and Development within the LIDER Programme.
Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system