Search results for: noise shaping
394 Optimum Design of Hybrid (Metal-Composite) Mechanical Power Transmission System under Uncertainty by Convex Modelling
Authors: Sfiso Radebe
Abstract:
The design models dealing with flawless composite structures are in abundance, where the mechanical properties of composite structures are assumed to be known a priori. However, if the worst-case scenario is assumed, where material defects combined with processing anomalies in composite structures are expected, a different solution is attained. Furthermore, if the system being designed combines hybrid elements in series, individually affected by material constant variations, a different approach needs to be taken. In the body of literature, there is a compendium of research that investigates different modes of failure affecting hybrid metal-composite structures. It covers areas pertaining to the failure of hybrid joints, structural deformation, transverse displacement, and the suppression of vibration and noise. In the present study, a system employing a combination of two or more hybrid power-transmitting elements is explored for the least favourable dynamic loads as well as weight minimization, subject to uncertain material properties. Elastic constants are assumed to be uncertain-but-bounded quantities varying slightly around their nominal values, and the solution is determined using convex models of uncertainty. Convex analysis of the problem leads to the computation of the least favourable solution and ultimately to a robust design. This approach contrasts with a deterministic analysis, where the average values of elastic constants are employed in the calculations, neglecting the variations in the material properties.
Keywords: convex modelling, hybrid, metal-composite, robust design
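The convex (interval) treatment of uncertain-but-bounded constants described above can be sketched in a few lines. Everything concrete here is an illustrative assumption rather than the paper's actual model: a two-element series torsional stiffness, invented nominal moduli, and a ±5% bound, with the least favourable case found by scanning the vertices of the uncertainty box.

```python
from itertools import product

# Hypothetical sketch (not the paper's model): two power-transmitting
# elements in series, each with an uncertain-but-bounded modulus G_i in
# [G_nom*(1-e), G_nom*(1+e)]. The series stiffness is
# k = 1 / (1/k1 + 1/k2) with k_i proportional to G_i; the least
# favourable (most compliant) case lies at a vertex of the box.

def series_stiffness(k1, k2):
    return 1.0 / (1.0 / k1 + 1.0 / k2)

def least_favourable(nominal, rel_bound, geometry):
    """Scan the 2^n vertices of the uncertainty box for minimum stiffness."""
    worst = None
    for signs in product((-1.0, 1.0), repeat=len(nominal)):
        moduli = [g * (1.0 + s * rel_bound) for g, s in zip(nominal, signs)]
        k = series_stiffness(*[c * g for c, g in zip(geometry, moduli)])
        if worst is None or k < worst:
            worst = k
    return worst

# Invented nominal moduli (GPa): metal ~79, composite ~5, for illustration only
k_nom = series_stiffness(79.0, 5.0)
k_worst = least_favourable([79.0, 5.0], 0.05, geometry=[1.0, 1.0])
```

A deterministic analysis would size the system against `k_nom`; the convex model instead sizes it against `k_worst`, which is what makes the resulting design robust.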
Procedia PDF Downloads 211
393 A Case Study on Post-Occupancy Evaluation of User Satisfaction in Higher Educational Buildings
Authors: Yuanhong Zhao, Qingping Yang, Andrew Fox, Tao Zhang
Abstract:
Post-occupancy evaluation (POE) is a systematic approach to assessing actual building performance after a building has been occupied for some time. In this paper, a structured POE assessment was conducted using the building use survey (BUS) methodology in two higher educational buildings in the United Kingdom. This study aims to help close the building performance gap, provide optimized building operation suggestions, and improve occupants' satisfaction level. The questionnaire survey investigated the influences of environmental factors on user satisfaction across the main aspects of overall building design, thermal comfort, perceived control, and indoor environment quality for noise, lighting, and ventilation, as well as non-environmental factors such as background information about age, sex, time in buildings, workgroup size, and so on. The results indicate that the occupant satisfaction level with the main aspects of overall building design, indoor environment quality, and thermal comfort in summer and winter in both buildings is lower than the benchmark data. The feedback from this POE assessment has been reported to the building management team to allow managers to develop high-performance building operation plans. Finally, this research provided improvement suggestions for the building operation system to narrow the performance gap and improve user work experience satisfaction and productivity.
Keywords: building performance assessment systems, higher educational buildings, post-occupancy evaluation, user satisfaction
Procedia PDF Downloads 152
392 How Defining the Semi-Professional Journalist Is Creating Nuance and a Familiar Future for Local Journalism
Authors: Ross Hawkes
Abstract:
The rise of hyperlocal journalism and its role in the wider local news ecosystem has been debated across both industry and academic circles, particularly through the lens of structures, models, and platforms. The nuances within this sphere are now allowing the semi-professional journalist to emerge as a key component of the landscape at the fringes of journalism. By identifying and framing the labour of these individuals against a backdrop of change within the professional local newspaper publishing industry, it is possible to address wider debates around the ways in which participants enter and exist in the space between amateur and professional journalism. Considerations around prior experience and understanding allow us to better shape and nuance the hyperlocal landscape in order to understand the challenges and opportunities facing local news via this emergent form of semi-professional journalistic production. The disruption to local news posed by the changing nature of audiences, long-established methods of production, the rise of digital platforms, and increased competition in the online space has raised questions around the very nature and identity of local news, as well as the uncertain future and precarity which surrounds it. While the hyperlocal sector has long been regarded as a potential future direction for local journalism, through an alternative approach to reporting and as a mechanism for participants to pass from amateurism towards professionalism, there is now a semi-professional space being occupied in a different way. Those framed as semi-professional journalists are not necessarily passing through this space at the fringes of the professional industry; instead, they are occupying and claiming the space as an entity in itself.
By framing the semi-professional journalist through a lens of prior experience and knowledge of the sector, it is possible to identify how their motivations vary from the traditional metrics of financial gain, personal progression, or a sense of civic or community duty. While such factors may be by-products of their labour, the desire of such reporters to recreate and retain experiences and values from their past as a participant or consumer is the central basis of the framework to define the semi-professional journalist. Through understanding the motivations, aims, and factors shaping the semi-professional journalist within the wider journalism and hyperlocal journalism debates and landscape, it will be possible to better frame the role they can play in sustaining the longer-term provision of local news and addressing broader issues and factors within the sector.
Keywords: hyperlocal, journalism, local news, semi-professionalism
Procedia PDF Downloads 28
391 The Clustering of Multiple Sclerosis Subgroups through L2 Norm Multifractal Denoising Technique
Authors: Yeliz Karaca, Rana Karabudak
Abstract:
Multifractal denoising techniques are used in the identification of significant attributes by removing noise from the dataset. Magnetic resonance imaging (MRI) is the most sensitive method for identifying chronic disorders of the nervous system such as Multiple Sclerosis (MS). MRI and Expanded Disability Status Scale (EDSS) data belonging to 120 individuals who have one of the subgroups of MS (Relapsing Remitting MS (RRMS), Secondary Progressive MS (SPMS), Primary Progressive MS (PPMS)), as well as 19 healthy individuals in the control group, have been used in this study. The study comprises the following stages: (i) the L2 Norm Multifractal Denoising technique, one of the multifractal techniques, has been applied to the MS data (MRI and EDSS), yielding a new dataset; (ii) this new MS dataset has been applied to the K-Means and Fuzzy C-Means (FCM) clustering algorithms, which are among the unsupervised methods, and the clustering performances have been compared; (iii) in the identification of significant attributes in the MS dataset through the multifractal denoising (L2 Norm) technique, the K-Means and FCM algorithms applied to the MS subgroups and the control group of healthy individuals have yielded excellent performance. According to the clustering results based on the MS subgroups obtained in the study, successful clustering has been achieved with both the K-Means and FCM algorithms by applying the L2 norm multifractal denoising technique to the MS dataset. Clustering performance has been more successful with the denoised dataset (L2_Norm MS Data Set), in which significant attributes are obtained by applying the L2 Norm denoising technique.
Keywords: clinical decision support, clustering algorithms, multiple sclerosis, multifractal techniques
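The clustering stage can be illustrated with a minimal sketch. The one-dimensional toy "EDSS-like" scores and the plain k-means below are stand-ins, under stated assumptions, for the study's K-Means/FCM runs on the denoised MRI and EDSS features; the multifractal denoising step itself is not reproduced here.

```python
import random

# Illustrative only: minimal k-means on 1-D toy scores, standing in for
# the clustering applied to the (already denoised) MS dataset.

def kmeans_1d(values, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute centers; keep the old center if a cluster went empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Toy scores: a low-disability group near 1.5 and a high-disability group near 6
scores = [1.0, 1.5, 2.0, 5.5, 6.0, 6.5]
centers = kmeans_1d(scores, k=2)
```

On well-separated data like this, the centers converge to the two group means regardless of the random initialisation.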
Procedia PDF Downloads 168
390 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis
Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal
Abstract:
Background subtraction is a widely used technique for detecting moving objects in video surveillance, extracting foreground objects from a reference background image. A good background subtraction algorithm must cope with many challenges, such as changes in illumination, dynamic backgrounds (e.g., swinging leaves, rain, snow), and changes in the background itself, for example, the moving and stopping of vehicles. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast independent component analysis (fast-ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and a search for an optimum non-quadratic function to be faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose to convert all images to the YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm can detect moving objects competently and accurately in challenging conditions compared to other methods in the literature, in terms of both quantitative and qualitative evaluations, at a real-time frame rate.
Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix
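The colour-space step can be sketched as follows. The coefficients used here follow the common ITU-R BT.601 convention and may differ from the authors' exact conversion; the point is simply that the luma channel Y is a weighted sum of R, G, and B.

```python
# Sketch of the RGB -> YCrCb conversion described above (BT.601-style
# coefficients, an assumption; the paper's exact formula may differ).

def rgb_to_ycrcb(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return y, cr, cb

def luma_frame(frame):
    """Extract the Y channel from a frame given as rows of (R, G, B) pixels."""
    return [[rgb_to_ycrcb(*px)[0] for px in row] for row in frame]

frame = [[(255, 255, 255), (0, 0, 0)],
         [(255, 0, 0), (0, 255, 0)]]
ys = luma_frame(frame)
```

White maps to full luma, black to zero, and a pure red pixel to roughly 30% of full scale, which is why Y alone is a usable brightness signal for the de-mixing estimation.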
Procedia PDF Downloads 96
389 Euthanasia as a Case of Judicial Entrepreneurship in India: Analyzing the Role of the Supreme Court in the Policy Process of Euthanasia
Authors: Aishwarya Pothula
Abstract:
Euthanasia in India is a politically dormant policy issue in the sense that discussions around it are sporadic in nature (usually following developments in specific cases) and it stays a dominant issue in the public domain only for a fleeting period. In other words, it is a non-political issue that has been unable to get on the policy agenda. This paper studies how the Supreme Court of India (SC) plays a role in the policy making around euthanasia. In 2011, the SC independently put a law in place that legalized passive euthanasia through its judgement in the Aruna Shanbaug v. Union of India case. According to this, it is no longer illegal to withhold/withdraw a patient's medical treatment in certain cases. This judgement, therefore, is the empirical focus of this paper. The paper essentially employs two techniques of discourse analysis to study the SC's system of argumentation. The two methods, text analysis using Gasper's analysis table and frame analysis, are complemented by two discourse techniques called metaphor analysis and lexical analysis. The framework within which the analysis is conducted lies in 1) the judicial process of India, i.e. the SC procedures and the Constitutional rules and provisions, and 2) John W. Kingdon's theory of policy windows and policy entrepreneurs. The results of this paper are three-fold: first, the SC dismisses the petitioner's request for passive euthanasia on inadequate and weak grounds, thereby setting no precedent for the historic law they put in place. In other words, they leave the decision open for the Parliament to act upon. Hence the judgement, as opposed to arguments by many, is by no means an instance of judicial activism/overreach. Second, they define euthanasia in a way that resonates with existing broader societal themes.
They combine this with a remarkable use of authoritative and protective tones/stances to settle at an intermediate position that balances the possible opposition to their role in the process and what they (perhaps) perceive to be an optimal solution. Third, they soften up the policy community (including the public) to the idea of passive euthanasia, leading it towards Parliamentary legislation. They achieve this by shaping prevalent principles, provisions, and worldviews through an astute use of the legal instruments at their disposal. This paper refers to this unconventional role of the SC as 'judicial entrepreneurship', which is also the first scholarly contribution towards research on euthanasia as a policy issue in India.
Keywords: argumentation analysis, Aruna Ramachandra Shanbaug, discourse analysis, euthanasia, judicial entrepreneurship, policy-making process, supreme court of India
Procedia PDF Downloads 264
388 Safe Zone: A Framework for Detecting and Preventing Drones Misuse
Authors: AlHanoof A. Alharbi, Fatima M. Alamoudi, Razan A. Albrahim, Sarah F. Alharbi, Abdullah M Almuhaideb, Norah A. Almubairik, Abdulrahman Alharby, Naya M. Nagy
Abstract:
Recently, drones have received rapid interest in different industries worldwide due to their powerful impact. However, limitations still exist in this emerging technology, especially privacy violation. These aircraft consistently threaten the security of entities by entering restricted areas accidentally or deliberately. Therefore, this research project aims to develop a drone detection and prevention mechanism to protect restricted areas. Until now, none of the solutions has met the optimal requirements of detection, which are cost-effectiveness, high accuracy, long range, convenience, robustness to noise, and generalization. In terms of prevention, the existing methods focus on impractical solutions such as catching a drone with a larger drone, training an eagle, or using a gun. In addition, the practical solutions have limitations, such as the No-Fly Zone and PITBULL jammers. According to our study and analysis of previous related works, none of the solutions includes detection and prevention at the same time. The proposed solution is a combination of detection and prevention methods. To implement the detection system, a passive radar will be used to properly identify the drone against any possible flying objects. As for prevention, jamming signals and the forced safe landing of the drone are integrated together to stop the drone's operation. We believe that applying this mechanism will limit drone invasions of privacy against highly restricted properties and, consequently, effectively accelerate drone usage at personal and governmental levels.
Keywords: detection, drone, jamming, prevention, privacy, RF, radar, UAV
Procedia PDF Downloads 211
387 725 Arcadia Street in Pretoria: A Pretoria Case Study Focusing on Urban Acupuncture
Authors: Konrad Steyn, Jacques Laubscher
Abstract:
South African urban design solutions are mostly aligned with European and North American models that are often not appropriate for addressing some of this country's challenges, such as multiculturalism and decaying urban areas. Sustainable urban redevelopment in South Africa should be comprehensive in nature, sensitive in its manifestation, and robust and inclusive in order to achieve social relevance. This paper argues that the success of an urban design intervention is largely dependent on the public's perceptions and expectations, and the way people participate in shaping their environments. The concept of sustainable urbanism is thus more comprehensive than, yet should undoubtedly include, methods of construction, material usage, and climate control principles. The case study is a central element of this research paper. 725 Arcadia Street in Pretoria was originally commissioned as a food market structure. A starkly contrasting existing modernist adjacent building forms the morphological background. Built in 1969, it is a valuable part of Pretoria's modernist fabric. It was realised early on that the project should not be a mere localised architectural intervention, but rather an occasion to revitalise the neighbourhood through urban regeneration. Because of the complex and comprehensive nature of the site and the rich cultural diversity of the area, a multi-faceted approach seemed the most appropriate response. The methodology for collating data consisted of a combination of literature reviews (regarding the historical fauna and flora and current plants), observation (frequent site visits), and physical surveying at the neighbourhood level (physical location, connectivity to surrounding landmarks, as well as movement systems and pedestrian flows). This was followed by an exploratory design phase, culminating in the present redevelopment proposal.
Since built environment interventions are increasingly based on generalised normative guidelines, an approach focusing on urban acupuncture could serve as an alternative. Celebrating the specific urban condition, urban acupuncture offers an opportunity to influence the surrounding urban fabric and achieve urban renewal through physical, social and cultural mediation.
Keywords: neighbourhood, urban renewal, South African urban design solutions, sustainable urban redevelopment
Procedia PDF Downloads 496
386 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods are mainly divided into two major categories: the maximum likelihood hypothesis testing method based on decision theory, and the statistical pattern recognition method based on feature extraction. The most commonly used is the statistical pattern recognition method, which comprises feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. This addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature is difficult to use for recognition at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach in this paper still produces a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.
Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model
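The baseline Holder-coefficient feature that the cloud-model improvement builds on can be sketched as follows; this is a hedged reconstruction of the textbook form, not the paper's improved variant. By Holder's inequality, for non-negative sequences f, g and conjugate exponents with 1/p + 1/q = 1, sum(f*g) <= (sum f^p)^(1/p) * (sum g^q)^(1/q), so the ratio below always lies in [0, 1] and measures how closely a signal's spectral shape matches a reference sequence.

```python
# Classical Holder-coefficient feature (sketch; the reference sequences
# and p are illustrative assumptions, not taken from the paper).

def holder_coefficient(f, g, p=2.0):
    q = p / (p - 1.0)  # conjugate exponent, so 1/p + 1/q = 1
    num = sum(fi * gi for fi, gi in zip(f, g))
    den = (sum(fi ** p for fi in f) ** (1.0 / p)
           * sum(gi ** q for gi in g) ** (1.0 / q))
    return num / den

signal = [0.1, 0.9, 0.2, 0.8]  # toy spectral magnitudes
rect = [1.0, 1.0, 1.0, 1.0]    # rectangle reference sequence
h = holder_coefficient(signal, rect)
```

A perfect match against the reference gives exactly 1, and increasingly dissimilar shapes give smaller values; such scalars are the features a classifier like the ELM would consume.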
Procedia PDF Downloads 155
385 Technological Exploitation and User Experience in Product Innovation: The Case Study of the High-Tech Mask
Authors: Venere Ferraro, Silvia Ferraris
Abstract:
We live in a world pervaded by new advanced technologies that have been changing the way we live and experience our surroundings. New technologies enable product innovation at different levels. Nevertheless, innovation does not lie just in technological development and its hard aspects but also in the meaningful use of it for the final user. In order to generate innovative products, a new perspective is needed: the shift from an instrument-oriented view of technology towards a broader view that includes aspects like aesthetics, acceptance, comfort, and sociability. In many businesses, the user experience of the product is considered the key battlefield for achieving product innovation (Holland 2011). The use of new technologies is indeed useless without paying attention to the user experience. This paper presents a workshop activity conducted at the Design School of Politecnico di Milano in collaboration with Chiba University, aimed at generating innovative design concepts for a high-tech mask. The students were asked to design the user experience of a new mask by exploiting emerging technologies such as wearable sensors and information and communication technology (ICT) for a chosen field of application: safety or sport. When it comes to the user experience, the mask is a very challenging design product, because it covers aspects of product interaction and, most importantly, psychological and cultural aspects related to its impact on facial expression. Furthermore, since the mask affects the facial expression quite a lot, it could be a barrier to hide behind, or it could be a means to enhance the user's communication with others. The main request for the students was to take a user-centered approach: to go beyond the instrumental aspects of product use and usability and focus on the user experience by shaping the technology in a desirable and meaningful way for the user, reasoning on the metaphorical and cultural level of the product.
During the one-week workshop, students were asked to face the design process through (i) the research phase: an in-depth analysis of the user and field of application (safety or sport) to set the design spaces (brief) and user scenario; (ii) idea generation; (iii) idea development. This text will briefly go through the meaning of product innovation and the use and application of wearable technologies, and will then focus on user experience design in contrast with the technology-driven approach in the field of product innovation. Finally, the authors will describe the workshop activity and the concepts developed by the students, stressing the important role of user experience design in new product development.
Keywords: product innovation, user experience, technological exploitation, wearable technologies
Procedia PDF Downloads 345
384 Design of an Acoustic Imaging Sensor Array for Mobile Robots
Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta
Abstract:
Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electromagnetic waves in water. Acoustic imagery underwater has a varied range of significant applications, such as side-scan sonar and mine-hunting sonar. It also finds utility in other domains, such as imaging of body tissues via ultrasonography and non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased array beamforming techniques available in the literature for various array designs, to achieve a suitable acoustic sensor array design for a portable mobile robot that can be applied to detect the presence/absence of anomalous objects in a room. Multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed with the same feasibility criterion, that the developed system needs to be deployed on a portable mobile robot. There is a trade-off of image resolution and range against the array size, number of elements, and imaging frequency, which has to be iteratively simulated to achieve the desired acoustic sensor array design. The designed acoustic imaging array system is to be mounted on a portable mobile robot and targeted for use in surveillance missions, for intruder alerts and for imaging objects in dark and smoky scenarios where conventional optics-based systems do not function well.
Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming
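The kind of beamforming simulation the abstract describes can be sketched with the far-field array factor of a uniform linear array. The parameters below (8 elements, half-wavelength spacing, the steering angle) are invented for illustration, not the paper's design; iterating such a computation over candidate element counts, spacings, and frequencies is what drives the resolution/range trade-off.

```python
import cmath
import math

# Far-field array factor of a phase-steered uniform linear array
# (delay-and-sum beamforming in the frequency domain). All parameters
# here are illustrative assumptions.

def array_factor(n_elems, spacing_wl, steer_deg, look_deg):
    """Normalised |AF| for element spacing given in wavelengths."""
    steer = math.radians(steer_deg)
    look = math.radians(look_deg)
    # Residual inter-element phase after steering toward steer_deg
    phase = 2.0 * math.pi * spacing_wl * (math.sin(look) - math.sin(steer))
    af = sum(cmath.exp(1j * k * phase) for k in range(n_elems))
    return abs(af) / n_elems  # main-lobe peak normalised to 1.0

peak = array_factor(8, 0.5, steer_deg=20.0, look_deg=20.0)
off = array_factor(8, 0.5, steer_deg=20.0, look_deg=-40.0)
```

Looking in the steered direction gives the full (unit) response, while directions well off the main lobe are strongly attenuated; sweeping `look_deg` traces the beam pattern whose main-lobe width sets the angular resolution.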
Procedia PDF Downloads 409
383 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is among the most common cancers worldwide in men and women and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, so developing a classifier for them is a challenging task. We take a fully automatic machine learning approach to developing this classifier. First, we segment the liver image by calculating textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on train and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, its features are calculated and it is classified as a normal or diseased liver lesion. We hope the result will help the physician to identify liver cancer non-invasively.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
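The co-occurrence matrix the classifier draws its features from can be shown in miniature. This toy version, a sketch rather than the study's pipeline, counts horizontally adjacent grey-level pairs and derives one classic statistic; a real pipeline would vary direction and distance and compute several such features before feeding the SVM.

```python
# Minimal grey-level co-occurrence matrix (GLCM) sketch, horizontal
# neighbours only. The 3x3 toy image and single offset are assumptions
# for illustration.

def glcm_horizontal(image, levels):
    """Co-occurrence counts for pixel pairs (i, j) one step to the right."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Classic GLCM contrast feature: (i - j)^2 weighted by pair frequency."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v
               for i, row in enumerate(m)
               for j, v in enumerate(row)) / total

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
m = glcm_horizontal(img, levels=3)
c = contrast(m)
```

Smooth regions concentrate counts on the matrix diagonal (low contrast), while speckle scatters them off-diagonal, which is exactly why GLCM statistics discriminate lesion textures.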
Procedia PDF Downloads 411
382 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
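One standard way to quantify the resolution/cost trade-off above: an N-electrode ECT ring yields N*(N-1)/2 distinct inter-electrode capacitance measurements per frame, so the measurement count (and with it both reconstruction detail and acquisition/computation cost) grows quadratically with N. A quick sketch:

```python
# Distinct electrode-pair (capacitance) measurements per frame for an
# N-electrode ECT ring: each unordered pair of electrodes is measured once.

def independent_measurements(n_electrodes):
    return n_electrodes * (n_electrodes - 1) // 2

counts = {n: independent_measurements(n) for n in (8, 12, 16)}
# 8 -> 28, 12 -> 66, 16 -> 120 capacitance pairs
```

The jump from 28 to 120 measurements is consistent with the observed pattern: the 16-electrode design resolves finer detail, while the 8- and 12-electrode designs reconstruct faster.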
Procedia PDF Downloads 9
381 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health
Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon
Abstract:
Humans do not behave rationally. We are emotional and easily influenced by others, as well as by our context. The study of human behaviour has become a supreme endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches for diverse domains such as health, retail, and the environment have been developed, the methodological models guiding the evaluation of such research have long since reached their limits. Within this context, digitisation, information and communication technologies (ICT), wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on the digital developments in applied research projects and applicable to academia, enterprises, and policy makers. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health, or media. The Modular Behavioural Analysis Approach (MBAA) is here described and first validated through a concrete use case within the domain of health.
The results gathered have proven that disclosing information about health in connection with the use of digital health apps can be a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of different 'behavioural profiles', towards which several typologies of interventions can be addressed, is essential to effectively enable behavioural change. In the refined version of the MBAA, a strong focus will be placed on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.
Keywords: behavioural change, framework, health, nudging, sustainability
Procedia PDF Downloads 221
380 Agarose Amplification Based Sequencing (AG-seq) Characterization Cell-free RNA in Preimplantation Spent Embryo Medium
Authors: Huajuan Shi
Abstract:
Background: Biopsy of the preimplantation embryo may increase potential risks to, and concerns about, embryo viability. Clinically discarded spent embryo medium (SEM) has therefore entered the view of researchers, sparking interest in noninvasive embryo screening. However, one of the major restrictions is the extremely low quantity of cf-RNA, which is difficult to amplify efficiently and without bias using traditional methods. Hence, there is an urgent need for an efficient, low-bias amplification method that can comprehensively and accurately capture cf-RNA information to truly reveal the state of SEM cf-RNA. Result: In the present study, we established an agarose PCR amplification system, which significantly improved amplification sensitivity and efficiency by ~90-fold and 9.29%, respectively. We applied agarose to sequencing library preparation (named AG-seq) to quantify and characterize cf-RNA in SEM. The number of detected cf-RNAs (3533 vs 598) and the coverage of the 3' end were significantly increased, and the noise of low-abundance gene detection was reduced. An increased percentage of 5' end adenine and alternative splicing (AS) events in short fragments (< 400 bp) were discovered by AG-seq. Further, the profiles and characterizations of cf-RNA in spent cleavage medium (SCM) and spent blastocyst medium (SBM) indicated that 4-mer end motifs of cf-RNA fragments could remarkably differentiate different embryo development stages. Significance: This study established an efficient and low-cost SEM amplification and library preparation method. Moreover, we successfully described the characterizations of SEM cf-RNA of the preimplantation embryo using AG-seq, including abundance features and fragment lengths. AG-seq facilitates the study of cf-RNA as a noninvasive embryo screening biomarker and opens up potential clinical utilities for trace samples.
Keywords: cell-free RNA, agarose, spent embryo medium, RNA sequencing, non-invasive detection
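The 4-mer end-motif feature mentioned above amounts to counting the first k bases of each fragment and normalising to a frequency profile. The sketch below uses invented toy reads, not AG-seq data, purely to show the computation:

```python
from collections import Counter

# Sketch: 5' end 4-mer motif profile of cf-RNA fragment reads.
# The reads are toy examples; real profiles would be compared across
# developmental stages (e.g., SCM vs SBM).

def end_motif_profile(reads, k=4):
    """Frequency of each k-mer found at the 5' end of the reads."""
    counts = Counter(r[:k] for r in reads if len(r) >= k)
    total = sum(counts.values())
    return {motif: n / total for motif, n in counts.items()}

reads = ["ACGTGGTA", "ACGTAATC", "TTAGCCGA", "ACG"]  # last read is too short
profile = end_motif_profile(reads)
```

Profiles like this, one per sample, can then be compared between media types; a divergence between stages is what makes the motif a candidate staging biomarker.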
Procedia PDF Downloads 92379 Investigation of Preschool Children's Mathematics Concept Acquisition in Terms of Different Variables
Authors: Hilal Karakuş, Berrin Akman
Abstract:
Preschool years are considered critical because they shape individuals' future lives. All of the knowledge, skills, and concepts are acquired during this period, and the basis of academic skills is laid in it. As all developmental areas advance fastest in this period, the basis of mathematics education should also be given at this time. Mathematics is seen as a difficult and abstract subject by most people. Therefore, the enjoyable side of mathematics should be presented in a concrete way in this period to avoid any bias in children against mathematics. This study was conducted to examine children's mathematics concept acquisition in terms of different variables. It is a quantitative study using a screening (survey) model. The study group consists of 300 children in total, selected randomly in groups of five from each class of public and private preschools in Çankaya, a district of Ankara, in the 2014-2015 academic year; the children attend nursery classes and preschool institutions affiliated with the Ministry of National Education. The study group was determined by a multi-stage sampling method: the schools were chosen by convenience sampling and the children by simple random sampling. Research data were collected with the Bracken Basic Concept Scale-Revised Form and a Child's Personal Information Form generated by the researcher in order to obtain information about the children and their families. The Bracken Basic Concept Scale-Revised Form consists of 11 sub-dimensions (color, letter, number, size, shape, comparison, direction-location, quantity, individual and social awareness, building-material) and 307 items. The subtests related to mathematics were used in this research. 
In the “Child Individual Information Form” there are items containing the following demographic information: the children's age and gender, whether they attend a preschool educational institution, duration of school attendance, and the mothers' and fathers' education levels. As a result of the study, it was found that children's mathematics skills differ by age, by whether and for how long they have attended a preschool educational institution, and by their mothers' and fathers' education levels, but do not differ by gender or by the type of school attended. Keywords: preschool education, preschool period children, mathematics education, mathematics concept acquisitions
Procedia PDF Downloads 349378 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for this problem. The method is divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper applies an improved potential-function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in the traditional clustering algorithm but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. Simulation results show that the approach in this paper not only improves the accuracy of estimation but also applies to any mixing matrix. Keywords: DBSCAN, potential function, speech signal, the UBSS model
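The clustering step this abstract builds on can be sketched compactly: under ideal sparsity, each observation vector points along one column of the mixing matrix, so clustering the angles of the observed two-channel samples recovers the columns. The sketch below is a generic angle-clustering illustration in NumPy, not the authors' improved potential-function method; the source directions, sample counts, and gap threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed ground truth: 4 source directions mixed into 2 channels (UBSS: 2 x 4 mixing matrix)
true_angles = np.array([0.3, 0.9, 1.5, 2.1])               # column directions of A, radians
A = np.vstack([np.cos(true_angles), np.sin(true_angles)])  # 2 x 4 mixing matrix

# Sparse sources: at any time instant only one source is active (ideal sparsity)
n = 400
S = np.zeros((4, n))
for k in range(4):
    S[k, k * 100:(k + 1) * 100] = rng.uniform(0.5, 1.5, 100)  # positive activations
X = A @ S + rng.normal(0, 1e-3, (2, n))                       # observed 2-channel mixture

# Each observation vector points along one column of A; cluster their angles
angles = np.sort(np.arctan2(X[1], X[0]) % np.pi)
gaps = np.where(np.diff(angles) > 0.2)[0]                  # split clusters at large gaps
clusters = np.split(angles, gaps + 1)
est_angles = np.array([c.mean() for c in clusters])
A_est = np.vstack([np.cos(est_angles), np.sin(est_angles)])  # estimated mixing matrix
```

With noisy speech rather than artificially disjoint supports, the angle histogram smears and this simple gap rule fails, which is where potential-function weighting or DBSCAN-style density clustering earns its keep.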
Procedia PDF Downloads 135377 A Techno-Economic Simulation Model to Reveal the Relevance of Construction Process Impact Factors for External Thermal Insulation Composite System (ETICS)
Authors: Virgo Sulakatko
Abstract:
The reduction of energy consumption of the built environment has been one of the topics tackled by the European Commission during the last decade. Increased energy efficiency requirements have increased the renovation rate of apartment buildings covered with External Thermal Insulation Composite System (ETICS). Due to the fast and optimized application process, quality assurance depends to a large extent on the specific activities of artisans and is often not controlled. The on-site degradation factors (DF) technically affect the façade and cause future costs to the owner. Besides thermal conductivity, the building envelope needs to ensure mechanical resistance and stability, fire, noise, corrosion and weather protection, and long-term durability. As the shortcomings of the construction phase become problematic only after some years, the common value of the renovation is reduced. Previous work on the subject has identified and rated the relevance of DFs to the technical requirements and developed a method to reveal the economic value of repair works. The future costs can be traded off against increased quality assurance during the construction process. The proposed framework describes the joint simulation of the technical importance and economic value of the on-site DFs of ETICS. The model provides new knowledge to improve resource allocation during the construction process by making it possible to identify and diminish the most relevant degradation factors and to increase economic value to the owner. Keywords: ETICS, construction technology, construction management, life cycle costing
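The trade-off the framework describes, future repair costs versus up-front quality assurance, can be illustrated with a simple present-value calculation. The costs, discount rate, and horizon below are invented for illustration and are not taken from the study.

```python
def present_value(cost, rate, years):
    """Discount a future repair cost back to today's money."""
    return cost / (1 + rate) ** years

# Illustrative (invented) numbers: a degradation factor expected to require a
# 1000 EUR facade repair in 10 years, discounted at 5% per year.
repair_pv = present_value(1000.0, 0.05, 10)   # ~613.91 EUR in today's money

# If the extra on-site quality assurance needed to prevent the degradation factor
# costs less than the discounted repair, preventing it adds value for the owner.
qa_cost = 400.0
worth_preventing = qa_cost < repair_pv
```

The joint simulation in the abstract effectively runs this comparison per degradation factor, weighted by its rated technical relevance.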
Procedia PDF Downloads 419376 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) developed recently by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, compared to the time-consuming discrete FTIR stepping reference branch. Such a change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator, while the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency Banded (FB) signals: 1) DC Low-Frequency Banded (LFB) signal; 2) Fundamental Frequency Banded (FFB) signal; and 3) Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulation has also been performed in Matlab with simulated THz-MIDR interferometric signals of various signal-to-noise ratios (SNR) to verify the algorithms. Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar
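One way to see how phase can be recovered from the fundamental and harmonic bands of a single intensity signal is lock-in demodulation of a simulated sinusoidally modulated interferogram: the fundamental band carries sin(phi) and the second harmonic carries cos(phi), weighted by Bessel functions of the modulation depth. This is a simplified stand-in for the abstract's Levenberg-Marquardt fitting, not the authors' algorithm; the modulation depth, frequency, and phase below are assumed values.

```python
import numpy as np

# Simulated THz-MIDR intensity: I(t) = B + A*cos(phi + m*sin(w*t))
A, B, m, w, phi_true = 1.0, 2.0, 2.0, 2 * np.pi, 0.7
t = np.arange(0, 100, 1e-3)                 # exactly 100 modulation periods
I = B + A * np.cos(phi_true + m * np.sin(w * t))

# Bessel factors J1(m), J2(m) via the integral definition (keeps this NumPy-only):
# J_n(m) = (1/pi) * integral_0^pi cos(n*tau - m*sin(tau)) dtau
tau = (np.arange(2000) + 0.5) * np.pi / 2000        # midpoint rule
J = lambda n: np.mean(np.cos(n * tau - m * np.sin(tau)))

# Lock-in demodulation (Jacobi-Anger expansion):
S1 = 2 * np.mean(I * np.sin(w * t))         # = -2*A*J1(m)*sin(phi)  (FFB signal)
S2 = 2 * np.mean(I * np.cos(2 * w * t))     # =  2*A*J2(m)*cos(phi)  (HFB signal)
phi_est = np.arctan2(-S1 / J(1), S2 / J(2))
```

The amplitude A cancels in the arctangent, which is why the FFB/HFB combination alone suffices here; noise and detector drift are what make a full nonlinear fit over all three bands worthwhile in practice.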
Procedia PDF Downloads 344375 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those measured with CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with signal-to-noise ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that, with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. ED ranges between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose without TCM was higher, at 16.25 mGy, and was reduced by 48.8% with TCM. The SNR values calculated were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study proves the potential of the TCM technique for SSDE and ED reduction while conserving image quality at a high diagnostic reference level for thoracic CT examinations. Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose
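The dose quantities here are simple multiplicative conversions: ED = DLP x k (with k about 0.014 mSv per mGy*cm for chest examinations in common practice) and SSDE = CTDIvol x a size-dependent factor (tabulated in AAPM Report 204). A minimal sketch, with the k value and size factor treated as assumptions; it also reproduces the percentage reductions quoted in the abstract from its own reported ED values.

```python
def effective_dose(dlp_mgy_cm, k=0.014):
    """ED (mSv) = DLP (mGy*cm) x anatomical conversion factor k (chest ~0.014)."""
    return dlp_mgy_cm * k

def ssde(ctdivol_mgy, size_factor):
    """SSDE (mGy) = CTDIvol x size factor looked up from the effective diameter."""
    return ctdivol_mgy * size_factor

def percent_reduction(before, after):
    return (before - after) / before * 100.0

# The abstract's ED reductions follow directly from its reported values:
dlp_based = percent_reduction(7.01, 3.62)   # ~48.4% (reported: 48.35%)
ctexpo_based = percent_reduction(6.6, 3.2)  # ~51.5% (reported: 51.51%)
```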
Procedia PDF Downloads 220374 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Model updating methods have received increasing attention in damage detection of structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multiagent algorithm inspired by the movement of ideal gas molecules, which disperse rapidly in all directions and cover the entire available space; this behavior stems from the high speed of the molecules and the collisions between them and with the surrounding barriers. In the IGMM algorithm, to reach optimal solutions, an initial population of gas molecules is randomly generated, and the governing equations for the velocities of the gas molecules and the collisions between them are applied. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in PBDD of structures. Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification
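The IGMM idea, a population of "molecules" drifting with random collision-like kicks, plus the EIGMM enhancement of removing variables that stop changing, can be sketched as a toy optimizer. This is an illustrative reconstruction from the description above, not the authors' update equations; the sphere objective, population size, cooling schedule, and freezing rule are all assumptions.

```python
import random

def sphere(x):
    """Toy objective standing in for the model-updating misfit function."""
    return sum(v * v for v in x)

def eigmm_sketch(f, dim=5, n_mol=20, iters=400, freeze_after=100, tol=1e-6, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_mol)]
    best = min(pop, key=f)[:]
    frozen, stable, prev = [False] * dim, [0] * dim, best[:]
    for it in range(iters):
        sigma = 0.3 * 0.99 ** it                      # decaying "thermal" kick
        for mol in pop:
            for d in range(dim):
                if frozen[d]:
                    mol[d] = best[d]                  # frozen variable: hold best value
                    continue
                # drift toward the best molecule plus a random collision-like kick
                mol[d] += rng.random() * (best[d] - mol[d]) + rng.gauss(0, sigma)
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
        # EIGMM-style enhancement: remove variables unchanged for many iterations
        for d in range(dim):
            stable[d] = stable[d] + 1 if abs(best[d] - prev[d]) < tol else 0
            frozen[d] = frozen[d] or stable[d] >= freeze_after
        prev = best[:]
    return best, f(best)

best_x, best_f = eigmm_sketch(sphere)
```

In the PBDD setting, `f` would be the discrepancy between measured and model-predicted modal parameters, evaluated inside each Monte Carlo noise realization.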
Procedia PDF Downloads 277373 A Comprehensive Characterization of Cell-free RNA in Spent Blastocyst Medium and Quality Prediction for Blastocyst
Authors: Huajuan Shi
Abstract:
Background: The biopsy of a preimplantation embryo may increase the potential risk to, and concern about, embryo viability. Clinically discarded spent embryo medium (SEM) has entered the view of researchers, sparking interest in noninvasive embryo screening. However, one of the major restrictions is the extremely low quantity of cf-RNA, which is difficult to amplify efficiently and without bias using traditional methods. Hence, there is an urgent need for an efficient, low-bias amplification method that can comprehensively and accurately capture cf-RNA information to truly reveal the state of SEM cf-RNA. Result: In the present study, we established an agarose PCR amplification system that significantly improved amplification sensitivity and efficiency, by ~90-fold and 9.29%, respectively. We applied agarose to sequencing library preparation (named AG-seq) to quantify and characterize cf-RNA in SEM. The number of detected cf-RNAs (3533 vs 598) and the coverage of the 3' end were significantly increased, and the noise in low-abundance gene detection was reduced. An increasing percentage of 5' end adenine and of alternative splicing (AS) events in short fragments (< 400 bp) was discovered by AG-seq. Further, the profiles and characterizations of cf-RNA in spent cleavage medium (SCM) and spent blastocyst medium (SBM) indicated that 4-mer end motifs of cf-RNA fragments could remarkably differentiate embryo development stages. Significance: This study established an efficient and low-cost SEM amplification and library preparation method. Moreover, we successfully described the characterizations of SEM cf-RNA of preimplantation embryos using AG-seq, including abundance features and fragment lengths. AG-seq facilitates the study of cf-RNA as a noninvasive embryo screening biomarker and opens up potential clinical utilities for trace samples. Keywords: cell-free RNA, agarose, spent embryo medium, RNA sequencing, non-invasive detection
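The 4-mer end-motif feature that separates SCM from SBM is straightforward to compute: take the first four bases of each cf-RNA fragment and normalize the counts into a frequency profile. A minimal stdlib sketch; the fragment sequences are made up for illustration, not real reads.

```python
from collections import Counter

def end_motif_profile(fragments, k=4):
    """Normalized frequency of each k-mer taken from the 5' end of each fragment."""
    motifs = Counter(f[:k] for f in fragments if len(f) >= k)
    total = sum(motifs.values())
    return {m: c / total for m, c in motifs.items()}

# Toy fragments standing in for cf-RNA reads from one medium sample
scm_like = ["ACGTAAGGCT", "ACGTTTAGCA", "TTAGGCATCA", "ACGTGGCATT"]
profile = end_motif_profile(scm_like)
```

Profiles from different media samples can then be compared (for example, by distance between the frequency vectors over all 256 possible 4-mers) to separate development stages.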
Procedia PDF Downloads 64372 An Investigation into the Impact of Brexit on Consumer Perception of Trust in the Food Industry
Authors: Babatope David Omoniyi, Fiona Lalor, Sinead Furey
Abstract:
This ongoing project investigates the impact of Brexit on consumer perceptions of trust in the food industry. Brexit has significantly impacted the food industry, triggering a paradigm shift in the movement of food/agricultural produce, regulations, and cross-border collaborations between Great Britain, Northern Ireland, and the Republic of Ireland. In a world where the dynamics have shifted because of regulatory changes that impact trade and the free movement of foods and agricultural produce between these three countries, monitoring and controlling every stage of the food supply chain have become challenging, increasing the potential for food fraud and food safety incidents. As consumers play a pivotal role in shaping the market, understanding any shifts in trust post-Brexit enables them to navigate the market with confidence and awareness. This study aims to explore the complexities of consumer perceptions, focusing on trust as a cornerstone of consumer confidence in the post-Brexit food landscape. The objectives include comparing trust in official controls pre- and post-Brexit, determining consumer awareness of food fraud, and devising recommendations that reflect the evidence from this primary research regarding consumer trust in food authenticity post-Brexit. The research design follows an exploratory sequential mixed methods approach, incorporating qualitative methods such as focus groups and structured interviews, along with quantitative research through a large-scale survey. Participants from UCD and Ulster University campuses, comprising academic and non-academic staff, students, and researchers, will provide insights into the impact of Brexit on consumer trust. Preliminary findings from focus groups and interviews highlight changes in labelling, reduced quantity and quality of foods in both Northern Ireland and the Republic of Ireland, fewer food choices, and increased food prices since Brexit. 
The study aims to further investigate and quantify these impacts through a comprehensive large-scale survey involving participants from Northern Ireland and the Republic of Ireland. The results will inform official controls and consumer-facing messaging, contributing valuable insights for navigating the evolving post-Brexit food landscape. Keywords: Brexit, consumer trust, food fraud, food authenticity, food safety, food industry
Procedia PDF Downloads 47371 A Critical Discourse Analysis of ‘Youth Radicalisation’: A Case of the Daily Nation Kenya Online Newspaper
Authors: Miraji H. Mohamed
Abstract:
The purpose of this study is to critique ‘radicalisation’ and more particularly ‘youth radicalisation’ by exploring their usage in online newspapers. ‘Radicalisation’ and ‘extremism’ have become the most common terms in terrorism studies since the 9/11 attacks. Regardless of the geographic location, when the word terrorism is used, the terms ‘radicalisation’ and ‘extremism’ always follow, in an attempt to explore the journey of the perpetrators towards violence. These terms have come to represent a discourse of dominantly pejorative traits, often used to describe spaces, groups, and processes identified as problematic. Even though they are ambiguously defined, they feature widely in government documents, political statements, news articles, academic research, social media platforms, religious gatherings, and public discussions. Notably, ‘radicalisation’ and ‘extremism’ have been closely conflated with the term youth to form ‘youth radicalisation’, referring to a discourse of ‘youth at risk’. The three terms largely continue to be used unquestioningly and interchangeably, which is why they are placed in single quotation marks here, to deliberately question their conventional usage. This is timely in the Kenyan context, where there has been a proliferation of academic and expert research on ‘youth radicalisation’ (used as a neutral label) without considering the political, cultural, and socio-historical contexts that inform this label. This study seeks to draw out these nuances by employing a genealogical approach that historicises and deconstructs ‘youth radicalisation’, and by applying the Discourse-Historical Approach (DHA) of Critical Discourse Analysis to analyse a Kenyan online newspaper, The Daily Nation, between 2015 and 2018. By applying the concept of representation to analyse written texts, the study reveals that the use of ‘youth radicalisation’ as a discursive strategy disproportionately affects young people, especially those from cultural/ethnic/religious minority groups. 
Also, the ambiguous use of ‘radicalisation’ and ‘youth radicalisation’ by the media reinforces the discourse of ‘youth at risk’ which has become the major framework underpinning Countering Violent Extremism (CVE) interventions. Similarly, the findings indicate that the uncritical use of ‘youth radicalisation’ has been used to serve political interests; and has become an instrument of policing young people, thus contributing to their cultural shaping. From this, it is evident that the media could thwart rather than assist CVE efforts. By exposing the political nature of the three terms through evidence-based research, this study offers recommendations on how critical reflective reporting by the media could help to make CVE more nuanced.Keywords: discourse, extremism, radicalisation, terrorism, youth
Procedia PDF Downloads 129370 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source channel based on multi-level semantic information (Multi-level JSCC) is proposed. The transmitter of the system is composed of two networks: the feature extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization, while the feature retention network preserves low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused, in the same dimension, with the low-level semantic features after a feature enhancement network; the image dimension is then restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both the AWGN channel and the Rayleigh fading channel, with the peak signal-to-noise ratio (PSNR) improved by 1-2 dB compared with other algorithms under the same simulation conditions. Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
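PSNR, the figure of merit quoted above, is computed from the mean squared error between the original and reconstructed images; a 1 dB gain corresponds to roughly a 21% reduction in MSE (since 10^(-0.1) is about 0.794). A minimal stdlib sketch for 8-bit images.

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized 2-D images."""
    flat_ref = [p for row in reference for p in row]
    flat_rec = [p for row in reconstructed for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 16 grey levels on an 8-bit image gives MSE = 256
ref = [[0] * 8 for _ in range(8)]
rec = [[16] * 8 for _ in range(8)]
value = psnr(ref, rec)
```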
Procedia PDF Downloads 120369 The Automatic Transliteration Model of Images of the Book Hamong Tani Using Statistical Approach
Authors: Agustinus Rudatyo Himamunanto, Anastasia Rita Widiarti
Abstract:
Transliteration of Javanese manuscripts is one method of preserving and passing on the wealth of past literature to the present generation in Indonesia. The manual transliteration process commonly requires philologists and takes a relatively long time; automatic transliteration is expected to shorten this time and thereby help the work of philologists. Preprocessing and segmentation are first performed to manage the document images, obtaining script image units that are free from noise and similar in thickness, size, and slope. The next stage, feature extraction, finds unique characteristics that distinguish each Javanese script image. One of the characteristics used in this research is the number of black pixels in each image unit. Each Javanese script image in the training data undergoes the same process as the input characters. The system was tested with images from the book Hamong Tani, selected for its content, age, and number of pages, which were considered sufficient for a model experimental input. Based on the results of automatic transliteration tests on random pages, the maximum correctness obtained was 81.53%, achieved with a 32x32 pixel input image size and a 5x5 image window. With regard to these results, it can be concluded that the proposed automatic transliteration model is relatively good. Keywords: Javanese script, character recognition, statistical, automatic transliteration
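The black-pixel-count feature can be sketched as zoning: split a binarized glyph image into windows and count the black pixels in each, giving a fixed-length feature vector per script unit. A stdlib sketch; the 4x4 toy image and 2x2 window are stand-ins for the paper's 32x32 images and 5x5 windows, and the glyph pattern is invented.

```python
def black_pixel_features(image, win):
    """Count black pixels (value 1) in each win x win window of a binary image."""
    h, w = len(image), len(image[0])
    feats = []
    for r in range(0, h, win):
        for c in range(0, w, win):
            feats.append(sum(image[i][j]
                             for i in range(r, min(r + win, h))
                             for j in range(c, min(c + win, w))))
    return feats

# Toy 4x4 binary glyph (1 = black ink, 0 = background), scanned with 2x2 windows
glyph = [[1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
features = black_pixel_features(glyph, 2)   # one count per window, row-major order
```

Classification then reduces to comparing a test glyph's feature vector against those of the training glyphs, for example by nearest neighbour.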
Procedia PDF Downloads 339368 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act
Authors: Maria Jędrzejczak, Patryk Pieniążek
Abstract:
The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, which is driven by massive data resources linked to powerful algorithms and powerful computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which have prompted an analysis covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious currently being held, at both the European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of an unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine what the mutual relationship between these regulations is and which areas of the personal data protection regulation are particularly important for processing personal data in artificial intelligence systems. 
The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches should be regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how the European Union's regulation of data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples to illustrate the practical implications of these legal regulations. Keywords: data protection law, personal data, AI law, personal data breach
Procedia PDF Downloads 65367 Investigate the Competencies Required for Sustainable Entrepreneurship Development in Agricultural Higher Education
Authors: Ehsan Moradi, Parisa Paikhaste, Amir Alam Beigi, Seyedeh Somayeh Bathaei
Abstract:
The need for entrepreneurial sustainability is as important as the entrepreneurship category itself. By transferring competencies within a sustainable entrepreneurship framework, entrepreneurship education can make a significant contribution to the effectiveness of businesses, especially for start-up entrepreneurs. This study analyzes the competencies essential to students' development of sustainable entrepreneurship. It is an applied causal study by nature and a field study in terms of data collection. The main purpose of this research project is to study and explain the dimensions of sustainable entrepreneurship competencies among agricultural students. The statistical population consists of 730 junior and senior undergraduate students of the Campus of Agriculture and Natural Resources, University of Tehran. The sample size was determined to be 120 using Cochran's formula, and the convenience sampling method was used. Face validity, construct validity, and diagnostic methods were used to evaluate the validity of the research tool, and Cronbach's alpha and composite reliability to evaluate its reliability. Structural equation modeling (SEM) with the confirmatory factor analysis (CFA) method was used to prepare a measurement model for data processing. The results showed that seven key dimensions shape sustainable entrepreneurial development competencies: systems thinking competence (STC), embracing diversity and interdisciplinarity (EDI), foresighted thinking (FTC), normative competence (NC), action competence (AC), interpersonal competence (IC), and strategic management competence (SMC). It was found that acquiring SMC skills, by creating the ability to plan for sustainable entrepreneurship, can improve students' entrepreneurship through the relevant mechanisms when a sustainability attitude is adopted. 
While increasing students' analytical ability regarding social and environmental needs and challenges and emphasizing curriculum updates, more attention should be paid within AC to the relationship between the curriculum and its content in the form of entrepreneurship culture promotion programs. In the field of EDI, it was found that the success of start-up entrepreneurs in terms of business sustainability depends on their interdisciplinary thinking. It was also found that STC plays an important role in explaining the relationship between sustainability and entrepreneurship. Therefore, focusing on these competencies in agricultural education to train start-up entrepreneurs can lead to sustainable entrepreneurship in the agricultural higher education system. Keywords: sustainable entrepreneurship, entrepreneurship education, competency, agricultural higher education
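Cochran's sample-size formula mentioned above is short enough to show directly, together with the usual finite-population correction. The z, p, and margin-of-error values below are the textbook defaults, not those of the study (whose reported n = 120 from a population of 730 implies different precision assumptions).

```python
def cochran_n0(z=1.96, p=0.5, e=0.05):
    """Cochran's initial sample size for estimating a proportion: n0 = z^2 p(1-p)/e^2."""
    return z * z * p * (1 - p) / (e * e)

def finite_population_correction(n0, N):
    """Adjust n0 for a finite population of size N: n = n0 / (1 + (n0 - 1)/N)."""
    return n0 / (1 + (n0 - 1) / N)

n0 = cochran_n0()                          # ~384.16 for an effectively infinite population
n = finite_population_correction(n0, 730)  # correction for a 730-student population
```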
Procedia PDF Downloads 144366 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science
Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier
Abstract:
Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series have fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data-driven, enabling well-behaved signal components to be obtained without making any prior assumptions about the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared. Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis
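Of the three families compared, singular spectrum analysis is the most compact to sketch: embed the series in a Hankel trajectory matrix, truncate its SVD, and Hankelize (diagonal-average) back to a series. A NumPy illustration on a noisy sine; the window length, rank, and noise level are assumed values, not settings from the study.

```python
import numpy as np

def ssa_denoise(x, L=20, r=2):
    """Basic SSA: rank-r reconstruction of series x using window length L."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: lagged copies of the series, shape L x K
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r approximation
    # Diagonal averaging (Hankelisation) back to a 1-D series
    rec, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)                              # a sine has a rank-2 trajectory matrix
noisy = clean + rng.normal(0, 0.3, t.size)
den = ssa_denoise(noisy)
rmse_noisy = float(np.sqrt(np.mean((noisy - clean) ** 2)))
rmse_denoised = float(np.sqrt(np.mean((den - clean) ** 2)))
```

Keeping only the leading pair of singular components exploits the fact that a pure sinusoid spans exactly two dimensions of the trajectory space, which is why r=2 suffices here.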
Procedia PDF Downloads 116365 Performance of Autoclaved Aerated Concrete Containing Recycled Ceramic and Gypsum Waste as Partial Replacement for Sand
Authors: Efil Yusrianto, Noraini Marsi, Noraniah Kassim, Izzati Abdul Manaf, Hafizuddin Hakim Shariff
Abstract:
Today, municipal solid waste (MSW), noise pollution, and fire attack are three ongoing issues for urban inhabitants, including in Malaysia. To address these issues, an eco-friendly autoclaved aerated concrete (AAC) containing recycled ceramic and gypsum waste (CGW) as a partial replacement for sand, at different ratios (0%, 5%, 10%, 15%, 20%, and 25% wt), has been prepared. The performance of the samples, including physical and mechanical properties, sound absorption coefficient, and direct fire resistance, has been investigated. All samples showed normal color behavior, i.e., grey and crack-free surfaces. The compressive strength increased in the range of 6.10% to 29.88%, with a maximum value of 2.13 MPa at 15% wt of CGW. The positive effect of CGW on the compressive strength of AAC was also confirmed by crystalline phase and microstructure analysis. The acoustic performance, i.e., the sound absorption coefficients of the samples at low frequencies (500 Hz), is higher than that of the reference sample (RS); the AAC-CGW samples are categorized as AAC material classes B and C. The fire resistance results showed that the physical surfaces of the samples remained crack-free and unburned during direct fire exposure at 950ºC for 300 s. The results showed that CGW succeeded in enhancing the performance of fresh AAC, such as compressive strength, crystalline phase, sound absorption coefficient, and fire resistance. Keywords: physical, mechanical, acoustic, direct fire resistance performance, autoclaved aerated concrete, recycled ceramic-gypsum waste
Procedia PDF Downloads 138