Search results for: innovative method and tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23154

13464 Resolution Method for Unforeseen Ground Condition Problem Case in Coal Fired Steam Power Plant Project Location Adipala, Indonesia

Authors: Andi Fallahi, Bona Ryan Situmeang

Abstract:

The construction industry is notoriously risky. Much of the preparatory paperwork that precedes a construction project can be viewed as the formulation of risk allocation between the Owner and the Contractor. The Owner takes the risk that the project will not be built on schedule, within budget, or to the expected quality. The Contractor faces a multitude of risks, one of which is an unforeseen condition at the construction site; the Owner usually has the upper hand when such a condition occurs. Site data contained in the ground investigation report is often of significant contractual importance in disputes related to unforeseen ground conditions, yet a ground investigation can never fully disclose all details of the underground conditions (the risk of an unknown ground condition can never be 100% eliminated). The Adipala Coal Fired Steam Power Plant (CFSPP) 1 x 660 project is one of the largest CFSPP projects in Indonesia, executed under an Engineering, Procurement, and Construction (EPC) contract whose clauses stipulate that the Contractor is responsible for unforeseen ground conditions. During implementation, an unforeseen ground condition was identified at the Circulating Water Pump House (CWPH) area, forcing the Contractor to change its method of work, with a major impact on the time of completion and the project cost. This paper analyzes the best way to allocate this risk between the Owner and the Contractor. Parties that share the risk fairly can ultimately save time and money for all involved and get the job done on schedule for the least overall cost.

Keywords: unforeseen ground condition, coal fired steam power plant, circulating water pump house, Indonesia

Procedia PDF Downloads 318
13463 Pyrolysis of Dursunbey Lignite and Pyrolysis Kinetics

Authors: H. Sütçü, C. Efe

Abstract:

In this study, the pyrolysis characteristics of Dursunbey-Balıkesir lignite and its pyrolysis kinetics are examined. Pyrolysis experiments at three different heating rates are performed using the thermogravimetric method. Kinetic parameters are calculated with the Coats-Redfern kinetic model, and the degree of pyrolysis is determined for each heating rate.
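As a hedged illustration of the kinetic analysis mentioned above, the sketch below fits the Coats-Redfern linearization for a first-order model, ln[-ln(1-α)/T²] = ln(AR/βE) - E/(RT), to synthetic thermogravimetric data and recovers the assumed activation energy; the numbers are illustrative, not the lignite data from this study.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def coats_redfern_fit(T, alpha):
    """Fit the first-order Coats-Redfern line y = ln[-ln(1-a)/T^2] vs 1/T.

    The slope equals -E/R, so the activation energy is -slope*R.
    Returns (E in J/mol, intercept)."""
    x = 1.0 / T
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(x, y, 1)
    return -slope * R, intercept

# Synthetic demonstration: build conversion data from an assumed
# activation energy (illustrative value, not from the paper) and recover it.
E_true = 80e3  # J/mol
T = np.linspace(500.0, 800.0, 50)            # temperature, K
y = -E_true / (R * T) - 10.0                 # exact line in CR coordinates
alpha = 1.0 - np.exp(-np.exp(y) * T**2)      # invert y = ln[g(a)/T^2]
E_fit, _ = coats_redfern_fit(T, alpha)
print(round(E_fit / 1000, 1))  # ≈ 80.0 kJ/mol, the assumed value
```

Because the synthetic points lie exactly on the Coats-Redfern line, the least-squares fit returns the assumed energy to machine precision; real TGA data would scatter around it.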

Keywords: lignite, thermogravimetric analysis, pyrolysis, kinetics

Procedia PDF Downloads 347
13462 DIF-JACKET: a Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while, at the same time, complying with the international standard for firefighters' protective clothing, including the laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and will consequently increase the time that the firefighter can be exposed to it. According to the project's studies and developments, to favor a higher use of the PCM storage capacity and to exploit its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket was envisaged.

Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 147
13461 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique

Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak

Abstract:

The use of industrial robots for performing welding operations is one of the chief signs of contemporary welding. The modeling of weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As weld process parameters affect the weld joint parameters differently, a multi-objective optimization technique has to be utilized to obtain an optimal setting of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, is proposed to obtain optimal settings of weld process parameters such as wire feed rate, welding current, gas flow rate, welding speed and nozzle tip-to-plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is a very efficient multi-objective technique for converting correlated and dependent parameters, such as the weld joint parameters, into uncorrelated and independent variables. In this approach, there is also no need to check the correlation among responses, as no individual weights are assigned to the responses. A fuzzy inference engine can efficiently incorporate these aspects into its internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of the weld process parameters. It is therefore concluded that the hybrid technique has advantages of its own and can be used for quality improvement in industrial applications.
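As a minimal sketch of the PCA step the hybrid approach relies on, the snippet below standardizes three correlated weld-joint responses (depth of penetration, yield strength, ultimate strength) and projects them onto the eigenvectors of their correlation matrix, producing uncorrelated component scores; the response values are invented for illustration and are not taken from the welding experiments.

```python
import numpy as np

# Illustrative correlated responses per trial:
# [depth of penetration (mm), yield strength (MPa), ultimate strength (MPa)]
responses = np.array([
    [4.2, 310.0, 420.0],
    [4.8, 325.0, 441.0],
    [3.9, 301.0, 408.0],
    [5.1, 333.0, 455.0],
    [4.5, 318.0, 430.0],
])

# Standardize each response, then diagonalize the correlation matrix.
Z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
corr = Z.T @ Z / len(Z)
eigvals, eigvecs = np.linalg.eigh(corr)

# Projecting onto the eigenvectors yields uncorrelated component scores.
scores = Z @ eigvecs
cov_scores = scores.T @ scores / len(scores)
off_diag = cov_scores - np.diag(np.diag(cov_scores))
print(np.allclose(off_diag, 0))  # True: the scores are uncorrelated
```

The diagonal covariance of the scores is exactly what lets the subsequent fuzzy/Taguchi stages treat the transformed responses as independent quality indices.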

Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method

Procedia PDF Downloads 170
13460 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment with maximum effectiveness at every step and, secondly, a way to use quantitative methods for evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: experts' selection, management of the assessment procedure, processing of the results and remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - To realize within one platform the independent activities of different workgroups (e.g., expert officers, managers). - To establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs. - To form the required output documents for each workgroup. - To configure information gathering for each workgroup (forms of assessment, tests, inventories). - To create and operate personal databases of remote users. - To set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts.
The inventory was designed so that the experts may submit not only their personal data, place of work and scientific degree but also keywords describing their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor oversees the experts writing their reports so that all formalities are observed (time frame, propriety, correspondence). If the difference in assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on his expert opinion, the system shows a contract marked 'new', the managers process the contract, and the expert receives an e-mail saying that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for his work. The specifics of the interaction of the expert officer with the experts will be presented in the report.
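A minimal sketch of the reconciliation rule described above: a third evaluation is appointed when the two expert assessments differ by more than four points. The function name and score scale are illustrative assumptions, not part of the actual system.

```python
def needs_third_evaluation(score_a: int, score_b: int, threshold: int = 4) -> bool:
    """Return True when the gap between two expert scores exceeds the threshold.

    The four-point threshold follows the rule described in the abstract;
    the scores themselves are on a hypothetical scale.
    """
    return abs(score_a - score_b) > threshold

print(needs_third_evaluation(7, 12))  # True: a gap of 5 exceeds 4
print(needs_third_evaluation(7, 11))  # False: a gap of exactly 4 does not
```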

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 194
13459 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
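As a hedged sketch of the knowledge-distillation objective that FKD builds on (the specific FKD formulation is not reproduced here), the snippet below computes the standard temperature-softened KL divergence between teacher and student predictions; the logits and temperature are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax over the last axis, with temperature T."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional in distillation."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

# Illustrative logits: a student whose outputs are closer to the
# teacher's incurs a smaller distillation loss.
teacher = np.array([[5.0, 1.0, -2.0]])
student_far = np.array([[0.5, 2.0, 1.0]])
student_close = np.array([[4.5, 1.2, -1.5]])
print(distillation_loss(student_close, teacher) <
      distillation_loss(student_far, teacher))  # True
```

In a federated setting, only such distilled targets (not raw client data) would cross the network, which is the privacy and bandwidth argument the abstract makes.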

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 57
13458 Action Research for School Development

Authors: Beate Weyland

Abstract:

The interdisciplinary laboratory EDEN (Educational Environments with Nature), founded in 2020 at the Faculty of Education of the Free University of Bolzano, is working on a research path, initiated in 2012, on the relationship between pedagogy and architecture in the design process of school buildings. Between 2016 and 2018, an advisory support activity for schools was born, which combined the need to qualify the physical spaces of the school with the need to update teaching practices and develop school organization, with the aim of improving pupils' and teachers' sense of well-being. The goal of accompanying the development of school communities through research-training paths concerns the process of designing together pedagogical-didactic and architectural environments in which to stage the educational relationship, involving professionals from education, educational research, architecture and design, and local administration. Between 2019 and 2024, more than 30 schools and educational communities throughout Italy entered into research-training agreements with the university, focusing increasingly on the need to create new spaces and teaching methods capable of turning educational spaces into places of well-being where cultural development can be fostered. The paper will focus on the presentation of the research path and on the mixed methods used to support schools and educational communities: identification of the research question, development of the research objective, experimentation, and data collection for analysis and reflection. Schools and educational communities are involved in a participative and active manner. The quality of the action-research work is enriched by a special focus on the relationship with plants and nature in general.
Plants are seen as mediators of processes that unhinge traditional didactics and invite teachers, students, parents, and administrators to think about the quality of learning spaces and relationships based on well-being. The contribution is characterized by a particular focus on research methodologies and tools developed together with teachers to answer the issues raised and to measure the impact of the actions undertaken.

Keywords: school development, learning space, wellbeing, plants and nature

Procedia PDF Downloads 25
13457 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads

Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani

Abstract:

The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. 
The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
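As an illustration of the pseudo-second-order kinetic model reported to best describe the adsorption, the sketch below uses the linearized form t/q_t = 1/(k2·qe²) + t/qe, so that plotting t/q_t against t yields qe from the slope and k2 from the intercept; the qe and k2 values are invented for demonstration, not the study's measured ones.

```python
import numpy as np

# Assumed (illustrative) pseudo-second-order parameters.
qe_true = 12.0    # equilibrium adsorption capacity, mg/g
k2_true = 0.015   # rate constant, g/(mg*min)

# Generate uptake q_t over contact time from the integrated model:
#   q_t = k2*qe^2*t / (1 + k2*qe*t)
t = np.linspace(5.0, 150.0, 30)   # contact time, minutes
q_t = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)

# Linearized fit: t/q_t = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q_t, 1)
qe_fit = 1.0 / slope
k2_fit = slope**2 / intercept     # since intercept = 1/(k2*qe^2)
print(round(qe_fit, 2), round(k2_fit, 4))  # recovers ~12.0 and ~0.015
```

With laboratory data, the quality of this linear fit (R²) is what justifies preferring the pseudo-second-order model over alternatives such as pseudo-first-order kinetics.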

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 11
13456 Innovation Culture TV “Stars of Science”: 15 Seasons Case Study

Authors: Fouad Mrad, Viviane Zaccour

Abstract:

The accelerated developments in the political, economic, environmental, security, health, and social spheres are exhausting planners across the world, especially in Arab countries. The impact of this tension is multifaceted and has resulted in conflicts, wars, migration, and human insecurity. The potential cross-cutting role that science, innovation and technology can play in supporting Arab societies to address these pressing challenges is a serious, unique chance for the people of the region. This opportunity is based on the existing capacity of educated youth and untapped talent in local universities and research centers. It is accepted that Arab countries have achieved major advancements in the economy, education and social wellbeing since the 1970s, mainly as a direct outcome of oil and other natural resources. The UN Secretary-General, during the Education Summit in September 2022, stressed that “Learning continues to underplay skills, including problem-solving, critical thinking and empathy.” Stars of Science, by Qatar Foundation, was launched in 2009 and has been sustained through 2023, with a consistent mission from the start: to mobilize a new generation of pan-Arab innovators and problem solvers by encouraging youth participation and interest in science, technology and entrepreneurship throughout the Arab world via the program and its social media activities, and to make science accessible and attractive to mass audiences by de-mystifying the process of innovation.

The program harnesses best practices within reality TV to show that science, engineering, and innovation are important in everyday life and can be fun. Thousands of participants learned unforgettable lessons; winners changed their lives forever as they learned and earned seed capital and became drivers of change in their countries and families; millions of viewers were exposed to an innovative experimental process; and, culturally, several relevant national institutions adopted the Stars of Science track in their national initiatives. The program exhibited experientially youth self-efficacy, the most distinct core property of human agency, which is an individual's belief in his or her capacity to execute the behaviors necessary to produce specific performance attainments. In addition, the program showed that innovations are performed by networks of people with different sets of technological knowledge, skills and competencies, with socially shared technological knowledge as a main determinant of economic activities in any economy.

Keywords: science, invention, innovation, Qatar foundation, QSTP, prototyping

Procedia PDF Downloads 67
13455 Image Based Landing Solutions for Large Passenger Aircraft

Authors: Thierry Sammour Sawaya, Heikki Deschacht

Abstract:

In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to bring significant added safety value to this challenging phase of flight. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approaches and landings on runways where the current guidance systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they are leading the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system that guides an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent of radio signals and without a precision landing instrument. In the frame of this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for the testing and integration of the developed equipment in a large passenger aircraft representative environment. The aim of this paper is to describe the system as well as the associated methods and tools developed for validation and verification.

Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing

Procedia PDF Downloads 88
13454 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
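As a hedged sketch of the data reduction behind the double cantilever beam results, the snippet below evaluates the modified beam theory expression from ASTM D5528, G_I = 3Pδ / (2b(a + |Δ|)); the load, displacement and geometry values are illustrative, not the paper's measurements.

```python
def mode_i_toughness(P, delta, b, a, delta_corr=0.0):
    """Mode I strain energy release rate G_I (J/m^2) by modified beam theory.

    P: load (N), delta: opening displacement (m), b: specimen width (m),
    a: delamination length (m), delta_corr: crack-length correction |Δ| (m).
    All inputs here are illustrative example values.
    """
    return 3.0 * P * delta / (2.0 * b * (a + abs(delta_corr)))

# Example: 60 N load, 4 mm opening, 25 mm wide specimen, 50 mm crack,
# 2 mm crack-length correction.
G_I = mode_i_toughness(P=60.0, delta=4e-3, b=25e-3, a=50e-3, delta_corr=2e-3)
print(round(G_I, 1))  # ≈ 276.9 J/m^2 for these example inputs
```

In a real DCB test, G_I is evaluated repeatedly as the crack grows, producing the resistance curve from which local toughness changes (such as the 50% increases at inter-weave sites) are read.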

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving

Procedia PDF Downloads 174
13453 The Digital Living Archive and the Construction of a Participatory Cultural Memory in the DARE-UIA Project: Digital Environment for Collaborative Alliances to Regenerate Urban Ecosystems in Middle-Sized Cities

Authors: Giulia Cardoni, Francesca Fabbrii

Abstract:

Living archives perform a function of social memory sharing, which contributes to building social bonds, communities, and identities. This potential lies in the ability of living archives to combine an archival function, which allows the conservation and transmission of memory, with an artistic, performative and creative function linked to the present. As part of the DARE-UIA (Digital environment for collaborative alliances to regenerate urban ecosystems in middle-sized cities) project, the creation of a living digital archive made it possible to create a narrative that consolidates the cultural memory of the Darsena district of the city of Ravenna. The aim of the project is to stimulate the urban regeneration of a suburban area of a city, enhancing its cultural memory and identity heritage through digital heritage tools. The methodology involves various digital storytelling actions necessary for the overall narrative, using georeferencing systems (GIS), storymaps and 3D reconstructions for a transversal narration of historical content, such as personal and institutional historical photos, and to enhance the industrial archeology heritage of the neighborhood. The aim is the creation of an interactive narrative replicable in contexts similar to the Darsena district in Ravenna. The living archive, in which all the digital contents are inserted, manifests itself outwardly in the form of a museum spread throughout the neighborhood, making the contents usable on smartphones via QR codes and totems installed on-site and creating thematic itineraries throughout the neighborhood. The construction of an interactive and engaging digital narrative has made it possible to enhance the material and immaterial heritage of the neighborhood by recreating the community that has historically always distinguished it.

Keywords: digital living archive, digital storytelling, GIS, 3D, open-air museum, urban regeneration, cultural memory

Procedia PDF Downloads 92
13452 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. There are several different algorithmic models for this problem and various applications of them. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
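To make the regret notion concrete, the sketch below simulates an epsilon-greedy agent on a Bernoulli multi-armed bandit and measures cumulative regret against always playing the best arm; this is a generic illustration, not the regret-regression method evaluated in the paper, and the arm probabilities and epsilon are assumed values.

```python
import random

def run_bandit(probs, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy play on Bernoulli arms; returns cumulative regret,
    i.e. the gap between always pulling the best arm and realized reward."""
    rng = random.Random(seed)
    counts = [0] * len(probs)
    values = [0.0] * len(probs)   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(probs))                       # explore
        else:
            arm = max(range(len(probs)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]       # update mean
        total_reward += reward
    return max(probs) * steps - total_reward

# The learned policy's regret stays far below what pure random play
# would lose on average against the best arm (here 0.8 vs a 0.5 mean).
print(run_bandit([0.2, 0.5, 0.8]) < 0.25 * 5000)  # True
```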

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 442
13451 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper discusses how an innovative cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing today at a rapid pace. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting January 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, we are seeing pressures building up in Europe, China and many other countries that will require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but also to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The problem of the counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, concern over the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains significant, as per the UAE pharma and healthcare report, Q1 2015. The distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from capable of tracing a lot and serial number beyond the enterprise and making this information easily available in real time.

Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists given the huge investment made in Dubai Healthcare City, along with the use of technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become a global hub of traceability.
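A minimal sketch of the end-to-end custody trace such a platform would capture: each serialized unit accumulates handoff events from manufacturer to dispenser, and verification checks that the chain is unbroken. The entity names and fields are illustrative assumptions, not the actual platform's data model.

```python
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    """One serialized unit's chain of custody (hypothetical schema)."""
    serial: str
    lot: str
    events: list = field(default_factory=list)  # (from_entity, to_entity) pairs

    def ship(self, sender: str, receiver: str) -> None:
        self.events.append((sender, receiver))

    def chain_is_unbroken(self) -> bool:
        """Each handoff must start where the previous one ended."""
        return all(self.events[i][1] == self.events[i + 1][0]
                   for i in range(len(self.events) - 1))

unit = TraceRecord(serial="SN-0001", lot="LOT-42")
unit.ship("Manufacturer", "Wholesaler")
unit.ship("Wholesaler", "Pharmacy")
print(unit.chain_is_unbroken())   # True: custody passes hand to hand

unit.ship("Repackager", "Clinic")  # gap: the Pharmacy never shipped it
print(unit.chain_is_unbroken())   # False: the chain is broken
```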

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 515
13450 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ since it is responsible for the majority of actions such as vision and memory. However, diseases such as Alzheimer's and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect possible trouble early and take the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors; the most powerful and most used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze tumors. In fact, a large number of Computer Aided Diagnostic (CAD) tools built on such image processing algorithms are exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA) is applied. The DWT is characterized by its multiresolution analysis property, which is why it is applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback: it requires huge storage and is computationally expensive. To decrease the dimension of the feature vector and the computing time, the PCA technique is applied. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
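The first stage of the feature pipeline above, wavelet decomposition, can be illustrated with a minimal one-level 2-D Haar transform. This is only a stdlib sketch on a hypothetical 4x4 "image", not the authors' MATLAB tool, which uses multi-level decompositions followed by PCA and an SVM classifier:

```python
# One-level 2-D Haar DWT sketch: rows then columns, keeping the LL
# (approximation) sub-band that is typically used as a reduced feature map.
# The input "image" is a made-up toy example.

def haar_1d(row):
    """One level of the 1-D Haar transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx, detail

def haar_2d(image):
    """Apply the 1-D transform along rows, then along columns of the
    row approximations, yielding the LL sub-band."""
    row_approx = [haar_1d(r)[0] for r in image]
    cols = list(zip(*row_approx))                 # transpose
    ll_cols = [haar_1d(list(c))[0] for c in cols]
    return [list(r) for r in zip(*ll_cols)]       # transpose back

image = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [30, 30, 40, 40],
         [30, 30, 40, 40]]
ll = haar_2d(image)
print(ll)  # 2x2 LL sub-band
```

Each level halves each spatial dimension, which is why deeper decompositions shrink the feature vector before PCA is even applied.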

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 239
13449 Optimum Dewatering Network Design Using Firefly Optimization Algorithm

Authors: S. M. Javad Davoodi, Mojtaba Shourian

Abstract:

A groundwater table close to the ground surface causes major problems in construction and mining operations. One of the methods to control groundwater in such cases is pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply, which means that even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the Firefly algorithm as the optimizer. MODFLOW computes the drawdown due to pumping in an aquifer, and the Firefly algorithm finds the optimum values of the design parameters: the number, pumping rates and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repetitive runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day solves the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the target groundwater drawdown has an enormous impact on the total cost of the project. Besides, in a hypothetical aquifer, decreasing the hydraulic conductivity decreases the total water extraction required for dewatering.
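The firefly step of the coupled model can be sketched in a few lines: each "firefly" is a candidate design, dimmer fireflies are attracted to brighter (cheaper) ones, and a decaying random walk keeps the search exploring. A minimal 1-D version is shown below; the cost function is a hypothetical convex stand-in, whereas the paper evaluates cost through MODFLOW drawdown simulations of multi-well designs:

```python
import math
import random

def cost(q):
    # Hypothetical stand-in cost: cheapest at q = 5503 m3/day
    return (q - 5503.0) ** 2

def firefly_minimize(n=10, iters=100, beta0=1.0, gamma=1e-8, alpha=100.0,
                     lo=0.0, hi=10000.0, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]      # candidate pumping rates
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost(pos[j]) < cost(pos[i]):        # j is "brighter"
                    beta = beta0 * math.exp(-gamma * (pos[i] - pos[j]) ** 2)
                    pos[i] += beta * (pos[j] - pos[i])        # attraction move
                    pos[i] += alpha * (rng.random() - 0.5)    # random walk
        alpha *= 0.97                                  # cool the walk down
    return min(pos, key=cost)

best = firefly_minimize()
print(round(best))
```

In the real model, `cost(q)` would launch a MODFLOW run and return drilling plus pumping expenses subject to the drawdown constraint.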

Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm

Procedia PDF Downloads 284
13448 Some Extreme Halophilic Microorganisms Produce Extracellular Proteases with Long Lasting Tolerance to Ethanol Exposition

Authors: Cynthia G. Esquerre, Amparo Iris Zavaleta

Abstract:

Extremophiles constitute a potentially valuable source of proteases for the development of biotechnological processes; however, the number of available studies is limited compared to their mesophilic counterparts. Therefore, in this study, Peruvian halophilic microorganisms were characterized in order to select proteolytic strains that produce active proteases under exigent conditions. Proteolysis was screened using the streak plate method with gelatin or skim milk as substrates. Proteolytic microorganisms were then selected for phenotypic characterization and screened by a semi-quantitative proteolytic test using a modified agar diffusion method. Finally, proteolysis was evaluated using extracts partially purified by ice-cold ethanol precipitation and dialysis. All analyses were carried out over a wide range of NaCl concentrations, pH, temperatures and substrates. Of a total of 60 strains, 21 proteolytic strains were selected; of these, 19 were extreme halophiles and 2 were moderate. Most proteolytic strains showed differences in their biochemical patterns, particularly in sugar fermentation. A total of 14 microorganisms produced extracellular proteases; 13 were neutral, and one was alkaline, showing activity up to pH 9.0. The proteases hydrolyzed gelatin as the most specific substrate. In general, catalytic activity was efficient over a wide range of NaCl (1 to 4 M), temperatures (37 to 55 °C) and after ethanol exposition at -20 °C for 24 hours. In conclusion, this study reports 14 extremely halophilic candidates producing extracellular proteases that remain stable and active over a wide range of NaCl and temperature conditions and even after long-lasting ethanol exposition.

Keywords: biotechnological processes, ethanol exposition, extracellular proteases, extremophiles

Procedia PDF Downloads 275
13447 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently become of great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can easily be measured on-line with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter, and therefore also open up new possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios, while σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light absorbing fraction of PM is related exclusively to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results obtained by the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
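The arithmetic behind a two-component AAE split can be sketched as follows: each source is assumed to follow a power law b_abs(λ) ∝ λ^(−AAE), and the measured absorption at two wavelengths gives a 2×2 linear system for the source shares. The AAE values, wavelengths and absorption coefficients below are hypothetical placeholders, not the Szeged campaign data:

```python
# Two-component split sketch based on the Aerosol Angstrom Exponent (AAE):
# "ff" stands for fossil fuel (traffic), "wb" for wood burning.

def split_absorption(b_short, b_long, lam_short, lam_long, aae_ff, aae_wb):
    """Return (b_ff, b_wb) at the long wavelength such that the two
    power-law sources together reproduce both measurements."""
    r_ff = (lam_short / lam_long) ** (-aae_ff)   # short/long ratio, traffic
    r_wb = (lam_short / lam_long) ** (-aae_wb)   # short/long ratio, wood
    # Solve: b_long = b_ff + b_wb ; b_short = r_ff*b_ff + r_wb*b_wb
    b_wb = (b_short - r_ff * b_long) / (r_wb - r_ff)
    return b_long - b_wb, b_wb

b_ff, b_wb = split_absorption(b_short=30.0, b_long=10.0,
                              lam_short=470.0, lam_long=950.0,
                              aae_ff=1.0, aae_wb=2.0)
print(round(b_ff, 2), round(b_wb, 2))
```

The split is well-conditioned only when the two AAE values differ clearly, which is exactly why distinct traffic and wood burning exponents matter in this method.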

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 216
13446 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant sits on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides clean, renewable electrical energy for 17% of Brazil and 76% of Paraguay. The plant started generating in 1984; it counts on 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant had generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from the stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield, using data from 2001 to 2016. These data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the following 44 years, aiming to add precision to the estimates based on more up-to-date data sets. The analysis of each monitoring station clearly showed strong increasing tendencies in the sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others are entirely in Brazilian territory.
Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These packages closely follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
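The mass-balance side of such a lifespan estimate reduces to simple bookkeeping: deposited volume per year equals incoming sediment load times trap efficiency. The sketch below uses entirely hypothetical numbers and deliberately omits the Borland and Miller area-reduction step, which additionally redistributes the deposit along the depth-capacity curve:

```python
# Mass-balance silting sketch (NOT the full Borland & Miller computation).
# All figures are hypothetical placeholders, not Itaipu data.

def years_to_silt(capacity_m3, annual_sediment_m3, trap_efficiency=0.9,
                  fraction=1.0):
    """Years until `fraction` of the reservoir capacity is filled,
    assuming a constant sediment load and trap efficiency."""
    deposited_per_year = annual_sediment_m3 * trap_efficiency
    return fraction * capacity_m3 / deposited_per_year

# Toy reservoir: 29e9 m3 capacity, 170e6 m3/yr incoming sediment
print(round(years_to_silt(29e9, 170e6)))                   # full-silting horizon
print(round(years_to_silt(29e9, 170e6, fraction=0.018)))   # years to 1.8% silted
```

In practice both the sediment yield (shown above to be increasing) and the trap efficiency change over time, which is why the paper simulates several yield scenarios rather than a single constant-load projection.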

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 262
13445 Research of Seepage Field and Slope Stability Considering Heterogeneous Characteristics of Waste Piles: A Less Costly Way to Reduce High Leachate Levels and Avoid Accidents

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Due to their high piles, large volumes, complex layers of waste and high leachate water levels, landfills easily produce environmental pollution and slope instability. It is therefore of great significance to study the heterogeneous seepage field and stability of landfills. This paper focuses on the heterogeneous characteristics of landfill piles and analyzes the seepage field and slope stability of the landfill using statistical and numerical analysis methods. The calculated results are compared with field measurements and data from the literature to verify the reliability of the model, which may provide a basis for the design and the safe, eco-friendly operation of landfills. The main innovations are as follows: (1) The saturated-unsaturated seepage equation of heterogeneous soil is derived theoretically. The heterogeneous landfill is regarded as composed of infinitely many layers of homogeneous waste, and a method for establishing the heterogeneous seepage model is proposed. The formation law of the stagnant water level of heterogeneous landfills is then studied. It is found that the maximum stagnant water level is higher when the heterogeneous seepage characteristics are considered, which harms the stability of the landfill. (2) Considering the heterogeneity of the weight and strength characteristics of waste, a method for establishing a heterogeneous stability model is proposed and extended to a three-dimensional stability study. It is found that the distribution of heterogeneous characteristics has a great influence on the stability of the landfill slope. During the operation and management of the landfill, the reservoir bank should also be considered alongside the capacity of the landfill.
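The idea of treating a heterogeneous pile as a stack of homogeneous layers has a classical counterpart for vertical flow: the equivalent conductivity is the thickness-weighted harmonic mean, because flow resistances in series add. This is a generic textbook sketch with made-up layer values, not the paper's derived saturated-unsaturated seepage equation:

```python
# Equivalent vertical hydraulic conductivity of a layered stack
# (harmonic, thickness-weighted mean). Layer values are hypothetical.

def equivalent_vertical_k(layers):
    """layers: list of (thickness_m, conductivity_m_per_s) tuples."""
    total_thickness = sum(d for d, _ in layers)
    resistance = sum(d / k for d, k in layers)  # series resistances add
    return total_thickness / resistance

layers = [(2.0, 1e-5), (3.0, 1e-6), (1.0, 1e-7)]
print(equivalent_vertical_k(layers))
```

Note how the least permeable layer dominates the result, which is one reason heterogeneity raises the predicted stagnant leachate level compared with a homogeneous assumption.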

Keywords: heterogeneous characteristics, leachate levels, saturated-unsaturated seepage, seepage field, slope stability

Procedia PDF Downloads 232
13444 A Real-World Evidence Analysis of Associations between Costs, Quality of Life and Disease-Severity Indicators of Alzheimer’s Disease in Thailand

Authors: Khachen Kongpakwattana, Charungthai Dejthevaporn, Orapitchaya Krairit, Piyameth Dilokthornsakul, Devi Mohan, Nathorn Chaiyakunapruk

Abstract:

Background: Although an increase in the burden of Alzheimer's disease (AD) is evident worldwide, knowledge of the costs and health-related quality of life (HR-QoL) associated with AD in low- and middle-income countries (LMICs) is still lacking. We therefore aimed to collect real-world cost and HR-QoL data and investigate their associations with multiple disease-severity indicators among AD patients in Thailand. Methods: We recruited AD patients aged ≥ 60 years, accompanied by their caregivers, at a university-affiliated tertiary hospital. A one-time structured interview was conducted to collect disease-severity indicators, HR-QoL and caregiving information using standardized tools. The hospital's database was used to retrieve healthcare resource utilization over the 6 months preceding the interview date. Costs were annualized and stratified by cognitive status. Generalized linear models were employed to evaluate determinants of costs and HR-QoL. Results: Among 148 community-dwelling patients, the average annual total societal cost of AD care was 8,014 US$ [95% Confidence Interval (95% CI): 7,295 US$ - 8,844 US$] per patient. Total costs for patients in the severe stage (9,860 US$; 95% CI: 8,785 US$ - 11,328 US$) were almost twice those in the mild stage (5,524 US$; 95% CI: 4,649 US$ - 6,593 US$). The major cost driver was direct medical costs, particularly those incurred by AD prescriptions. Functional status was the strongest determinant of both total costs and the patient's HR-QoL (p-value < 0.001). Conclusions: Our real-world findings point to a distinct major cost driver, expensive AD treatment, emphasizing the demand for country-specific cost evidence. Increases in cognitive and functional status are significantly associated with decreases in the total costs of AD care and improvements in the patient's HR-QoL.
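One generic way to obtain the kind of 95% CI around a mean annual cost reported above is a percentile bootstrap. The sketch below is purely illustrative: the cost values are synthetic, and the study itself derives its inferences from generalized linear models rather than bootstrapping:

```python
import random

# Percentile-bootstrap 95% CI for a mean annual cost (synthetic data).

def bootstrap_ci_mean(values, n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(values) for _ in values) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]           # 2.5th percentile
    hi = means[int(n_boot * (1 - alpha / 2)) - 1] # 97.5th percentile
    return lo, hi

costs = [5524, 8014, 9860, 7295, 8844, 6593, 4649, 11328]  # synthetic US$
lo, hi = bootstrap_ci_mean(costs)
print(round(lo), round(hi))
```

The resulting interval brackets the sample mean; with skewed cost data (common in health economics) this percentile approach is often preferred over a normal approximation.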

Keywords: Alzheimer's disease, associations, costs, disease-severity indicators, health-related quality of life

Procedia PDF Downloads 125
13443 Reaching a Mobile and Dynamic Nose after Rhinoplasty: A Pilot Study

Authors: Guncel Ozturk

Abstract:

Background: Rhinoplasty is among the most commonly performed cosmetic operations in plastic surgery. Maneuvers used in rhinoplasty lead to a firm and stiff nasal tip in the early postoperative months. This unnatural stability of the nose may easily cause distortion of the reshaped nose after severe trauma. Moreover, a firm nasal tip may cause difficulties in performing activities such as touching, hugging, or kissing. Decreasing the stability and increasing the mobility of the nasal tip would help rhinoplasty patients avoid these small but relatively important problems. Methods: We used a delivery approach with closed rhinoplasty and changed the positions of the intranasal incisions to reach a dynamic and mobile nose. A total of 203 patients who had undergone primary closed rhinoplasty in private practice were inspected retrospectively. A posterior strut flap connected to the connective tissues at the caudal septum and the medial crura was formed. The cartilage of the posterior strut graft was left 2 mm thick in the distal part of the septum, it was cut vertically, and the connective tissue in the distal part was preserved. Results: The median patient age was 24 (range 17-42) years. The median follow-up period was 15.2 (range 12-26) months. Patient satisfaction was assessed with the 'Rhinoplasty Outcome Evaluation' (ROE) questionnaire. Twelve months after surgery, 87.5% of patients reported excellent outcomes according to the ROE. Conclusion: The soft tissue connections between that segment and the surrounding structures should be preserved to save the tip support while keeping the tip mobile with this method. These modifications provide a mobile, non-stiff, and dynamic nasal tip in the early postoperative months. Further prospective studies should be performed to support this method.

Keywords: closed rhinoplasty, dynamic, mobile, tip

Procedia PDF Downloads 119
13442 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, a scale effect arises in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid region of Chinese Inner Mongolia is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multi-dimensional Look-Up-Table (LUT) is generated for estimating the LAI of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter- and intra-class differences is constructed to analyze the scale effect of LAI inversion over an inhomogeneous surface. The results indicate that: (1) the LUT method based on classification and parameter sensitivity analysis is useful for the LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m². (2) The scale effect on LAI becomes more obvious as the image resolution decreases, with a maximum scale bias of more than 45%. (3) The inter-class scale effect is larger than the intra-class one, and it can be corrected efficiently by the scale transfer model established on the basis of Taylor expansion and computational geometry; after correction, the maximum scale bias is reduced to 1.2%.
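The core of a LUT inversion is a best-match search: for a measured spectrum, pick the table entry whose simulated spectrum minimizes a misfit such as the RMSE, and return its LAI. The toy table below is a hypothetical stand-in for PROSPECT+SAIL simulations:

```python
# Minimal LUT inversion sketch: nearest simulated spectrum wins.
# The 3-band "spectra" and LAI values are made up for illustration.

def invert_lai(measured, lut):
    """lut: list of (lai, simulated_spectrum); returns best-fit LAI."""
    def rmse(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return min(lut, key=lambda entry: rmse(measured, entry[1]))[0]

lut = [
    (0.5, [0.10, 0.30, 0.35]),
    (1.5, [0.08, 0.38, 0.45]),
    (3.0, [0.05, 0.45, 0.55]),
]
measured = [0.06, 0.44, 0.54]
print(invert_lai(measured, lut))  # 3.0
```

Real LUTs hold thousands of entries spanning several biophysical parameters, which is why the paper prunes them by crop class and parameter sensitivity before the search.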

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 432
13441 Managing Company's Reputation during Crisis: An Analysis of Croatia Airlines' Crisis Response Strategy to the Labor Unions' Strike Announcement

Authors: M. Polic, N. Cesarec Salopek

Abstract:

When it comes to crisis, no company, notwithstanding its financial success, power or reputation, is immune to the new environment and the circumstances emerging from it. The main challenge a company faces during a crisis is to protect its most valuable intangible asset: its reputation. A crisis has serious potential to disrupt a company's everyday operations and damage its reputation extremely fast, especially if the company did not anticipate the threats that may cause a crisis. Therefore, when a crisis happens, the company must respond to it directly; effective crisis communication can limit the consequences arising from the crisis and protect and repair the reputational damage caused to the company. Since every crisis is unique, each one requires a different crisis response strategy. In July 2018, airline labor unions threatened Croatia Airlines, the state-owned flag carrier of Croatia, with a strike that would have called regular flights into question and affected more than 7,600 passengers per day. This study explores the differences between the crisis response strategies that Croatia Airlines and the airline labor unions used during the crisis period within the Situational Crisis Communication Theory (SCCT), by analyzing the content of the formal communication tools used by both sides. Moreover, this study shows how Croatia Airlines successfully communicated to the general public the threat that the airline labor unions imposed on it, and how this was received by the Croatian media. Using qualitative and quantitative content analysis, the study reveals the frames that dominated the media articles during the crisis period.
The greatest significance of this study is that it provides deeper insight into how the transparent and consistent communication that Croatia Airlines used before and during the crisis period contributed to the decision of the competent court (Zagreb County Court), which prohibited the labor unions' strike in August 2018.

Keywords: crisis communication, crisis response strategy, Croatia Airlines, labor union, reputation management, situational crisis communication theory, strike

Procedia PDF Downloads 124
13440 Community-Based Reference Interval of Selected Clinical Chemistry Parameters Among Apparently Healthy Adolescents in Mekelle City, Tigrai, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: Locally established clinical laboratory reference intervals (RIs) are required to interpret laboratory test results for screening, diagnosis, and prognosis. The objective of this study was to establish reference intervals for clinical chemistry parameters among apparently healthy adolescents aged between 12 and 17 years in Mekelle, Tigrai, in the northern part of Ethiopia. Methods: A community-based cross-sectional study was conducted from December 2018 to March 2019 in Mekelle City among 172 males and 172 females selected using a multi-stage sampling technique. Blood samples were tested for fasting blood sugar (FBS), alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), creatinine, urea, total protein, albumin (ALB), and direct and total bilirubin (BIL.D and BIL.T) using the BioSystems A25 clinical chemistry analyzer. Results were analyzed using SPSS version 23 software based on the Clinical Laboratory Standard Institute (CLSI)/International Federation of Clinical Chemistry (IFCC) C28-A3 guideline, which defines the reference interval as the central 95% range between the 2.5th and 97.5th percentiles. The Mann-Whitney U test, descriptive statistics, and box-and-whisker plots were the statistical tools used for analysis. Results: This study observed statistically significant differences between males and females in the ALP, ALT, AST, urea and creatinine reference intervals. The established reference intervals for males and females, respectively, were: ALP (U/L) 79.48-492.12 versus 63.56-253.34, ALT (U/L) 4.54-23.69 versus 5.1-20.03, AST (U/L) 15.7-39.1 versus 13.3-28.5, urea (mg/dL) 9.33-24.99 versus 7.43-23.11, and creatinine (mg/dL) 0.393-0.957 versus 0.301-0.846. The combined RIs were: total protein (g/dL) 6.08-7.85, ALB (g/dL) 4.42-5.46, FBS (mg/dL) 65-110, BIL.D (mg/dL) 0.033-0.532, and BIL.T (mg/dL) 0.106-0.812. Conclusions: The results showed marked differences between the sexes and from company-derived values for the selected clinical chemistry parameters. Thus, the use of age- and sex-specific, locally established reference intervals for clinical chemistry parameters is recommended.
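The central-95% definition used above is straightforward to compute nonparametrically: sort the reference sample and read off the 2.5th and 97.5th percentiles. The sketch below uses a simple nearest-rank percentile on synthetic values, not the study data, and is only an approximation of the C28-A3 procedure:

```python
# Nonparametric reference interval sketch (central 95%, nearest-rank
# percentiles). Synthetic data; C28-A3 recommends n >= 120 per partition.

def reference_interval(values, low_p=2.5, high_p=97.5):
    xs = sorted(values)
    def percentile(p):
        rank = max(1, round(p / 100 * len(xs)))  # nearest-rank estimate
        return xs[rank - 1]
    return percentile(low_p), percentile(high_p)

synthetic = list(range(1, 201))  # 200 synthetic results: 1..200
print(reference_interval(synthetic))
```

Sex-specific RIs, as reported for ALP, ALT, AST, urea and creatinine, simply repeat this computation on each partition of the reference sample.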

Keywords: reference interval, adolescent, clinical chemistry, Ethiopia

Procedia PDF Downloads 65
13439 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for monitoring the Earth's surface. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared to that of dual or hybrid polarimetric images. The search for ways to augment dual polarimetric data to full polarimetric data therefore aims at the full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the recently investigated reconstruction techniques are undeniable, the existing methods are mostly based on model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome these problems, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on the different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
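The idea of "combining different terms in the cost function" can be illustrated without any deep learning machinery: a pixel-wise reconstruction term plus a term on a derived polarimetric feature (here a stand-in "span", the per-pixel sum over channels). The weights and the feature choice below are hypothetical; the paper's actual loss drives a CNN on real SAR data:

```python
# Composite-loss sketch: pixel MSE plus MSE on a derived per-pixel feature.
# Channels are flat lists of pixel intensities (toy data).

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def composite_loss(pred_channels, true_channels, w_pixel=1.0, w_span=0.5):
    pixel_term = sum(mse(p, t) for p, t in zip(pred_channels, true_channels))
    span_pred = [sum(px) for px in zip(*pred_channels)]  # per-pixel "span"
    span_true = [sum(px) for px in zip(*true_channels)]
    return w_pixel * pixel_term + w_span * mse(span_pred, span_true)

pred = [[1.0, 2.0], [0.5, 0.5]]   # two channels, two pixels
true = [[1.0, 1.0], [0.5, 1.5]]
print(composite_loss(pred, true))
```

Note how the span term can be zero even when the channel-wise term is not: the two terms constrain different physical aspects of the reconstruction, which is exactly the point of a multi-term polarimetric loss.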

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 74
13438 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EV), low-energy-consumption systems have become increasingly important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also known as an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as high temporal resolution, equivalent to up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor captures only pixels whose intensity changes, so no signal is produced in regions without intensity change. This makes it more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed at the source. On the other hand, the data are difficult to handle because the format differs completely from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp, with no intensity such as RGB values. Existing algorithms therefore cannot be applied directly, and new processing algorithms must be designed for DVS data. To overcome these format differences, most prior works accumulate the events into frame data and feed it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with frame data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for RGB pixel values, polarity information alone is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first divide the event stream into frames over fixed time periods and then assign each pixel an intensity value according to the timestamps within the frame; for example, a high value is given to a recent signal. We expect this representation to capture features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car, targeting a surveillance application that detects persons around the vehicle. We consider the DVS an ideal sensor for surveillance because it can run for a long time with low energy consumption in a mostly static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same manner but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
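The timestamp-based representation described above can be sketched as follows; this is a minimal illustration under our own assumptions (function and variable names are hypothetical, and the abstract does not specify the exact normalization), not the authors' implementation:

```python
import numpy as np

def events_to_time_frame(events, height, width, t_start, t_end):
    """Accumulate asynchronous DVS events from one time window into a
    single frame whose pixel intensity encodes event recency: recent
    events get values near 1, older events values near 0.
    `events` is an array of (x, y, polarity, timestamp) rows."""
    frame = np.zeros((height, width), dtype=np.float32)
    # Process events in time order so the most recent event at each pixel wins.
    for x, y, _pol, t in events[np.argsort(events[:, 3])]:
        if t_start <= t < t_end:
            frame[int(y), int(x)] = (t - t_start) / (t_end - t_start)
    return frame

# Toy example: three events on a 4x4 sensor over a 1000-us window.
events = np.array([
    [0.0, 0.0, +1, 100.0],
    [1.0, 2.0, -1, 900.0],
    [0.0, 0.0, +1, 500.0],  # later event at (0, 0) overwrites the first
])
frame = events_to_time_frame(events, height=4, width=4,
                             t_start=0.0, t_end=1000.0)
```

A stack of such frames can then be fed to a CNN in place of the polarity frames used by the benchmark.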

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 82
13437 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis

Authors: Srinaath Anbu Durai, Wang Zhaoxia

Abstract:

Twitter sentiment has been used as a predictor of price values and trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon behavioural economics to show that sentiment and emotions impact economic decisions. The latest works in this stream focus on the algorithms used rather than the data used. A literature review of this stream through the lens of the data used shows a paucity of work considering the impact of sentiment caused by an external factor on either the stock or the housing market, despite an abundance of work in behavioural economics showing that sentiment and emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. SNSCRAPE is used to collect tweets pertaining to Covid-19; the lexicon-based tools VADER and TextBlob are used for sentiment analysis; Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score; and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 is studied as a predictor of HDB prices in Singapore in comparison with the traditional predictors of housing prices, i.e., structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and that it outperforms two of the traditional predictors individually. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates real-world economic applications of sentiment analysis of Twitter data.

Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks

Procedia PDF Downloads 97
13436 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints

Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu

Abstract:

Attributed graph clustering can exploit graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding the understanding and analysis of complex systems. Applying contrastive learning to attributed graph clustering can effectively exploit meaningful implicit relationships within the data. However, existing contrastive-learning-based attributed graph clustering methods suffer from the following drawbacks: 1) complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift; 2) the selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). First, Siamese Multilayer Perceptron (MLP) encoders are employed to generate the two views separately, avoiding complex data augmentation. Second, a neighborhood contrastive loss is introduced to constrain node representations using the local topological structure, while attribute information is effectively embedded through attribute reconstruction. Additionally, a clustering-oriented contrastive loss is applied to fully utilize the clustering information in the global semantics for discriminative node representations, treating the cluster centers from the two views as negative samples to leverage effective clustering information across views. Comparative clustering results against existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
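The neighborhood contrastive loss described above can be sketched in an InfoNCE style; the function name, exact positive/negative assignment, and temperature are our assumptions from the abstract, not the paper's exact formulation:

```python
import numpy as np

def neighborhood_contrastive_loss(z1, z2, adj, tau=0.5):
    """Node i in view 1 treats itself and its graph neighbors in view 2
    as positives and all other nodes as negatives.
    z1, z2: row-normalized node embeddings from the two MLP views.
    adj: binary adjacency matrix with self-loops added."""
    sim = np.exp(z1 @ z2.T / tau)          # cross-view similarity matrix
    pos = (sim * adj).sum(axis=1)          # positives: self + neighbors
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

# Toy demo: 4 nodes, one edge (0-1), random unit embeddings per view.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8)); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = rng.normal(size=(4, 8)); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
adj = np.eye(4); adj[0, 1] = adj[1, 0] = 1.0
loss = neighborhood_contrastive_loss(z1, z2, adj)
```

Minimizing this term pulls each node's two views (and its neighbors' views) together while pushing other nodes apart; the clustering-oriented loss would add an analogous term over cluster centers.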

Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning

Procedia PDF Downloads 31
13435 Effects of Ultraviolet Treatment on Microbiological Load and Phenolic Content of Vegetable Juice

Authors: Kubra Dogan, Fatih Tornuk

Abstract:

Due to increasing consumer demand for high-quality food products and growing awareness of the health benefits of different nutrients in food, minimal processing has become more popular in modern food preservation. To date, heat treatment has often been used for inactivation of spoilage microorganisms in foods; however, it may cause significant changes in the quality and nutritional properties of food. To overcome the detrimental effects of heat treatment, several non-thermal microbial inactivation processes have been investigated as alternatives. Ultraviolet (UV) inactivation is a promising and feasible alternative to heat treatment for better quality and longer shelf life, aiming to inhibit spoilage and pathogenic microorganisms and to inactivate enzymes in vegetable juice production. UV-C is a sub-class of UV treatment that shows the highest germicidal effect between 250 and 270 nm; the 254 nm wavelength is used for disinfection of certain liquid food products such as vegetable juice. The effects of UV-C treatment on the microbiological load and quality parameters of a vegetable juice blend of celery, carrot, lemon, and orange were investigated. Our results showed that, after three months of storage, the UV-C-treated vegetable juice had a TMAB count 3.5 log cfu/g lower and a yeast-mold count 2 log cfu/g lower than the control sample. Total phenolic content was found to be 514.3 ± 0.6 mg gallic acid equivalent/L, with no significant difference compared to the control. The present work suggests that UV-C treatment is an alternative method for disinfection of vegetable juice, since it enables adequate microbial inactivation and longer shelf life with minimal degradation of the quality parameters of vegetable juice.
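The log-reduction figures reported above are base-10 logarithms of the ratio of counts. A minimal worked example of the arithmetic (the initial count of 1e6 cfu/g is an assumed illustrative value; the abstract does not report initial counts):

```python
import math

def log_reduction(n_initial, n_final):
    """Log10 reduction between initial and surviving counts (cfu/g)."""
    return math.log10(n_initial) - math.log10(n_final)

def survivors(n_initial, reduction):
    """Surviving count implied by a given log10 reduction."""
    return n_initial / 10 ** reduction

# A 3.5-log reduction from an assumed 1e6 cfu/g leaves roughly 316 cfu/g,
# i.e. over 99.96% of the initial population is inactivated.
remaining = survivors(1e6, 3.5)
```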

Keywords: heat treatment, phenolic content, shelf life, ultraviolet (UV-C), vegetable juice

Procedia PDF Downloads 199