Search results for: the difficult post-earthquake reconstruction in Italy
3366 Reconstruction of Holographic Dark Energy in Chameleon Brans-Dicke Cosmology
Authors: Surajit Chattopadhyay
Abstract:
Accelerated expansion of the current universe is well established in the literature. Dark energy and modified gravity are two approaches to account for this accelerated expansion. In the present work, we consider scalar field models of dark energy, namely the tachyon and the DBI-essence, in the framework of chameleon Brans-Dicke cosmology. The equation of state parameter is reconstructed, and the subsequent cosmological implications are studied. We examine the stability of the obtained solutions for the crossing of the phantom divide under a quantum correction of massless conformally invariant fields, and we find that the quantum correction can be small when the phantom crossing occurs, so that the obtained solutions of the phantom crossing can be stable under this correction. In the subsequent phase, we establish a correspondence between the NHDE model and the quintessence, DBI-essence and tachyon scalar field models in the framework of chameleon Brans-Dicke cosmology. We reconstruct the potentials and the dynamics for these three scalar field models. The reconstructed potentials are found to increase with the evolution of the universe, and at a very late stage they are observed to decay.
Keywords: dark energy, holographic principle, modified gravity, reconstruction
Procedia PDF Downloads 412
3365 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes an input and maps it to a hidden representation through a deterministic mapping with a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
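As an illustration of the decompose-then-denoise idea described above, the following minimal Python sketch uses a total-variation split as the decomposition step and one stackable denoising auto-encoder layer. The layer sizes, dropout rate and TV weight are illustrative assumptions, not the authors' configuration; PyTorch and scikit-image are assumed to be available.

```python
# Minimal sketch (assumptions: PyTorch available; sizes and weights illustrative).
import torch
import torch.nn as nn
from skimage.restoration import denoise_tv_chambolle

def decompose(image, tv_weight=0.1):
    """Split an image into an intrinsic (TV-smoothed) part and a nuisance residual."""
    intrinsic = denoise_tv_chambolle(image, weight=tv_weight)
    nuisance = image - intrinsic
    return intrinsic, nuisance

class DenoisingAutoEncoder(nn.Module):
    """One stackable denoising auto-encoder layer: corrupt -> encode -> decode."""
    def __init__(self, n_in, n_hidden, dropout=0.2):
        super().__init__()
        self.corrupt = nn.Dropout(dropout)          # stochastic corruption of the input
        self.encode = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decode = nn.Linear(n_hidden, n_in)     # reconstruction has the input size

    def forward(self, x):
        return self.decode(self.encode(self.corrupt(x)))

def train(dae, patches, epochs=10, lr=1e-3):
    """Squared-error loss with stochastic gradient descent and back-propagation."""
    opt = torch.optim.SGD(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for clean in patches:                       # clean = flattened intrinsic patch tensor
            opt.zero_grad()
            loss = loss_fn(dae(clean), clean)       # reconstruct the clean target
            loss.backward()
            opt.step()
```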
Procedia PDF Downloads 190
3364 Influence of Optical Fluence Distribution on Photoacoustic Imaging
Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim
Abstract:
Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises, and thermal expansion occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, it is known that the optical fluence distribution within the object is non-uniform. This could affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform fluence case is wider than in the uniform case by 23%. The frequency spectrum of the PA signal due to the non-uniform fluence lacks some high-frequency components in comparison with the uniform case. Consequently, the reconstructed image with the non-uniform fluence exhibits a strong smoothing effect.
Keywords: finite element method, fluence distribution, Monte Carlo method, photoacoustic imaging
Procedia PDF Downloads 377
3363 An Alternative Framework of Multi-Resolution Nested Weighted Essentially Non-Oscillatory Schemes for Solving Euler Equations with Adaptive Order
Authors: Zhenming Wang, Jun Zhu, Yuchen Yang, Ning Zhao
Abstract:
In the present paper, an alternative framework is proposed to construct a class of finite difference multi-resolution nested weighted essentially non-oscillatory (WENO) schemes with an increasingly higher order of accuracy for solving the inviscid Euler equations. These WENO schemes first obtain a set of reconstruction polynomials from a hierarchy of nested central spatial stencils and then recursively achieve a higher-order approximation through the lower-order WENO schemes. The linear weights of such WENO schemes can be set as any positive numbers with the requirement that their sum equals one; they do not pollute the optimal order of accuracy in smooth regions and can simultaneously suppress spurious oscillations near discontinuities. The numerical results obtained indicate that these alternative finite-difference multi-resolution nested WENO schemes with different accuracies are very robust, exhibit low dissipation, and use as few reconstruction stencils as possible while maintaining the same efficiency, achieving the high-resolution property without any equivalent multi-resolution representation. In addition, the finite volume form of the schemes is easier to implement on unstructured grids.
Keywords: finite-difference, WENO schemes, high order, inviscid Euler equations, multi-resolution
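To make the role of linear weights and smoothness-based nonlinear weights concrete, the sketch below implements the classic fifth-order WENO-JS reconstruction at a cell interface. It is shown only as background: the nested multi-resolution schemes of the paper build their stencils and weights differently.

```python
# Classic fifth-order WENO-JS reconstruction at the interface x_{i+1/2} (numpy sketch).
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """v: five consecutive cell averages [v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}]."""
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order reconstructions on the three sub-stencils.
    p = np.array([( 2*vm2 - 7*vm1 + 11*v0 ) / 6.0,
                  (  -vm1 + 5*v0  +  2*vp1) / 6.0,
                  ( 2*v0  + 5*vp1 -    vp2) / 6.0])
    # Smoothness indicators (large near discontinuities).
    beta = np.array([13/12*(vm2 - 2*vm1 + v0 )**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2,
                     13/12*(vm1 - 2*v0  + vp1)**2 + 0.25*(vm1 - vp1)**2,
                     13/12*(v0  - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2])
    d = np.array([0.1, 0.6, 0.3])          # linear (optimal) weights, summing to one
    alpha = d / (eps + beta)**2            # non-linear weights downweight rough stencils
    omega = alpha / alpha.sum()
    return float(np.dot(omega, p))
```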
Procedia PDF Downloads 145
3362 The Application of Collision Damage Analysis in Reconstruction of Sedan-Scooter Accidents
Authors: Chun-Liang Wu, Kai-Ping Shaw, Cheng-Ping Yu, Wu-Chien Chien, Hsiao-Ting Chen, Shao-Huang Wu
Abstract:
Objective: This study analyzed three criminal judicial cases. We applied damage analysis of the two vehicles in each case to verify other evidence, such as the dashboard camera record of each accident, reconstruct the scenes, and pursue the truth. Methods: Evidence analysis is the method of collecting the evidence and the reasoning behind the results in judicial procedures, and then analyzing the involved damage evidence to verify the other evidence. The collision damage analysis method is to inspect the damage to the vehicles and utilize the principles of tool mark analysis, Newtonian physics, and vehicle structure to understand the relevant factors when the vehicles collide. Results: Case 1: Sedan A turned right at a T junction and collided with Scooter B, which was going straight on the left road. The dashboard camera record showed that the left side of Sedan A’s front bumper collided with the body of Scooter B and rider B. After the analysis in this study, the truth was that the front of the left side of Sedan A impacted the right pedal of Scooter B and the right lower limb of rider B. Case 2: Sedan C collided with Scooter D on the left road at a crossroads. The dashboard camera record showed that the left side of Sedan C’s front bumper collided with the body of Scooter D and rider D. After the analysis in this study, the truth was that the left side of Sedan C impacted the left side of the body and the front wheel of Scooter D and rider D. Case 3: Sedan E collided with Scooter F on the right road at a crossroads. The dashboard camera record showed that the right side of Sedan E’s front bumper collided with the body of Scooter F and rider F. After the analysis in this study, the truth was that the right side of the front bumper and the right side of Sedan E impacted Scooter F. Conclusion: The application of collision damage analysis in the reconstruction of a sedan-scooter collision could reveal the truth and provide a basis for judicial justice. The cases and methods could serve as a reference for road safety policy.
Keywords: evidence analysis, collision damage analysis, accident reconstruction, sedan-scooter collision, dashboard camera records
Procedia PDF Downloads 78
3361 The Outcome of Early Balance Exercises and Agility Training in Sports Rehabilitation for Patients Post Anterior Cruciate Ligament (ACL) Reconstruction
Authors: S. M. A. Ismail, M. I. Ibrahim, H. Masdar, F. M. Effendi, M. F. Suhaimi, A. Suun
Abstract:
Introduction: It is generally known that the rehabilitation process is as important as the reconstruction surgery. Several studies have focused on how early rehabilitation modalities can be initiated after surgery to ensure a safe return of patients to sports, or at least regaining the pre-injury level of function, following ACL reconstruction. Objectives: The main objective is to study and evaluate the outcome of early balance exercises and agility training in sports rehabilitation for patients after ACL reconstruction, and to compare early balance exercises and agility training (the intervention, or "material" protocol) with a control ("non-material") protocol. Patients were recruited either for material exercise (balance exercises and agility training with strengthening) or for a strengthening-only rehabilitation protocol (non-material), following a prospective intervention trial design. Materials and Methods: Post-operative ACL reconstruction patients operated on at Selayang and Sg Buloh Hospitals from 2012 to 2014 were selected for this study. They were taken from the Malaysian Knee Ligament Registry (MKLR), and all patients had single-bundle reconstruction with an autograft hamstring tendon (semitendinosus and gracilis). ACL injuries from any type of sport were included. Subjects performed a different type of rehabilitation activity in each of 18 weekly sessions. All subjects attended all 18 sessions of rehabilitation exercises, and evaluation was done during the first, 9th and 18th sessions. The evaluation format was based on clinical assessment (anterior drawer, Lachmann, pivot shift, laxity with a rolimeter, the end point and thigh circumference) and scoring (Lysholm Knee score and Tegner Activity Level scale). The rehabilitation protocol was initiated 24 weeks after surgery. Results and Discussion: 100 patients were selected, of which 94 were male and 6 female. The age range was 18 to 54 years, with an average of 28 years. All patients were evaluated 24 weeks after surgery; 50 of them were recruited for material exercise (balance exercises and agility training with strengthening) and 50 for the strengthening-only rehabilitation protocol (non-material). Demographically, 85% suffered a sports injury, mainly from futsal and football; 39% had an abnormal BMI (26-38) and involvement of the left knee. 100% of patients had a basic radiographic X-ray of the knee and 98% had MRI. All patients had negative anterior drawer, Lachman and pivot shift tests post ACL reconstruction after the complete rehabilitation. There were 95 subjects who sustained a grade I injury, 5 grade II and 0 grade III, with 90% of them having a soft end-point. Overall, they scored badly on presentation, with a Lysholm score of 53% (poor) and a Tegner activity level of 3/10. After completing 9 weeks of exercises, 90% of the material group had grade I laxity, 75% a firm end-point, a Lysholm score of 71% (fair) and a Tegner activity level of 5/10, compared with the non-material group, which had 62% grade I laxity, 54% firm end-point, a Lysholm score of 62% (poor) and a Tegner activity level of 4/10.
After completing 18 weeks of exercises, the material group maintained 90% grade I laxity with 100% firm end-point, the Lysholm score increased to 91% (excellent) and the Tegner activity level to 7/10, compared with the non-material group, which had 69% grade I laxity but maintained 54% firm end-point, a Lysholm score of 76% (fair) and a Tegner activity level of 5/10. This showed that improvement was achieved faster in the material group, which reached a satisfactory level after the 9th session of exercises, 75% (15/20), compared with the non-material group, which only achieved 54% (7/13) after completing the 18th session. Most of them were grade I. These concepts are consolidated into our approach to prepare patients for return to play, including field testing and maintenance training. Conclusions: The basic approach in ACL rehabilitation is to ensure return to sports at post-operative 6 months. Grade I and II laxity has a favourable and early satisfactory outcome based on clinical assessment and the Lysholm and Tegner scores. A reduction in laxity grading indicates a satisfactory outcome. A firm end-point shows the adequacy of rehabilitation before resuming the previous sport. Material exercise (balance exercises and agility training with strengthening) was beneficial and reliable for achieving a favourable and early satisfactory outcome compared with strengthening only (non-material). We have identified that the rehabilitation protocol varies between different patients; therefore, future post ACL reconstruction rehabilitation guidelines should focus on rehabilitation techniques instead of time.
Keywords: post anterior cruciate ligament (ACL) reconstruction, single bundle, hamstring tendon, sports rehabilitation, balance exercises, agility balance
Procedia PDF Downloads 255
3360 Designing and Making Sustainable Architectural Clothing Inspired by Reconstruction of Bam’s Bazaar
Authors: Marzieh Khaleghi Baygi, Maryam Khaleghy Baygy
Abstract:
The main aim of this project was designing and making a sustainable architectural wearable dress inspired by the reconstruction project of Bam’s Bazaar in Iran. To achieve the goals of this project, the Bam Bazaar became the architectural reference. A mixed research method (including applied, qualitative and case study methods) was used. After research, data gathering and consideration of the related intellectual, mental and cultural background, the garment was modeled using 3ds Max's modeling tools and Marvelous. After making the pattern, the wearable architecture was built, and an architectural and historical building was converted into clothing. The implementation of the sustainable architectural clothing took seventeen months. The result of this project was a garment in a new form, worn on its architect's body. A comparison between the present project and previous research focusing on the same subject (architectural clothing) shows some dramatic differences, including that the architect, designer and executor of this project was the same person, who was also the main researcher. Also, in this research, special attention was paid to sustainability, volumes and forms. Most projects on this subject (especially previous related Iranian research) relied on painting and not on volumes and forms. The sustainable, immovable architecture, worn by its architect, became clothing on a moving human body.
Keywords: wearable architecture, clothing, Bam Bazaar, space, sustainability
Procedia PDF Downloads 61
3359 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its use is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3-D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3-D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3-D tree point clouds (the truth values are given by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in the extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input in ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3-D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
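One way to picture the "deconstruction" of TLS clouds into overhead-only (ALS/photogrammetry-like) clouds is to keep, in each horizontal grid cell, only the points near the top of the canopy. The sketch below is a hypothetical illustration of that idea; the grid size, the 0.5 m top slab, and the keep-highest rule are assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch: thin a TLS point cloud into a canopy-only cloud.
import numpy as np

def simulate_overhead_view(points, cell=0.25, slab=0.5):
    """points: (N, 3) array of x, y, z coordinates from TLS.
    Returns the subset roughly visible to a nadir-looking sensor."""
    ij = np.floor(points[:, :2] / cell).astype(int)          # horizontal cell indices
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]                   # one key per (i, j) cell
    keep = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        z_top = points[idx, 2].max()
        keep[idx[points[idx, 2] >= z_top - slab]] = True     # keep only the top slab
    return points[keep]
```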
Procedia PDF Downloads 102
3358 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes in order to determine an optimum penalty factor, considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and the patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both the phantom and clinical data, an increase in the β-value translates into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise is reduced more than SUV at the highest β-value. In the comparison of BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10mm) and 12.6% for the largest sphere (37mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to more quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
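The relative difference penalty that BSREM adds to the objective penalizes differences between neighbouring voxels relative to their sum, which is why raising β trades contrast and SUV for lower noise. The sketch below shows that penalty for a simple 1-D profile with nearest-neighbour pairs; the value of gamma and the neighbourhood definition are illustrative assumptions, not the scanner vendor's exact implementation.

```python
# Sketch of a relative difference penalty term of the kind BSREM subtracts from the log-likelihood.
import numpy as np

def relative_difference_penalty(x, beta, gamma=2.0, eps=1e-12):
    """x: 1-D array of non-negative voxel values. Returns beta times the summed penalty."""
    d = x[1:] - x[:-1]                       # differences between neighbouring voxels
    s = x[1:] + x[:-1]
    penalty = np.sum(d**2 / (s + gamma * np.abs(d) + eps))
    return beta * penalty                    # larger beta -> stronger smoothing, lower noise

# Conceptually: maximize  log_likelihood(x) - relative_difference_penalty(x, beta),
# so increasing beta from 100 to 500 lowers noise at the cost of SUV and small-lesion contrast.
```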
Procedia PDF Downloads 96
3357 Using ANN in Emergency Reconstruction Projects Post Disaster
Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir
Abstract:
Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trials. This study aimed to identify the effective delay factors in construction projects in Iraq that affect time, cost and the specified quality, and to find the best solutions to address delays and solve the problem by setting parameters that restore balance. Thirty projects in different areas of construction were selected as the sample for this study. Design/methodology/approach: This study discusses the reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq (with Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants from the case projects provided data about the projects through a data collection instrument distributed as a survey. A mixed approach and methods were applied in this study. Mathematical data analysis was used to construct models that predict delay in the time and cost of projects before they start. Artificial neural network (ANN) analysis was selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to remove the obstacles thoroughly in order to develop this industry in Iraq. This approach was applied using the data collected through the survey and questionnaire as the information source. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of designs/plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries/regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and had never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be used for complicated problems when there is no interpretation or solution for a problem. In some cases statistical analysis was conducted, and in some cases the problem did not follow a linear equation or there was a weak correlation; thus we suggested using ANNs, because they are suited to nonlinear problems for finding the relationship between input and output data, and that was very supportive.
Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management
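A minimal sketch of the kind of feed-forward ANN that can map delay-factor scores to a predicted overrun is given below, using scikit-learn as a stand-in. The feature names, the synthetic data, the network size and the train/test split are hypothetical; the study's own survey coding and model architecture may differ.

```python
# Feed-forward ANN sketch: predict % time overrun from delay-factor severity scores.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per project, columns = severity scores of delay factors
# (e.g. contractor failure, design changes, security issues, low-price bid, weather, owner failure).
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(30, 6))                  # 30 projects, 6 delay factors (synthetic)
y = X @ np.array([0.9, 0.6, 1.2, 0.4, 0.3, 0.5])     # synthetic % time overrun

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))      # predicted overrun vs. actual
```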
Procedia PDF Downloads 165
3356 The Effectiveness of Kinesio Taping in Enhancing Early Post-Operative Outcomes in Patients after Total Knee Replacement or Anterior Cruciate Ligament Reconstruction
Authors: B. A. Alwahaby
Abstract:
Background: The number of Total Knee Replacement (TKR) and Anterior Cruciate Ligament Reconstruction (ACLR) procedures performed every year is increasing. The main aim of early physiotherapy rehabilitation after these surgeries is to control pain and edema and to regain Range of Motion (ROM) and physical activity. All of these outcomes need to be managed with safe and effective modalities. Kinesio taping (KT) is an elastic, non-invasive therapeutic tape that has become recognised in different physiotherapy settings such as injury prevention, rehabilitation, and performance enhancement, and has been used for different conditions. However, there is still clinical doubt regarding the effectiveness of KT due to inconclusive supporting evidence. The aim of this systematic review is to collate all the available evidence on the effectiveness of KT in the early rehabilitation of ACLR and TKR patients and to analyse whether the use of KT combined with standard rehabilitation facilitates recovery of post-operative outcomes more than standard rehabilitation alone. Methodology: A systematic review was conducted. The Medline, EMBASE, Scopus, AMED, PEDro, CINAHL, and Web of Science databases were searched. Each study was assessed for inclusion, and methodological quality appraisal was undertaken by two reviewers using the JBI critical appraisal tools. The studies were then synthesised qualitatively due to heterogeneity between studies. Results: Five moderate- to low-quality RCTs were located. All five studies demonstrated statistically significant improvements in pain, swelling, ROM, and functional outcomes (p < 0.05). In between-group comparisons, KT combined with standardised rehabilitation was shown to be significantly more effective than standardised rehabilitation alone for pain and swelling (p < 0.05). However, there were inconsistent findings for ROM, and no statistically significant differences were reported between groups for functional outcomes (p > 0.05). Conclusion: Research in the area is generally of low quality; however, there is consistent evidence to support the use of KT combined with standardised post-operative rehabilitation for reducing pain and swelling. There is also some evidence that KT may be effective in combination with standardised rehabilitation to regain knee extension ROM faster than standardised rehabilitation alone, but further primary research is required to confirm this.
Keywords: anterior cruciate ligament reconstruction, ACLR, kinesio taping, KT, postoperative, total knee replacement, TKR
Procedia PDF Downloads 122
3355 Inversion of Gravity Data for Density Reconstruction
Authors: Arka Roy, Chandra Prakash Dubey
Abstract:
Inverse problems are generally used for recovering hidden information from available outside data. We use the vertical component of the gravity field to calculate the underlying density structure. Ill-posedness is the main obstacle in any inverse problem. Linear regularization using the Tikhonov formulation is used with an appropriate choice of SVD and GSVD components. For handling real data, the noise level should be low relative to the signal in order to obtain a reliable solution. In our study, 2D and 3D synthetic models with rectangular grids are used for the gravity field calculation and its corresponding inversion for density reconstruction. A fine grid is also considered in order to capture any irregular structure. Keeping the algebraic ambiguity factor in mind, the number of observation points should be greater than the number of data points. A Picard plot is presented here for choosing the appropriate, or main controlling, eigenvalues for a regularized solution. Another important tool is the depth resolution plot (DRP). DRPs are generally used for studying how the inversion is influenced by regularization or discretization. Our further study involves the inversion of real gravity data from the Vredefort Dome, South Africa. We apply our method to these data. The resulting density structure is in good agreement with the known formation in that region, which provides additional support for our method.
Keywords: depth resolution plot, gravity inversion, Picard plot, SVD, Tikhonov formulation
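The following minimal numpy sketch shows zeroth-order Tikhonov regularization via the SVD, together with the quantities one inspects in a Picard plot. The forward operator G (the gravity kernel matrix) and the regularization parameter lam are assumed to be given; the commented example values are illustrative only.

```python
# Zeroth-order Tikhonov regularization through the SVD, with Picard-plot quantities.
import numpy as np

def tikhonov_svd(G, d, lam):
    """Solve min ||G m - d||^2 + lam^2 ||m||^2 using the SVD of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    utd = U.T @ d                                   # Fourier coefficients u_i^T d
    filt = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors
    m = Vt.T @ (filt * utd / s)                     # filtered expansion of the solution
    picard = {"singular_values": s,                 # plot these three against index i:
              "coefficients": np.abs(utd),          # |u_i^T d| should decay faster than s_i,
              "ratios": np.abs(utd) / s}            # otherwise the term is noise-dominated
    return m, picard

# Example with a synthetic kernel (illustrative values):
# G = np.random.rand(80, 60); m_true = np.zeros(60); m_true[25:35] = 1.0
# d = G @ m_true + 0.01 * np.random.randn(80)
# m_est, picard = tikhonov_svd(G, d, lam=0.05)
```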
Procedia PDF Downloads 212
3354 Protection from Risks of Natural Disasters and Social and Economic Support to the Native Population
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The risk of natural disasters affects all the countries of the world, whether it concerns seismic events, tsunamis or hydrogeological disasters. In most cases, the risk can be considered in its three components: hazard, exposure, and vulnerability (including urban vulnerability). The aim of this paper is to evaluate how the Italian scientific community has related the contributions of these three components, superimposing the three different maps that summarize the fundamental structure of the risk. Based on the three components considered, the study applies the regional planning methodology to the three phases of the risk protection and mitigation process: the prevention phase, the emergency intervention phase, and the post-disaster phase. The paper illustrates the Italian experience of pre-, during- and post-earthquake intervention. Main results: The study deepens these aspects in the belief that a “historical center” and an “island” can present similar problems at the international level, in the prevention phase (earthquakes, tsunamis, hydrogeological disasters), in the emergency phase (intervention protocols and procedures) and in the post-disaster phase. The conclusions of the research identify the need to plan in advance how to deal with the post-disaster phase and to consider it a priority with respect to the simple reconstruction of destroyed buildings. In fact, the main result of post-disaster intervention must be the return and the social and economic support of the indigenous population, and not only the construction of new housing and equipment. In this sense, the results of the research show that the elderly inhabitants of a historic center can be compared to the indigenous population of an atoll of fishermen, as both constitute the most important resource: the human resource. Their return in conditions of security testifies, by their presence, to the culture, customs, and values rooted in the history of a people.
Keywords: post-disaster interventions, risk of natural disasters in Italy and abroad, seismic events in Italy, social and economic protection and support for the native population of historical centers
Procedia PDF Downloads 100
3353 Development of an Indoor Drone Designed for the Needs of the Creative Industries
Authors: V. Santamarina Campos, M. de Miguel Molina, S. Kröner, B. de Miguel Molina
Abstract:
With this contribution, we want to show how the AiRT system could change the future way of working of a part of the creative industries and what new economic opportunities could arise for them. Remotely Piloted Aircraft Systems (RPAS), more commonly known as drones, are now essential tools used by many different companies for their creative outdoor work. However, using this very flexible tool indoors is almost impossible, since safe navigation cannot be guaranteed by the operator due to the lack of a reliable and affordable indoor positioning system that ensures stable flight, among other issues. Here we present our first results of a European project, which consists of developing an indoor drone for professional footage, especially designed for the creative industries. One of the main achievements of this project is the successful involvement of the end-users in the overall design process from the very beginning. To ensure safe flight in confined spaces, our drone incorporates a positioning system based on ultra-wide band technology, an RGB-D (depth) camera for 3D environment reconstruction, and the possibility to fully pre-program automatic flights. Since we also want to offer this tool to inexperienced pilots, we have always focused on user-friendly handling of the whole system throughout the entire process.
Keywords: virtual reality, 3D reconstruction, indoor positioning system, RPAS, remotely piloted aircraft systems, aerial film, intelligent navigation, advanced safety measures, creative industries
Procedia PDF Downloads 196
3352 Applying Dictogloss Technique to Improve Auditory Learners’ Writing Skills in Second Language Learning
Authors: Aji Budi Rinekso
Abstract:
There are some common problems that students often face in writing. The problems are related to the macro and micro skills of writing, such as incorrect spelling, inappropriate diction, grammatical errors, random ideas, and irrelevant supporting sentences. Therefore, a teaching technique that can solve those problems is needed. The dictogloss technique is a teaching technique that involves listening practice, so it is a suitable technique for students with an auditory learning style. The dictogloss technique comprises four basic steps: (1) warm up, (2) dictation, (3) reconstruction and (4) analysis and correction. Warm up is when students find out about the topic and do some preparatory vocabulary work. Dictation is when the students listen to a text read at normal speed by the teacher. The text is read twice: at the first reading the students only listen, and at the second reading they listen again and take notes. Next, reconstruction is when the students discuss the information from the text read by the teacher and start to write a text. Lastly, analysis and correction is when the students check their writing and revise it. Dictogloss offers some advantages for improving writing skills. Through the use of the dictogloss technique, students can solve their problems in both macro and micro skills. Easier generation of ideas and better writing mechanics are among the benefits of dictogloss.
Keywords: auditory learners, writing skills, dictogloss technique, second language learning
Procedia PDF Downloads 143
3351 Freedom, Thought, and the Will: A Philosophical Reconstruction of Muhammad Iqbal’s Conception of Human Agency
Authors: Anwar ul Haq
Abstract:
Muhammad Iqbal was arguably the most significant South Asian Islamic philosopher of the last two centuries. While he is the most revered philosopher of the region, particularly in Pakistan, he is probably the least studied philosopher outside the region. The paper offers a philosophical reconstruction of Iqbal’s view of human agency; it has three sections. Section 1 focuses on Iqbal’s starting point of reflection in practical philosophy (inspired by Kant): our consciousness of ourselves as free agents. The paper brings out Iqbal’s continuity with Kant but also his divergence, in particular his non-Kantian view that we possess a non-sensory intuition of ourselves as free personal causes. It also offers an argument on Iqbal’s behalf for this claim, which is meant as a defense against a Kantian objection to the possibility of an intuition of freedom and a skeptic’s challenge to the possibility of freedom in general. The remainder of the paper offers a reconstruction of Iqbal’s two preconditions of the possibility of free agency. Section 2 discusses the first precondition, namely, the unity of consciousness involved in thought (this is a precondition of agency whether or not it is free). The unity has two aspects, a quantitative (or numerical) aspect and a qualitative (or rational) one. Section 2 offers a defense of these two aspects of the unity of consciousness presupposed by agency by focusing, with Iqbal, on the case of inference. Section 3 discusses a second precondition of the possibility of free agency: that thought and will must be identical in a free agent. Iqbal offers this condition in relief against Bergson’s view. Bergson (on Iqbal’s reading of him) argues that freedom of the will is possible only if the will’s ends are entirely its own and are wholly undetermined by anything from without, not even by thought. Iqbal observes that Bergson’s position ends in an insurmountable dualism of will and thought. Bergson’s view, Iqbal argues in particular, rests on an untenable conception of what an end consists in. An end, correctly understood, is framed by a thinking faculty, the intellect, and not by an extra-rational faculty. The present section outlines Iqbal’s argument for this claim, which rests on the premise that ends possess a certain unity, one which is intrinsic to particular ends and holds together different ends; this unity is none other than the quantitative and qualitative unity of a thinking consciousness in its practical application. Having secured the rational origin of ends, Iqbal argues that a free will must be identical with thought, or else it will be determined from without and will not be free on that account. Freedom of the self is not a freedom from thought but a freedom in thought: it involves the ability to live a thoughtful life.
Keywords: Iqbal, freedom, will, self
Procedia PDF Downloads 70
3350 A Clustering Algorithm for Massive Texts
Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen
Abstract:
Internet users have to face a massive amount of textual data every day. Organizing texts into categories can help users dig out useful information from large-scale text collections. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their high quality on large-scale text collections. This situation is mainly attributable to the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector reconstruction based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster’s representative vector. This algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned by an iterative process. To accelerate the clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where the features are reallocated among different clusters. In this sub-process, the features that are useless for representing the cluster are removed from the cluster’s representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.
Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process
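The core idea of keeping only cluster-representative features can be illustrated with a simplified sketch: centroid-style representative vectors are recomputed each pass and their weakest features are pruned. This is a generic illustration only, not the paper's partial/overall tuning procedure or its intersection-based similarity measure; the pruning quantile and cosine similarity are assumptions.

```python
# Simplified sketch of clustering with sparse representative vectors (generic illustration).
import numpy as np

def sparse_centroid_clustering(X, k, n_iter=20, keep_ratio=0.2, seed=0):
    """X: (n_docs, n_features) tf-idf matrix with L2-normalized rows."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        sims = X @ centroids.T                       # cosine similarity (rows are unit norm)
        labels = sims.argmax(axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members) == 0:
                continue
            rep = members.mean(axis=0)               # candidate representative vector
            if np.any(rep > 0):
                cutoff = np.quantile(rep[rep > 0], 1 - keep_ratio)
                rep[rep < cutoff] = 0.0              # drop features useless for this cluster
            norm = np.linalg.norm(rep)
            centroids[c] = rep / norm if norm > 0 else rep
    return labels, centroids
```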
Procedia PDF Downloads 435
3349 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
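One common, lossless way to resample a high-resolution image into a set of low-resolution images is space-to-depth (pixel unshuffle). Treating the module's resampling step this way is an assumption for illustration only; the paper's Guided Image Reconstruction Module may define it differently.

```python
# Minimal PyTorch sketch: resample a high-resolution image into a set of low-resolution images.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 512, 1024)             # a road-scene image batch (N, C, H, W)
r = 2                                         # downscale factor
low_res_set = F.pixel_unshuffle(x, r)         # -> (1, 3*r*r, 256, 512): r*r low-res planes per channel
print(low_res_set.shape)

# The inverse reassembles the original resolution without information loss, which is what
# lets a lightweight branch process the low-resolution set cheaply:
restored = F.pixel_shuffle(low_res_set, r)    # back to (1, 3, 512, 1024)
assert torch.allclose(restored, x)
```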
Procedia PDF Downloads 23
3348 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach
Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva
Abstract:
The rapid increase of renewable energy in Italy is led by wind and solar installations. The 2017 Italian energy strategy foresees a further development of these sustainable technologies, especially solar. This fact has resulted in new opportunities, challenges, and different problems to deal with. The growth of renewables makes it possible to meet the European requirements regarding energy and environmental policy, but these types of sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability in the voltage profile and increasing uncertainty in energy reserve scheduling. The increasing renewable production must be considered with more and more attention, especially by the Transmission System Operator (TSO). The TSO, in fact, every day provides orders on energy dispatch, once the market outcome has been determined, over extended areas defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. Accurate hourly renewable power forecasting for the day ahead over these extended areas brings an improvement both in dispatching and in reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented, and its performance is analysed. The implementation is carried out by means of a numerical weather prediction model coupled with statistical post-processing in order to derive the power forecast on the basis of the meteorological projection. The weather forecast is obtained from the limited-area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble technique (AN). This statistical approach forecasts the production using a probability distribution of the measured production registered in the past when the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF) and direct normal (DNI) irradiation, together with the corresponding azimuth and zenith solar angles. These are, in fact, the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set consists of historical Numerical Weather Prediction (NWP) forecasts at 12 UTC for the GHI, DIF and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in each area without information about the technological characteristics of all the solar parks present in the area; besides, this information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
Keywords: analog ensemble, electricity market, PV forecast, solar energy
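The analog-ensemble step can be sketched as follows: for each new NWP forecast, find the k past forecasts of the predictors (GHI, DIF, DNI, zenith, azimuth) most similar to it and use the corresponding measured zonal productions as the predictive distribution. The predictor weighting, the value of k and the returned percentiles are illustrative assumptions, not the operational tool's settings.

```python
# Minimal numpy sketch of an analog ensemble forecast for one zone and one hour.
import numpy as np

def analog_ensemble(new_pred, hist_pred, hist_power, k=25, weights=None):
    """new_pred: (n_feat,) predictors for the target hour.
    hist_pred: (n_hist, n_feat) past NWP predictors; hist_power: (n_hist,) measured MW."""
    w = np.ones(new_pred.shape[0]) if weights is None else np.asarray(weights)
    std = hist_pred.std(axis=0) + 1e-9                 # normalize each predictor
    dist = np.sqrt(((w * (hist_pred - new_pred) / std) ** 2).sum(axis=1))
    analogs = np.argsort(dist)[:k]                     # k most similar past situations
    members = hist_power[analogs]                      # ensemble of observed productions
    return {"mean": members.mean(),                    # deterministic forecast
            "p10": np.percentile(members, 10),         # uncertainty band
            "p90": np.percentile(members, 90)}
```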
Procedia PDF Downloads 158
3347 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm
Authors: Ping Bo, Meng Yunshan
Abstract:
Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, the disadvantage of SST data is a high percentage of missing data, which is mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing the missing data and has been widely used in the oceanographic field. The reconstruction of SST images within a long time series using DINEOF can cause large discontinuities, and one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Based on previous research, an algorithm is presented in this paper to improve the temporal correlations in the EOF expansion. As in previous studies, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix, but the presented algorithm additionally takes into account the temporal relationship between two consecutive images used in the filter: for example, two images in the same season are more likely to be correlated than those in different seasons, hence the latter are weighted less in the filter. The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning from 1989 to 2006. The results obtained from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter
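A season-aware smoothing of the temporal covariance matrix can be sketched as below: a three-point, Laplacian-like kernel spreads weight to temporal neighbours, and neighbours belonging to a different season receive a smaller weight. The kernel form, the season definition and the numerical weights are assumptions meant only to mirror the idea described in the abstract, not the paper's exact filter.

```python
# Illustrative numpy sketch of season-aware filtering of a (T, T) temporal covariance matrix.
import numpy as np

def seasonal_weights(months, base=1.0, cross_season=0.5):
    """Weight for each consecutive pair of snapshots: smaller across a season boundary."""
    season = (np.asarray(months) % 12) // 3
    w = np.full(len(months) - 1, base)
    w[season[1:] != season[:-1]] = cross_season
    return w

def filter_temporal_covariance(C, months, alpha=0.2):
    """C: temporal covariance matrix; months: month index of each snapshot in the series."""
    T = C.shape[0]
    w = seasonal_weights(months)
    F = np.eye(T)
    for t in range(T - 1):                      # spread weight to temporal neighbours
        F[t, t + 1] += alpha * w[t]
        F[t + 1, t] += alpha * w[t]
    F /= F.sum(axis=1, keepdims=True)           # keep each row a weighted average
    return F @ C @ F.T                          # smooth along both time axes
```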
Procedia PDF Downloads 324
3346 Historiography of Wood Construction in Portugal
Authors: João Gago dos Santos, Paulo Pereira Almeida
Abstract:
The present study intends to deepen the understanding of the reasons that led to the decline and disappearance of wooden construction systems in Portugal; for that reason, their use throughout history must be analyzed. It is observed that this material was an integral part of construction systems in Europe and Portugal for centuries, and it is possible to conclude that its decline happened with the appearance of hybrid construction and, later, with the emergence and development of reinforced concrete technology. It is also verified that wood, as a constructive element and therefore an element of development, had great importance in national construction, its peak being the Pombaline period, after the 1755 earthquake. In this period, the great scarcity of materials in the metropolis led to the import of wood from Brazil for the reconstruction of Lisbon. This period is linked to an accentuated exploitation of forests, resulting in laws and royal decrees aimed at protecting them and guaranteeing the continued existence of profitable forests, crucial to the reconstruction effort. The following period, with the gradual loss of memory of the catastrophe, resulted in construction that was structurally weakened in response to a time of real estate speculation and great urban expansion. This was the moment that preceded the disappearance of wood from construction. At the beginning of the 20th century, and in the 1930s and 1940s, with the appearance and development of reinforced concrete, it became part of the great structures of the state and came to be considered a versatile material capable of resolving issues throughout the national territory. It is at this point that wood fell into disuse and practically disappeared from new works.
Keywords: construction history, construction in Portugal, construction systems, wood construction
Procedia PDF Downloads 123
3345 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery
Authors: Zeinab Jafari, Ali Sharifnezhad, Mohammad Razi, Mohammad Haghpanahi, Arash Maghsoudi
Abstract:
Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and subsequent injuries have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches. Thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF) and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images by implementing a heat mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into three health levels. Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals for estimating RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network
Procedia PDF Downloads 78
3344 Retrospective Reconstruction of Time Series Data for Integrated Waste Management
Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy
Abstract:
The development, operation and maintenance of Integrated Waste Management Systems (IWMS) essentially affect the sustainability concerns of every region. The features of such systems have a great influence on all of the components of sustainability. In order to optimize the processes, a comprehensive mapping of the variables affecting the future efficiency of the system is needed, including an analysis of the interconnections among the components and modelling of their interactions. The planning of an IWMS is based fundamentally on technical and economic opportunities and the legal framework. Modelling the sustainability and operational effectiveness of a certain IWMS is not within the scope of the present research. The complexity of the systems and the large number of variables require the use of a complex approach to model the outcomes and future risks. This complex method should be able to evaluate the logical framework of the factors composing the system and the interconnections between them. The authors of this paper studied the usability of the Fuzzy Cognitive Map (FCM) approach in modelling the future operation of IWMSs. The approach requires two input data sets. One is the connection matrix containing all the factors affecting the system in focus, with all their interconnections. The other input data set is the time series, a retrospective reconstruction of the weights and roles of the factors. This paper introduces a novel method to develop the time series by content analysis.
Keywords: content analysis, factors, integrated waste management system, time series
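Once the connection matrix and an initial state (for example, reconstructed from the time series) are available, a Fuzzy Cognitive Map propagates factor activations iteratively as in the sketch below. The sigmoid steepness, the self-memory term and the stopping tolerance are illustrative choices, not the authors' specific settings.

```python
# Minimal sketch of Fuzzy Cognitive Map simulation from a connection matrix and initial state.
import numpy as np

def run_fcm(W, a0, steepness=1.0, max_steps=100, tol=1e-5):
    """W: (n, n) connection matrix, W[i, j] = influence of factor i on factor j.
    a0: (n,) initial activation of the factors in [0, 1]."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_steps):
        a_next = 1.0 / (1.0 + np.exp(-steepness * (a @ W + a)))   # sigmoid update with self-memory
        if np.max(np.abs(a_next - a)) < tol:                      # converged to a fixed point
            return a_next
        a = a_next
    return a
```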
Procedia PDF Downloads 325
3343 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field
Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi
Abstract:
Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model that is able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added into the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on solving the magnetic induction equation. CFD results were validated against experimental data, and good agreement was achieved; the maximum relative error in extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices. Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing
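The mass transfer term described above is, in essence, a flux driven by the local concentration gradient of the transported species. The short Python sketch below illustrates that idea on a 1-D grid as a stand-in for the Fluent UDF source term; the diffusivity, grid span, and concentration profile are all assumed for illustration only.

```python
# Illustrative Fick's-law style flux from a concentration gradient on a 1-D grid.
# D, the grid, and the concentration profile are assumptions, not the paper's UDF.
import numpy as np

D = 1.0e-9                          # molecular diffusivity, m^2/s (assumed)
x = np.linspace(0.0, 200e-6, 101)   # 200-micron span across the interface region
c = 10.0 * np.exp(-x / 50e-6)       # mol/m^3, an assumed concentration profile

grad_c = np.gradient(c, x)          # dC/dx at each node
flux = -D * grad_c                  # local diffusive flux, mol/(m^2 s)
interface_node = 50
print(f"flux near the interface: {flux[interface_node]:.3e} mol/(m^2 s)")
```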
Procedia PDF Downloads 196
3342 A Sociological Study of the Potential Role of Retired Soldiers in the Post War Development and Reconstruction in Sri Lanka
Authors: Amunupura Kiriwandeiye Gedara, Asintha Saminda Gnanaratne
Abstract:
The security forces can be described as a workforce that goes beyond the role of ensuring national security and contributes to the development process of the country. In addition to combat training during their tenure, soldiers follow a variety of vocational training courses to satisfy the needs of the army, to support the development and reconstruction goals of the country, and for the betterment of society in the event of emergencies. With retirement, however, their relationship with the military is severed, and they become responsible for their own futures. The main purpose of this study was to examine how such professional capabilities can contribute to the development of the country, the current socio-economic status of retired soldiers, and the current application of the vocational skills they mastered in the army to develop and rebuild the country effectively. After analyzing the available research literature related to this field, a conceptual framework was developed; following a qualitative research methodology, data obtained from case studies and interviews were analyzed using thematic analysis. Factors influencing early retirement include a lack of understanding of benefits, delays in promotions, not being properly evaluated for work, marrying on hasty decisions, and not having enough time to spend on family and household chores. Most of the soldiers are not aware of the various programs and benefits available to retirees. They do not have a satisfactory attitude towards the retirement guidance they receive from the army at the time of retirement. Also, due to a lack of understanding of how to use their vocational capabilities to successfully pursue their retirement life, the majority are employed in temporary jobs, while some are successful in post-retirement life thanks to the training they received. Some live on pensions without engaging in any income-generating activities, and those who retire after 12 years of service face severe economic hardship as they do not receive pensions. Although they have received training in various fields, they do not use it for their benefit due to a lack of proper guidance. Although the government implements programs, retirees are not clearly aware of them. Barriers to utilization of training include the absence of a system to identify the professional skills of retired soldiers, interest in civil society affairs, exploration of opportunities in the civil and private sectors, and politicization of services. If given the opportunity, retired soldiers will be able to contribute to the development and reconstruction process. The findings of the study further show that this has many social, economic, political, and psychological benefits, not only for individuals but also for the country. Entrepreneurship training for all retired soldiers, improving officers' understanding, streamlining existing mechanisms, creating new mechanisms, setting up a separate unit for retirees, adapting them to civil society, private and non-governmental contributions, and training courses can be identified as potential means to improve the current situation. Keywords: development, reconstruction, retired soldiers, vocational capabilities
Procedia PDF Downloads 133
3341 Sensor Network Structural Integration for Shape Reconstruction of Morphing Trailing Edge
Authors: M. Ciminello, I. Dimino, S. Ameduri, A. Concilio
Abstract:
Improving aircraft efficiency is one of the key goals of aeronautics. Modern aircraft possess many advanced capabilities, such as good transportation capacity, high Mach number, high flight altitude, and increasing rate of climb. However, no aircraft can reach all of this optimised performance in a single airframe configuration. Aircraft aerodynamic efficiency varies considerably depending on the specific mission and on the environmental conditions within which the aircraft must operate. Structures that morph their shape in response to their surroundings may at first seem like the stuff of science fiction, but nature offers many examples of plants and animals that adapt to their environment. In order to ensure both the controllability and the static robustness of such complex structural systems, a monitoring network is aimed at verifying the effectiveness of the given control commands together with the elastic response. In order to obtain this kind of information, the use of an FBG sensor network is proposed in this project. The sensor network is able to measure the shape of morphing structures, which may show large, global displacements due to the non-standard architectures and materials adopted. Chord-wise variations may allow setting and chasing the best layout as a function of the particular and transforming reference state, always targeting the best aerodynamic performance. The reason why an optical sensor solution has been selected is that, while retaining a few of the drawbacks of classical systems (such as cabling, continuous deployment, and so on), fibre optic sensors may lead to a dramatic reduction in wiring mass and weight thanks to an extreme multiplexing capability. Furthermore, the use of light as the information carrier permits dealing with nimbler, non-shielded wires and avoids any kind of interference with the on-board instrumentation. The FBG-based transducers presented herein aim at monitoring the actual shape of an adaptive trailing edge. Compared to conventional systems, these transducers allow more fail-safe measurements by taking advantage of a supporting structure, hosting the FBGs, whose properties may be tailored depending on the architectural requirements and structural constraints, acting as a strain modulator. Direct strain measurement may, in fact, be difficult because of the large deformations occurring in morphing elements. A modulation transducer is then necessary to keep the measured strain inside the allowed range. In this application, the chord-wise transducer device is a cantilevered beam sliding through the spars and copying the camber line of the ATE ribs. The FBG sensor array positions are dimensioned and integrated along the path. A theoretical model describing the system behavior is implemented. To validate the design, experiments are then carried out with the purpose of estimating the functions between rib rotation and measured strain. Keywords: fiber optic sensor, morphing structures, strain sensor, shape reconstruction
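A minimal sketch of the shape-reconstruction idea behind such a transducer, under assumed geometry: strain measured by an FBG at distance c from the beam's neutral axis gives the local curvature (kappa = strain / c), and integrating the curvature twice along the cantilever yields its deflected shape. The sensor positions, strain values, and beam dimensions below are illustrative, not the actual ATE transducer design.

```python
# Assumed-geometry sketch: FBG strains -> curvature -> deflected shape of a
# cantilevered transducer beam (clamped root: zero slope and deflection).
import numpy as np

c = 1.5e-3                               # distance from neutral axis to FBG, m (assumed)
s = np.linspace(0.0, 0.3, 7)             # FBG stations along a 0.3 m beam (assumed)
eps = np.array([800, 650, 500, 370, 250, 140, 50]) * 1e-6  # illustrative strains

kappa = eps / c                          # local curvature at each station, 1/m
# trapezoidal integration of curvature -> slope, then slope -> deflection
theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
w = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(s))))
print(w)                                 # reconstructed deflection at each FBG station, m
```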
Procedia PDF Downloads 329
3340 Analysis of Wheel Lock up Effects on Skidding Distance for Heavy Vehicles
Authors: Mahdieh Zamzamzadeh, Ahmad Abdullah Saifizul, Rahizar Ramli
Abstract:
Road accidents involving heavy vehicles have shown worrying trends and, year after year, have increased concern and awareness about the safety of roads and transportation, especially in developing countries like Malaysia. Road crash statistics continue to show that many factors affect the capability of a heavy vehicle to stop within a safe distance and ultimately prevent traffic crashes. Changes in road condition due to weather variations and the vehicle's dynamic specifications, such as loading condition and speed, are the main risk factors, because they affect a heavy vehicle's braking performance, can lead to loss of control and failure to stop the vehicle, and in many cases cause wheel lock up and, accordingly, skidding. Predicting heavy vehicle skidding distance is crucial for accident reconstruction and roadside safety engineers. Despite this, formal tools to study heavy vehicle skidding distance before the vehicle stops completely are very limited, and most researchers have only considered braking distance in their studies. As a possible new tool, this work presents the iterative use of vehicle dynamic simulations to study heavy vehicle-roadway interaction in order to predict wheel lock up effects on skidding distance and safety. This research addresses the influence of vehicle and road conditions on skidding distance after wheel lock up and presents a precise analysis of the skidding phenomenon. The vehicle speed, vehicle loading condition, and road friction parameters were all varied in a simulation-based analysis. In order to simulate the wheel lock up situation, a heavy vehicle model was constructed and simulated using multibody vehicle dynamics simulation software, and careful analysis was made of the conditions which caused the skidding distance to increase or decrease, using a method that predicts skidding distance as part of braking distance. Across many simulations, the results revealed a clear relation between the heavy vehicle's loading condition, the various speeds, the road coefficient of friction, and their interaction effect on the skidding distance. A number of results are presented which illustrate how heavy vehicle overloading can seriously affect the skidding distance. Moreover, the simulation results give the skid mark length, which is necessary input data for accident reconstruction involving emergency braking. Keywords: accident reconstruction, braking, heavy vehicle, skidding distance, skid mark, wheel lock up
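For context, the simplest locked-wheel relation used as a first approximation in accident reconstruction is d = v^2 / (2·mu·g); the Python sketch below evaluates it for a few assumed speeds and friction coefficients. The multibody simulations discussed in the abstract capture load-dependent effects that this simple formula ignores.

```python
# Back-of-envelope locked-wheel skid distance: d = v^2 / (2 * mu * g).
# Speeds and friction coefficients are illustrative assumptions.
g = 9.81                                  # gravitational acceleration, m/s^2

def skid_distance(speed_kmh, mu):
    """Distance travelled with wheels locked, ignoring load transfer and grade."""
    v = speed_kmh / 3.6                   # convert km/h to m/s
    return v**2 / (2.0 * mu * g)

for mu in (0.8, 0.5, 0.3):                # dry, wet, slippery road (assumed values)
    for speed in (60, 80, 100):
        print(f"mu={mu:.1f}, {speed} km/h -> {skid_distance(speed, mu):5.1f} m")
```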
Procedia PDF Downloads 498
3339 Forensics Linguistics and Phonetics: The Analysis of Language to Support Investigations
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Kaoutar Filahi
Abstract:
This study was inspired by the need to give forensic linguistics and phonetics ever greater importance and by the intention to explore these topics in an attempt to understand what the role of these disciplines really is in investigations of any nature. The goal is to analyze the achievements these subjects have been able to reach and the contribution they have made to the legal world; the analysis and study of these topics are supported by accounts of real cases that have involved forensic linguistics and phonetics. One of the most relevant cases is that of the Unabomber, an investigation that highlighted the importance this field can have in difficult and time-consuming cases such as this one. We also focus on the areas of expertise of these new branches of applied linguistics, examining the use of this discipline in Italy and abroad and showing the possible improvements the Italian state could adopt in order to catch up with countries like Great Britain. Keywords: forensic linguistics, forensic phonetics, investigation, criminalistics
Procedia PDF Downloads 93
3338 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research is about the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian and, as a comparison, the Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper, that is, the study of moral demonstration as proof in court. In this sense, the more a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and the possibility of its alteration, further amplify the problem, being essentially intimate matters that do not fit the objective, rational evidential system that normally applies to other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding personality rights, a central effort of the present paper. Methodology: The method used in the investigation phase was inductive, with the use of the comparative law method; in the data treatment phase, the inductive method was also used. Doctrinal, legislative, and jurisprudential comparison was the research technique used. Results: Beyond the peculiar characteristics of personality rights that are not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories, such as interdisciplinary support, for their secure confirmation. The traditional pragmatic theory of proof, given its markedly objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; its reconstruction for morally charged cases is therefore necessary, possibly using the “predictive theory” (and predictive facts) through algorithms in data collection and treatment. Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 109
3337 Islamic Finance: What is the Outlook for Italy?
Authors: Paolo Pietro Biancone
Abstract:
The spread of Islamic financial instruments is an opportunity to promote the integration of the immigrant population and to attract, through specific products, the wealth of sovereign funds from the "Arab" countries. However, it is important to consider the possibility of comparing a traditional finance model, which in recent times has given rise to many doubts, with an "alternative" finance model in which the ethical dimension arising from religious principles plays a central role. Keywords: banks, Europe, Islamic finance, Italy
Procedia PDF Downloads 270