Search results for: feature expanding.
71 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite Differential Time Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aimed to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantom and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
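A minimal sketch of the evaluation metric described above (relative error = RMSE between the true and estimated shear-modulus maps divided by the mean true shear modulus); the array shapes and the synthetic maps are illustrative assumptions, not the study's data.

```python
import numpy as np

def relative_error(true_map, est_map):
    """Relative error as reported: RMSE between the true and estimated
    shear-modulus maps, divided by the mean of the true shear modulus."""
    true_map = np.asarray(true_map, dtype=float)
    est_map = np.asarray(est_map, dtype=float)
    rmse = np.sqrt(np.mean((est_map - true_map) ** 2))
    return rmse / np.mean(true_map)

# Example with a synthetic 64x64 map (values in kPa, 1-15 kPa as in the study)
rng = np.random.default_rng(0)
true_map = rng.uniform(1.0, 15.0, size=(64, 64))
est_map = true_map + rng.normal(0.0, 0.3, size=(64, 64))  # hypothetical U-Net output
print(f"relative error: {relative_error(true_map, est_map):.2%}")
```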
70 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption towards energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics using confusion-matrix-based metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques
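A minimal sketch of the DTW matching step mentioned above, assuming hypothetical 1/60 Hz appliance power signatures; the appliance names, values and the pure-Python DTW implementation are illustrative and not taken from the study.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D power traces."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical 1/60 Hz power segments (watts): reference signatures and a new event
fridge_signature = np.array([0, 120, 125, 122, 118, 0], dtype=float)
kettle_signature = np.array([0, 2000, 2010, 0, 0, 0], dtype=float)
observed_segment = np.array([0, 0, 118, 124, 121, 0], dtype=float)

candidates = {"fridge": fridge_signature, "kettle": kettle_signature}
best = min(candidates, key=lambda k: dtw_distance(observed_segment, candidates[k]))
print("closest appliance model:", best)
```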
69 Field-Testing a Digital Music Notebook
Authors: Rena Upitis, Philip C. Abrami, Karen Boese
Abstract:
The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers? (2) Which features of Cadenza are most successful? (3) Which features could be improved? (4) Are student learning and motivation enhanced with the use of the Cadenza digital music notebook? The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined above. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions, by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza, and received higher results in examinations than those students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements to the tool called for by the teachers included the ability to duplicate archived lessons, allowing for split-screen viewing, and adding goal setting to the teacher window. In the concluding section, proposed modifications and their implications for self-regulated learning are discussed.
Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction
68 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India
Authors: Shubhangi Roy
Abstract:
Relational Contracts (informal understandings not enforceable by law) are a common feature of most economies. However, their dominance is higher in developing countries. Such informality of economic sectors is often co-related to lower economic growth. The aim of this paper is to investigate whether informal arrangements i.e. relational contracts are a cause or symptom of lower levels of economic and/or institutional development. The methodology followed involves an initial survey of 150 test subjects in Northern India. The subjects are all members of occupations where they frequently transact ensuring uniformity in transaction volume. However, the subjects are from varied socio-economic backgrounds to ensure sufficient variance in transaction values allowing us to understand the relationship between the amount of money involved to the method of transaction used, if any. Questions asked are quantitative and qualitative with an aim to observe both the behavior and motivation behind such behavior. An overarching similarity observed during the survey across all subjects’ responses is that in an economy like India with pervasive corruption and delayed litigation, economy participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class and gender classifications, these sanctions could, in fact, be more cumbersome for a potential rule-breaker than the legal ramifications. It, therefore, is a symptom of weak formal regulatory enforcement and dispute settlement mechanism. Additionally, the study bifurcates such informal arrangements into two separate systems - a) when it exists in addition to and augments a legal framework creating an efficient socio-legal equilibrium or; b) in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements. Instead of considering the entire gamut of such arrangements as counter-development, it helps decision-makers understand when to dismantle (latter) and when to pivot around existing informal systems (former). The paper hypothesizes that those social arrangements that support the formal legal frameworks allow for cheaper enforcement of regulations with lower enforcement costs burden on the state mechanism. On the other hand, norms which contradict legal rules will undermine the formal framework. Law infringement, in presence of these norms, will have no impact on the reputation of the business or individual outside of the punishment imposed under the law. It is especially exacerbated in the Indian legal system where enforcement of penalties for non-performance of contracts is low. In such a situation, the social norm will be adhered to more strictly by the individuals rather than the legal norms. This greatly undermines the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we will be able to expand our understanding of tools of market development beyond regulations. This will allow academics and policymakers to harness social norms for less disruptive and more lasting growth.Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms
67 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process
Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc
Abstract:
In order to meet high growth rates in baby diaper industry worldwide, the high-speed textile backsheet lamination lines have recently been introduced to the market for non-woven/film lamination applications. It is a process where two substrates are bonded to each other via hotmelt adhesive (HMA). Nonwoven (NW) lamination system basically consists of 4 components; polypropylene (PP) nonwoven, polyethylene (PE) film, HMA and applicator system. Each component has a substantial effect on the process efficiency of continuous line and final product properties. However, for a precise subject cover, we will be addressing only the main challenges and possible solutions in this paper. The NW is often produced by spunbond method (SSS or SMS configuration) and has a 10-12 gsm (g/m²) basis weight. The NW rolls can have a width and length up to 2.060 mm and 30.000 linear meters, respectively. The PE film is the 2ⁿᵈ component in TBS lamination, which is usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber based) that can be applied in a large range of viscosity ranges. The main HMA application technology in TBS lamination is the slot die application in which HMA is spread on the top of the NW along the whole width at high temperatures in the melt form. Then, the NW is passed over chiller rolls with a certain open time depending on the line speed. HMAs are applied at certain levels in order to provide a proper de-lamination strength in cross and machine directions to the entire structure. Current TBS lamination line speed and width can be as high as 800 m/min and 2100 mm, respectively. They also feature an automated web control tension system for winders and unwinders. In order to run a continuous trouble-free mass production campaign on the fast industrial TBS lines, rheological properties of HMAs and micro-properties of NWs can have adverse effects on the line efficiency and continuity. NW fiber orientation and fineness, as well as spun/melt blown composition fabric micro-level properties, are the significant factors to affect the degree of “HMA bleed through.” As a result of this problem, frequent line stops are observed to clean the glue that is being accumulated on the chiller rolls, which significantly reduces the line efficiency. HMA rheology is also important and to eliminate any bleed through the problem; one should have a good understanding of rheology driven potential complications. So, the applied viscosity/temperature should be optimized in accordance with the line speed, line width, NW characteristics and the required open time for a given HMA formulation. In this study, we will show practical aspects of potential preventative actions to minimize the HMA bleed through the problem, which may stem from both HMA rheological properties and NW spun melt/melt blown fiber characteristics.Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown
66 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization
Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo
Abstract:
Population growth and industrial development imply an increase in the energy demands and the problems caused by emissions of greenhouse effect gases, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions higher than those that define the critical point of water (temperature of 374°C and pressure of 221 bar). Methane used as a transport fuel is another important gasification product. The number of different uses of gas and energy forms that can be produced depending on the kind of material gasified and type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous to explore the possibilities to obtain streams rich in H₂ from oily wastes, which represent a major problem both for the environment and human health throughout the world. In this paper, the relative complexity of technology needed for feasible gasification process cycles is discussed with particular reference to the different feedstocks that can be used as raw material, different reactors, and energy recovery systems. For this purpose, a review of the current status of SCWG technologies has been carried out, by means of different classifications based on key features as the feed treated or the type of reactor and other apparatus. This analysis allows to improve the technology efficiency through the study of model calculations and its comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, or the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant choices of design are the reactor type and process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems in order to reduce costs of equipment and operation derived from the high temperature and pressure conditions that are necessary to convert water to the SC state, as well as to find solutions to remove corrosion and clogging of components of the reactor.Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy
65 Creating Moments and Memories: An Evaluation of the Starlight 'Moments' Program for Palliative Children, Adolescents and Their Families
Authors: C. Treadgold, S. Sivaraman
Abstract:
The Starlight Children's Foundation (Starlight) is an Australian non-profit organisation that delivers programs, in partnership with health professionals, to support children, adolescents, and their families who are living with a serious illness. While supporting children and adolescents with life-limiting conditions has always been a feature of Starlight's work, providing a dedicated program, specifically targeting and meeting the needs of the paediatric palliative population, is a recent area of focus. Recognising the challenges in providing children’s palliative services, Starlight initiated a research and development project to better understand and meet the needs of this group. The aim was to create a program which enhances the wellbeing of children, adolescents, and their families receiving paediatric palliative care in their community through the provision of on-going, tailored, positive experiences or 'moments'. This paper will present the results of the formative evaluation of this unique program, highlighting the development processes and outcomes of the pilot. The pilot was designed using an innovation methodology, which included a number of research components. There was a strong belief that it needed to be delivered in partnership with a dedicated palliative care team, helping to ensure the best interests of the family were always represented. This resulted in Starlight collaborating with both the Victorian Paediatric Palliative Care Program (VPPCP) at the Royal Children's Hospital, Melbourne, and the Sydney Children's Hospital Network (SCHN) to pilot the 'Moments' program. As experts in 'positive disruption', with a long history of collaborating with health professionals, Starlight was well placed to deliver a program which helps children, adolescents, and their families to experience moments of joy, connection and achieve their own sense of accomplishment. Building on Starlight’s evidence-based approach and experience in creative service delivery, the program aims to use the power of 'positive disruption' to brighten the lives of this group and create important memories. The clinical and Starlight team members collaborate to ensure that the child and family are at the centre of the program. The design of each experience is specific to their needs and ensures the creation of positive memories and family connection. It aims for each moment to enhance quality of life. The partnership with the VPPCP and SCHN has allowed the program to reach families across metropolitan and regional locations. In late 2019 a formative evaluation of the pilot was conducted utilising both quantitative and qualitative methodologies to document both the delivery and outcomes of the program. Central to the evaluation was the interviews conducted with both clinical teams and families in order to gain a comprehensive understanding of the impact of and satisfaction with the program. The findings, which will be shared in this presentation, provide practical insight into the delivery of the program, the key elements for its success with families, and areas which could benefit from additional research and focus. It will use stories and case studies from the pilot to highlight the impact of the program and discuss what opportunities, challenges, and learnings emerged.Keywords: children, families, memory making, pediatric palliative care, support
64 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island
Authors: J. A. Sawicki, C. Ebrahimi
Abstract:
The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered over 70% of its surface with black fusion crust and found in and recovered from Sooke Basin, near Juan de Fuca Strait, in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and of Martian rocks in Gusev Crater reported by Morris et al. suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1-mm² fossil-like white area identified in this rock on the surface of a polished cut have been examined in more detail. The low-intensity 532 nm and 633 nm beams of the Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 µm spot on the sample, and backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of the X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed the broad D band at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, the E2g mode at 1245 cm⁻¹, and the E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp3 bonds in the diamond polytype lonsdaleite, as in the study by Ovsyuk et al. The band near 1600 cm⁻¹ mostly consists of the D2 band at 1620 cm⁻¹ and not of the narrower G band at 1583 cm⁻¹ due to E2g stretching in planar sp2 bonds, which are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts. Second-order bands in diamond and other carbon structures are ascribed to combinations of bands observed in the first-order region: here 2650 cm⁻¹ as 2D, 2970 cm⁻¹ as D+G, and 3140 cm⁻¹ as 2G. Nanodiamonds are abundant in the Universe, found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. The diamonds in meteorites are presently being intensely investigated using Raman spectroscopy. Such particles can be formed by a CVD process and during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be a remnant of an alien carbon organism that transformed under shock impact to nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the Raman SHERLOCK spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.
Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra
63 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance
Authors: George Zhou, Yunchan Chen, Candace Chien
Abstract:
Kidney replacement therapy is the current standard of care for end-stage renal diseases. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF. The 6 locations are artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal). The labels were validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes. The numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.
Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning
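A minimal sketch of how a scaled ordinal location encoding can be concatenated to a flattened feature vector before the classification head, as described above; the feature dimension, scale factor handling and the PyTorch module are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Scaled ordinal encodings for the six AVF recording sites; the scale factor is the
# hyperparameter the study varies (1, 10, 100), with larger scales reported to help.
SCALE = 100
LOCATION_CODE = {"artery": 0, "arch": 1, "proximal": 2, "middle": 3, "distal": 4, "anastomosis": 5}

class MetadataHead(nn.Module):
    """Concatenate a scalar location encoding to the flattened backbone features."""
    def __init__(self, feat_dim, n_classes=2):
        super().__init__()
        self.classifier = nn.Linear(feat_dim + 1, n_classes)

    def forward(self, features, locations):
        codes = torch.tensor([[LOCATION_CODE[l] * SCALE] for l in locations],
                             dtype=features.dtype, device=features.device)
        return self.classifier(torch.cat([features, codes], dim=1))

# Hypothetical usage: 'features' stands in for the flattened ViT output (dim 768)
features = torch.randn(4, 768)
head = MetadataHead(feat_dim=768)
logits = head(features, ["artery", "anastomosis", "distal", "arch"])
print(logits.shape)  # torch.Size([4, 2])
```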
62 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century
Authors: Richard Levy, Peter Dawson
Abstract:
Ft. Conger, located in the Canadian Arctic was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, a wood framed structure established a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger was one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage site in the Canadian Arctic. This website was developed under a grant from Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present, http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by British explorer George Nares in 1875 – 76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84) with research to be conducted under the FIPY (1882 – 83). Still later Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is on the virtual reconstruction of the Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and Minolta Vivid 910 laser scanner were used to scan terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of its artifacts. Links to text, historic documents, animations, panorama images, computer games and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site when George Nares, in his ship the HMS Discovery, appeared in the harbor in 1875. With the emergence of Lt Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later in 1901, with Peary’s presence, the site is transformed again, with the huts having been built from materials salvaged from Greely’s main building. Still later in 2010, we can visit the site during its present state of deterioration and learn about the laser scanning technology which was used to document the site. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where opportunities for first-hand visitor experiences are not possible because of remote location.Keywords: 3D imaging, multimedia, virtual reality, arctic
61 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality that is needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions where the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
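A minimal sketch of the PCA step described above (extracting uncorrelated factors from a matrix of world-index returns for use as exogenous GARCHX regressors), using scikit-learn and synthetic returns as stand-ins; the comment on the variance equation only indicates how such factors typically enter a GARCHX specification and is not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical daily log-returns for a pool of world indices (rows: days, cols: indices)
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=(2500, 8))

pca = PCA(n_components=3)
factors = pca.fit_transform(returns)          # three uncorrelated common factors (T x 3)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# The columns of 'factors' would then enter the GARCHX variance equation as exogenous
# regressors, e.g. sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1} + gamma'*f2_{t-1}
```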
60 Burial Findings in Prehistory Qatar: Archaeological Perspective
Authors: Sherine El-Menshawy
Abstract:
Death, funerary beliefs and customs form an essential feature of belief systems and practices in many cultures. It is evident that during the prehistoric periods, various techniques of corpse burial and funerary rituals were conducted. Occasionally, corpses were merely buried in the sand, or in a grave where the body was placed in a contracted position (with knees drawn up under the chin and hands normally lying before the face) with mounds of sand marking the grave, or the bodies were burnt. However, the common practice demonstrable in the archaeological record was burial. The earliest graves were very simple, consisting of shallow circular or oval pits in the ground. The current study focuses on the material culture of Qatar during the prehistoric period, specifically its funerary architecture and burial practices. Since information about burial customs and funerary practices in Qatar prehistory is both scarce and fragmentary, the importance of such a study lies in answering research questions related to funerary beliefs and burial habits during the early stages of civilizational transformation in prehistoric Qatar compared with Mesopotamia; chronologically, the earliest pottery discovered in Qatar, collected from the excavations, belongs to the prehistoric Ubaid culture of Mesopotamia. This will lead to a deeper understanding of life and social status in the prehistoric period in Qatar. The research also explores the relationship between prehistoric Qatar funerary traditions and those of the neighboring cultures of Mesopotamia and Ancient Egypt, with the aim of ascertaining the distinctive aspects of prehistoric Qatar culture, the reception of classical culture and the role it played in the creation of local cultural identities in the Near East. The methodology of this study is based on published books and articles, in addition to unpublished reports of the Danish team that excavated archaeological sites in and around Doha, Qatar, from the 1950s. The study also builds on comparative material related to burial customs found in Mesopotamia. Therefore, this research: (i) advances knowledge of the burial customs of the ancient people who inhabited Qatar, a topic hitherto little known to scholars; the study will deepen understanding of the history of ancient Qatar and its culture and values, with the aim of sharing this invaluable human heritage; (ii) is of special significance for the field, since the evidence derived from the current study has great value for the study of living conditions, social structure, religious beliefs and ritual practices; (iii) draws on excavations that have brought to light burials of different categories. The graves date to the Bronze and Iron Ages. Their structure varies between mounds above the ground and burials below ground level. Evidence comes from sites such as Al-Da’asa, Ras Abruk, and Al-Khor. Painted Ubaid sherds of Mesopotamian culture have been discovered in Qatar at sites such as Al-Da’asa, Ras Abruk, and Bir Zekrit. In conclusion, no comprehensive study has been carried out, and the lack of a general synthesis of information about funerary practices is problematic. Therefore, the study will fill in the gaps in this area.
Keywords: archaeological, burial, findings, prehistory, Qatar
59 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management activities or conflicts. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be the zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows this fundamental information for post-disaster terrain comprehension to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI drone, model Phantom 4, were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify, in the study area, 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2) or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained by small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that can serve as an important decision-support tool.
Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
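A minimal sketch of a 200-tree Random Forest classification over per-pixel attributes (spectral bands, declivity, a PCA component) for the four classes HLZ 1-3 and AZ; the study ran RF in WEKA 3.8, so this scikit-learn version and the synthetic attribute table are stand-in assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-segment attribute table: RGB bands from the orthomosaic,
# declivity (slope) derived from the DSM, and the first principal component.
rng = np.random.default_rng(0)
X = rng.random((2000, 5))                 # columns: R, G, B, declivity, PCA1
y = rng.integers(0, 4, size=2000)         # classes: 0=HLZ1, 1=HLZ2, 2=HLZ3, 3=AZ

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 200 trees, mirroring the Random Forest configuration reported for WEKA 3.8
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test), zero_division=0))
```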
58 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas
Authors: A. Odoom, A. Salama, H. Ibrahim
Abstract:
Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol. This is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, the usage of crude glycerol can significantly enhance the sustainability and production of biodiesel. Reforming techniques are an approach to hydrogen production, mainly Steam Reforming (SR), Autothermal Reforming (ATR) and Partial Oxidation Reforming (POR). SR produces high hydrogen conversions and yield but is highly endothermic, whereas POR is exothermic. On the downside, POR yields less hydrogen as well as a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero. It has a relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study for an 'in-house' lab-scale experiment of ATR. Previous numerical studies on this process have used either Comsol or nodal finite difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and since the lab-scale experiment can be considered well mixed in the radial direction so that one spatial dimension suffices to capture the essential features of ATR, in this work we develop our own numerical approach using MATLAB. A continuum fixed-bed reactor is modelled using MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume formulation, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy's equation and the energy equations are solved using an operator splitting technique. Diffusion-like terms are therefore discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model
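A minimal sketch of the operator-splitting idea described above (explicit upwind advection followed by an implicit diffusion solve) on a 1-D advection-diffusion toy problem; the study's model is written in MATLAB and couples species, Darcy and energy equations, so this Python fragment only illustrates the splitting and the upwind treatment, not the reactor model.

```python
import numpy as np

def split_step(c, u, D, dx, dt):
    """One operator-splitting step for 1-D advection-diffusion:
    explicit first-order upwind advection, then implicit (backward Euler) diffusion."""
    n = len(c)
    # --- advection (explicit, upwind for u > 0); inflow boundary value held fixed ---
    c_adv = c.copy()
    c_adv[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    # --- diffusion (implicit): solve (I - dt*D*Laplacian) c_new = c_adv ---
    r = D * dt / dx**2
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    A[0, :] = 0.0
    A[0, 0] = 1.0        # simple Dirichlet boundaries for the sketch
    A[-1, :] = 0.0
    A[-1, -1] = 1.0
    return np.linalg.solve(A, c_adv)

# Hypothetical species concentration profile along the reactor axis
x = np.linspace(0.0, 1.0, 101)
c = np.exp(-((x - 0.2) / 0.05) ** 2)      # initial concentration pulse
for _ in range(200):
    c = split_step(c, u=1.0, D=1e-3, dx=x[1] - x[0], dt=0.002)
```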
57 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, tribal, and others played a part in the development of constitutions and the legal systems of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrimes, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day, a massive number of crimes occur, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and it can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asian region, unlike the Central Asian or Asia-Pacific regions, lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship are to test out several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that were aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. Statistical models and neural network models were the two main groups into which the experiments were split. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A detailed picture of each model's performance on the available data and its generalizability is provided by a comparative analysis of all the models on a comparable dataset. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using a GRU could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
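A minimal sketch of a GRU forecaster for a univariate daily series with a 7-day horizon, matching the forecast period mentioned above; the window length, hidden size and the stand-in tensors are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Minimal GRU that maps a window of past daily crime counts to the next 7 days."""
    def __init__(self, hidden=32, horizon=7):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                  # x: (batch, window, 1)
        _, h = self.gru(x)                 # h: (num_layers, batch, hidden)
        return self.head(h[-1])            # (batch, horizon)

# Hypothetical training loop on sliding windows built from the daily series
model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(16, 30, 1)                 # 16 windows of 30 days (stand-in data)
y = torch.randn(16, 7)                     # next-7-day targets
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```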
56 A Designing 3D Model: Castle of the Mall-Dern
Authors: Nanadcha Sinjindawong
Abstract:
This article discusses the design process of a community mall called Castle of The Mall-dern. The concept behind this mall is to combine elements of a medieval castle with modern architecture. The author aims to create a building that fits into the surroundings while also providing users with the vibes of the ancient era. The total area used for the mall is 4,000 square meters, with three floors. The first floor is 1,500 square meters, the second floor is 1,750 square meters, and the third floor is 750 square meters. Research Aim: The aim of this research is to design a community mall that sells ancient clothes and accessories, and to combine sustainable architectural design with the ideas of ancient architecture in an urban area with convenient transportation. Methodology: The research utilizes qualitative research methods in architectural design. The process begins with calculating the given area and dividing it into different zones. The author then sketches and draws the plan of each floor, adding the necessary rooms based on the floor areas mentioned earlier. The program "SketchUp" is used to create an online 3D model of the community mall, and a physical model is built for presentation purposes on A1 paper, explaining all the details. Findings: The result of this research is a community mall with various amenities. The first floor includes retail shops, clothing stores, a food center, and a service zone. Additionally, there is an indoor garden with a fountain and a tree for relaxation. The second and third floors feature a void in the middle, with a few stores, cafes, restaurants, and studios on the second floor. The third floor is home to the administration and security control room, as well as a community gathering area designed as a public library with a café inside. Theoretical Importance: This research contributes to the field of sustainable architectural design by combining ancient architectural ideas with modern elements. It showcases the potential for creating buildings that blend historical aesthetics with contemporary functionality. Data Collection and Analysis Procedures: The data for this research is collected through a combination of area calculation, sketching, and building a 3D model. The analysis involves evaluating the design based on the allocated area, zoning, and functional requirements for a community mall. Question Addressed: The research addresses the question of how to design a community mall with a theme of ancient Medieval and Victorian eras. It explores how to combine sustainable architectural design principles with historical aesthetics to create a functional and visually appealing space. Conclusion: In conclusion, this research successfully designs a community mall called “Castle of The Mall-dern” that incorporates elements of Medieval and Victorian architecture. The building encompasses various zones, including retail shops, restaurants, community gathering areas, and service zones. It also features an interior garden and a public library within the mall. The research contributes to the field of sustainable architectural design by showcasing the potential for combining ancient architectural ideas with modern elements in an urban setting.Keywords: 3D model, community mall, modern architecture, medieval architecture
55 Separation of Urinary Proteins with Sodium Dodecyl Sulphate Polyacrylamide Gel Electrophoresis in Patients with Secondary Nephropathies
Authors: Irena Kostovska, Katerina Tosheska Trajkovska, Svetlana Cekovska, Julijana Brezovska Kavrakova, Hristina Ampova, Sonja Topuzovska, Ognen Kostovski, Goce Spasovski, Danica Labudovic
Abstract:
Background: Proteinuria is an important feature of secondary nephropathies. The quantitative and qualitative analysis of proteinuria plays an important role in determining the types of proteinuria (glomerular, tubular and mixed), in the diagnosis and prognosis of secondary nephropathies. The damage of the glomerular basement membrane is responsible for a proteinuria characterized by the presence of large amounts of protein with high molecular weights such as albumin (69 kilo Daltons-kD), transferrin (78 kD) and immunoglobulin G (150 kD). An insufficiency of proximal tubular function is the cause of a proteinuria characterized by the presence of proteins with low molecular weight (LMW), such as retinol binding protein (21 kD) and α1-microglobulin (31 kD). In some renal diseases, a mixed glomerular and tubular proteinuria is frequently seen. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) is the most widely used method of analyzing urine proteins for clinical purposes. The main aim of the study is to determine the type of proteinuria in the most common secondary nephropathies such as diabetic, hypertensive nephropathy and preeclampsia. Material and methods: In this study were included 90 subjects: subjects with diabetic nephropathy (n=30), subjects with hypertensive nephropahty (n=30) and pregnant women with preeclampsia (n=30). We divided all subjects according to UM/CR into three subgroups: macroalbuminuric (UM/CR >300 mg/g), microalbuminuric (UM/CR 30-300 mg/g) and normolabuminuric (UM/CR<30 mg/g). In all subjects, we measured microalbumin and creatinine in urine with standard biochemical methods. Separation of urinary proteins was performed by SDS-PAGE, in several stages: linear gel preparation (4-22%), treatment of urinary samples before their application on the gel, electrophoresis, gel fixation, coloring with Coomassie blue, and identification of the separated protein fractions based on standards with exactly known molecular weight. Results: According to urinary microalbumin/creatinin ratio in group of subject with diabetic nephropathy, nine patients were macroalbuminuric, while 21 subject were microalbuminuric. In group of subjects with hypertensive nephropathy, we found macroalbuminuria (n=4), microalbuminuria (n=20) and normoalbuminuria (n=6). All pregnant women with preeclampsia were macroalbuminuric. Electrophoretic separation of urinary proteins showed that in macroalbuminric patients with diabetic nephropathy 56% have mixed proteinuria, 22% have glomerular proteinuria and 22% have tubular proteinuria. In subgroup of subjects with diabetic nephropathy and microalbuminuria, 52% have glomerular proteinuria, 8% have tubular proteinuria, and 40% of subjects have normal electrophoretic findings. All patients with maroalbuminuria and hypertensive nephropathy have mixed proteinuria. In subgroup of patients with microalbuminuria and hypertensive nephropathy, we found: 32% with mixed proteinuria, 27% with normal findings, 23% with tubular, and 18% with glomerular proteinuria. In all normoalbuminruic patiens with hypertensive nephropathy, we detected normal electrophoretic findings. In group of subjects pregnant women with preeclampsia, we found: 81% with mixed proteinuria, 13% with glomerular, and 8% with tubular proteinuria. Conclusion: By SDS PAGE method, we detected that in patients with secondary nephropathies the most common type of proteinuria is mixed proteinuria, indicating both loss of glomerular permeability and tubular function. 
We can conclude that SDS-PAGE is a highly sensitive method for the detection of renal impairment in patients with secondary nephropathies.Keywords: diabetic nephropathy, preeclampsia, hypertensive nephropathy, SDS PAGE
Procedia PDF Downloads 14354 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue
Abstract:
Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was applied to chest computed tomography (CT) of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: lesions are mainly located in the dorsal periphery of the lung, predominantly in the lower lobes, and often close to the pleura. Other features are grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs and halo signs. Severe disease involves both lungs entirely, showing a white-lung appearance; air bronchograms can be seen, and a small amount of pleural effusion may be present in both pleural cavities. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly confluent and prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have potential in image analysis and diagnosis that traditional machine learning algorithms lack. Method: For the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNN) for images, as it solves the problem of training very deep CNN models. Many visual tasks can achieve excellent results through fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai is based on PyTorch, packages best practices for deep learning strategies, and helps find the best way to handle diagnosis issues. Based on the one-cycle approach of the Fastai library, the classification diagnosis of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.Keywords: COVID-19, Fastai, influenza, transfer network
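The transfer-learning workflow described above can be illustrated with a short sketch. The authors report using Fastai; the following is a minimal PyTorch approximation of the same idea, a pre-trained ResNet used as a feature extractor and fine-tuned with a one-cycle learning-rate schedule. The dataset paths, class layout and hyperparameters are assumptions, not details from the paper.

```python
# Hypothetical sketch: transfer learning with a pre-trained ResNet and a
# one-cycle learning-rate schedule for binary CT slice classification
# (COVID-19 vs. influenza). Paths and hyperparameters are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("ct_slices/train", transform=tfms)  # assumed folder layout
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# newer torchvision uses the weights enum; older versions use pretrained=True
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: COVID-19, influenza

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
epochs = 5
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=epochs, steps_per_epoch=len(train_dl))

for epoch in range(epochs):
    for x, y in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # one-cycle policy updates the learning rate every batch
```

The fastai library used by the authors wraps equivalent steps (data loading, ResNet creation, fit_one_cycle) in a few high-level calls.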
Procedia PDF Downloads 14253 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
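As a rough illustration of fine-tuning a pre-trained instance-segmentation model on a small, region-specific building dataset, and deliberately training well past the usual stopping point, the sketch below follows the standard torchvision Mask R-CNN fine-tuning recipe. The data loader, class count and epoch budget are assumptions, not details taken from this work.

```python
# Hypothetical sketch: fine-tuning a pre-trained Mask R-CNN on a small,
# open-source building-footprint dataset, intentionally overfitting to the
# target region by training for many epochs on few tiles.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# newer torchvision accepts weights="DEFAULT"; older versions use pretrained=True
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

num_classes = 2  # background + building
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# train_loader is assumed to yield (images, targets) for a small region of interest
def train(model, train_loader, epochs=100):   # many epochs on few tiles: intentional overfit
    model.train()
    for epoch in range(epochs):
        for images, targets in train_loader:
            losses = sum(model(images, targets).values())  # Mask R-CNN returns a dict of losses
            optimizer.zero_grad()
            losses.backward()
            optimizer.step()
```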
Procedia PDF Downloads 16952 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocity of particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures such as weighted makespan or processing times affected by aging or learning effects.Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
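A minimal sketch of the kind of particle swarm optimizer described above is shown below: a random-key encoding carries both an operation sequence and a machine assignment, and each particle is decoded into a schedule whose makespan is the fitness. The tiny instance data are illustrative only, and sequence-dependent setup times are omitted for brevity.

```python
# Hypothetical sketch: random-key PSO for a small flexible job-shop instance
# (compounds = jobs, mixers = machines), minimizing makespan. Not the paper's
# exact encoding; SDST and the real instance data are omitted.
import numpy as np

rng = np.random.default_rng(0)

# jobs[j] = ordered operations; each operation maps machine -> processing time
jobs = [
    [{0: 3, 1: 5}, {1: 4, 2: 6}],
    [{0: 4, 2: 3}, {0: 2, 1: 3}],
    [{1: 6, 2: 5}, {0: 3, 2: 4}],
]
ops = [(j, o) for j, job in enumerate(jobs) for o in range(len(job))]
n_ops, n_machines = len(ops), 3

def makespan(x):
    """Decode a particle (2 keys per operation) into a schedule and return its makespan."""
    seq_keys, mach_keys = x[:n_ops], x[n_ops:]
    pending = list(np.argsort(seq_keys))            # dispatch order of operation indices
    machine_ready = np.zeros(n_machines)
    job_ready = np.zeros(len(jobs))
    next_op = [0] * len(jobs)
    end_max = 0.0
    while pending:
        for k, idx in enumerate(pending):
            j, o = ops[idx]
            if o == next_op[j]:                     # respect precedence within the job
                alts = sorted(jobs[j][o].items())
                m, t = alts[int(mach_keys[idx] * len(alts)) % len(alts)]
                start = max(machine_ready[m], job_ready[j])
                machine_ready[m] = job_ready[j] = start + t
                end_max = max(end_max, start + t)
                next_op[j] += 1
                pending.pop(k)
                break
    return end_max

# Standard global-best PSO over continuous keys in [0, 1]
n_particles, dim, iters = 30, 2 * n_ops, 200
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.random((n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    vals = np.array([makespan(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", pbest_val.min())
```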
Procedia PDF Downloads 26551 Office Workspace Design for Policewomen in Assam, India: Applications for Developing Countries
Authors: Shilpi Bora, Abhirup Chatterjee, Debkumar Chakrabarti
Abstract:
Organizations in all sectors around the world are increasingly revisiting their workplace strategies with due concern for the women working therein. Limited office space and rigid work arrangements contribute to lower job satisfaction and greater work impediments for any organization. Flexible workspace strategies are indispensable to accommodate the progressive rise of modular workstations and the involvement of women. Today’s generation of employees deserves malleable office environments with employee-friendly job conditions and strategies. The workplace today is shaped by rapid organizational changes in a progressive and flexible work culture. Occupational well-being practices need to keep pace with the rapid changes in office-based work. Working at the office (workspace) in awkward postures or for long periods can cause pain, discomfort, and injury. The world is moving towards an era of globalization and progress. The 4000 women police personnel constitute less than one per cent of the total police strength of India. Many innovative fields are growing fast, and it is important to accommodate women in those arenas. Outdated trends should be set aside in favour of fresh opportunities and possibilities for development and success through greater involvement of women in the workplace. The notion of women policing is gaining ground throughout the world, and various countries are making serious efforts to mainstream women in policing. As the role of women in policing grows, the accessibility of general police stations for women should also be considered. Accordingly, the impact of the police station workspace on employee productivity has been widely discussed as a crucial contributor to employee satisfaction and better motivation. Thus, the present research aimed to examine the office workstation design of police stations with reference to womanhood-specific issues, in order to improve the occupational wellbeing of policewomen. Personal interviews and individual responses were collected by administering a subjective assessment questionnaire to thirty women police personnel of different ranks posted in Guwahati, Assam, India, selected by purposive non-probability sampling, to obtain their views on these issues. Scrutiny of the collected data revealed that office design has a substantial impact on policewomen's job satisfaction in the police station. In this study, the workspace was considered in terms of the set of factors that affect the individual and determine productivity. Office design factors such as furniture, noise, temperature, lighting and spatial arrangement were considered. The primary feature affecting the productivity of policewomen was the furniture used in the workspace, which was found to disturb their everyday and overall productivity. Therefore, a proper and adequate ergonomic design intervention was recommended to improve the office design for better performance. This type of study is the need of the hour to empower women and enable their talent to serve the nation. Office workspace design is also of critical importance in several other occupations where workstations need further improvement.Keywords: office workspace design, policewomen, womanhood concerns at workspace, occupational wellbeing
Procedia PDF Downloads 22550 Cultural Competence in Palliative Care
Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam
Abstract:
Hospice palliative care (HPC) is one of the most complicated philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled, each with an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in the process and goal determination of HPC. This study shows the importance of cultural elements in the establishment of effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023 to identify recent national literature connecting culture and palliative care delivery. The most frequently presented feature among the articles is the role of culture in the efficiency of HPC. It has been shown frequently that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the exclusive cultural trends in a specific location has been accompanied by significant failure rates. Accordingly, implementing a culture-wise adaptable approach is mandatory for multicultural societies. A further outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff. Thus, all the practitioners involved in HPC will recognize the importance of traditions, religions, and social habits for processing the care requirements. Cultural competency training is a telling example of this strategy in health care that has come to the aid of HPC in recent years. Another complexity of culturized HPC nowadays is the long-standing issue of racialization. Systematic and subconscious deprivation of minorities has always been an obstacle to advanced levels of care. The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This part is the most sophisticated aspect of our topic because almost all the analyses, arguments, and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles for all the involved patients. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the involved health care practitioners will be equipped with cultural competence. Consideration of the ethical and racial particularities of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected in HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC to provide optimal care across this vast land.Keywords: cultural competence, end-of-life care, hospice, palliative care
Procedia PDF Downloads 7449 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and are solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for this field, in real time. The correlation used is a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a real-time probability curve for stuck pipe incidents. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving compared with offset wells. Predicting stuck pipe problems requires a method that captures geological, geophysical and drilling data and recognizes the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and the robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
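A highly simplified sketch of the real-time prediction component might look as follows: a window of aggregated surface channels is compared against the pre-determined stuck-pipe signature of the current formation, and an alert is raised above a user-defined threshold. The channel names, the plain Pearson-correlation score and the threshold value are assumptions; the paper's actual pipeline uses WITSML feeds and a correlation-based feature selection (CFS) step.

```python
# Hypothetical sketch: correlating live surface-drilling data with a
# pre-determined stuck-pipe signature and alerting above a threshold.
import numpy as np
import pandas as pd

CHANNELS = ["hookload", "torque", "spp", "rpm"]   # assumed surface channels
THRESHOLD = 0.7                                    # assumed user-defined alert threshold

def stuck_pipe_probability(live_window: pd.DataFrame, signature: pd.DataFrame) -> float:
    """Average per-channel Pearson correlation, rescaled to [0, 1]."""
    scores = []
    for ch in CHANNELS:
        a = live_window[ch].to_numpy(dtype=float)
        b = signature[ch].to_numpy(dtype=float)[: len(a)]
        if a.std() > 0 and b.std() > 0:
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.clip((np.mean(scores) + 1) / 2, 0, 1)) if scores else 0.0

def monitor(stream, signature):
    """Consume a stream of fixed-frequency DataFrames (one per aggregation step)."""
    for window in stream:
        p = stuck_pipe_probability(window, signature)
        if p >= THRESHOLD:
            print(f"ALERT: stuck-pipe probability {p:.2f} - review recommended mitigations")
```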
Procedia PDF Downloads 22848 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles
Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail
Abstract:
In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel-cell, together with the implementation of transport policies aiming at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive Modelling System. From the transport sector perspective, some of the impacts are driven by the trend of electricity price estimated by the energy modelling system. Nevertheless, the major drivers towards a low-carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increase of public transport and car sharing services/usage) and diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a substantial reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and environment, underlining to what extent they can contribute to a decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project which has received grants from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 691685.Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy
Procedia PDF Downloads 15347 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by truncated or smoothed weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC, with their limitations, are discussed. 
Finally, useful recommendations that may help in practical model comparisons with these methods are provided.Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
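The quantities compared in the study can be written down compactly. The sketch below (not the authors' code) computes WAIC and an importance-sampling LOO approximation from a matrix of pointwise log-likelihoods over posterior draws; the raw importance weights are the reciprocals of the pointwise predictive densities, and the TIS variant caps the largest weights. Pareto smoothing (PSIS) is omitted.

```python
# Minimal numpy sketch: WAIC and importance-sampling LOO-CV approximations
# from a matrix of pointwise log-likelihoods log p(y_i | theta_s) with
# shape (S posterior draws, N observations). For production use see the
# dedicated packages (e.g. arviz or loo).
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    S = loglik.shape[0]
    lppd = logsumexp(loglik, axis=0) - np.log(S)   # pointwise log predictive density
    p_waic = loglik.var(axis=0, ddof=1)            # effective number of parameters
    elpd = lppd - p_waic
    return -2 * elpd.sum(), lppd.sum()

def is_loo(loglik, truncate=False):
    S = loglik.shape[0]
    log_w = -loglik                                # raw weights = 1 / p(y_i | theta_s)
    if truncate:                                   # TIS-LOO: cap weights at sqrt(S) * mean weight
        log_cap = logsumexp(log_w, axis=0) - np.log(S) + 0.5 * np.log(S)
        log_w = np.minimum(log_w, log_cap)
    # elpd_i = log( sum_s w_s p(y_i | theta_s) / sum_s w_s )
    elpd = logsumexp(log_w + loglik, axis=0) - logsumexp(log_w, axis=0)
    return elpd.sum()

# loglik = model.pointwise_log_likelihood(...)     # (S, N) array, model-specific (assumed helper)
```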
Procedia PDF Downloads 39246 Lifting Body Concepts for Unmanned Fixed-Wing Transport Aircrafts
Authors: Anand R. Nair, Markus Trenker
Abstract:
Lifting body concepts were conceived as early as 1917 and patented by Roy Scroggs. The idea was to use the fuselage as a lift-producing body with no or only small wings. Many of these designs were developed and even flight-tested between the 1920s and the 1970s, but the concept was not pursued further for commercial flight because, at lower airspeeds, such a configuration was incapable of producing sufficient lift for the entire aircraft. The concept presented in this contribution combines the lifting body design with a fixed wing to maximise the lift produced by the aircraft. Conventional aircraft fuselages are designed to be aerodynamically efficient, that is, to minimise drag; however, these fuselages produce minimal or negligible lift. For the design of an unmanned fixed-wing transport aircraft, many of the restrictions which are present for commercial aircraft in terms of fuselage design can be excluded, such as windows for the passengers/pilots, cabin-environment systems, emergency exits, and pressurization systems. This gives new flexibility to design fuselages which are unconventionally shaped to contribute to the lift of the aircraft. The two lifting body concepts presented in this contribution target different applications: For a fast cargo delivery drone, the fuselage is based on a scaled airfoil shape with a cargo capacity of 500 kg for Euro pallets. The aircraft has a span of 14 m and reaches 1500 km at a cruising speed of 90 m/s. The aircraft could also easily be adapted to accommodate a pilot and passengers with modifications to the internal structures, but pressurization is not included as the service ceiling envisioned for this type of aircraft is limited to 10,000 ft. The next concept to be investigated is called a multi-purpose drone, which incorporates a different type of lifting body and is a much more versatile aircraft as it will have a VTOL capability. The aircraft will have a wingspan of approximately 6 m and flight speeds of 60 m/s within the same service ceiling as the fast cargo delivery drone. The multi-purpose drone can be easily adapted for various applications such as firefighting, agricultural purposes, surveillance, and even passenger transport. Lifting body designs are not a new concept, but their effectiveness in terms of cargo transportation has not been widely investigated. Due to their enhanced lift-producing capability, lifting body designs enable the reduction of the wing area and the overall weight of the aircraft. This will, in turn, reduce the thrust requirement and ultimately the fuel consumption. The various designs proposed in this contribution will be based on the general aviation category of aircraft and will be focussed on unmanned methods of operation. These unmanned fixed-wing transport drones will feature appropriate cargo loading/unloading concepts which can accommodate large-size cargo for efficient time management and ease of operation. The various designs will be compared in performance to their conventional counterparts to understand their benefits/shortcomings in terms of design, performance, complexity, and ease of operation. The majority of the performance analysis will be carried out using industry-relevant standards in computational fluid dynamics software packages.Keywords: lifting body concept, computational fluid dynamics, unmanned fixed-wing aircraft, cargo drone
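The claim that a lift-producing fuselage allows a smaller wing can be illustrated with a back-of-the-envelope calculation. All numbers below (mass, lift coefficients, body planform area, air density) are assumed for illustration only; only the 90 m/s cruise speed is taken from the abstract.

```python
# Illustrative sketch (assumed values): wing area needed at cruise with and
# without a lift-producing fuselage, from L = 0.5 * rho * V^2 * S * CL.
RHO = 0.905          # kg/m^3, assumed air density near 10,000 ft
V = 90.0             # m/s, cruise speed of the cargo-drone concept
MASS = 3000.0        # kg, assumed all-up mass
G = 9.81
CL_WING, CL_BODY = 0.5, 0.25   # assumed cruise lift coefficients
S_BODY = 12.0                  # m^2, assumed planform area of the lifting fuselage

lift_required = MASS * G
q = 0.5 * RHO * V ** 2         # dynamic pressure
lift_body = q * S_BODY * CL_BODY
s_wing_with_body = (lift_required - lift_body) / (q * CL_WING)
s_wing_conventional = lift_required / (q * CL_WING)
print(f"wing area: {s_wing_conventional:.1f} m^2 conventional vs "
      f"{s_wing_with_body:.1f} m^2 with lifting fuselage")
```

Under these assumptions the lifting fuselage carries roughly a third of the required lift, and the wing area (and hence structural weight and drag) shrinks accordingly.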
Procedia PDF Downloads 24645 Cuban's Supply Chains Development Model: Qualitative and Quantitative Impact on Final Consumers
Authors: Teresita Lopez Joy, Jose A. Acevedo Suarez, Martha I. Gomez Acosta, Ana Julia Acevedo Urquiaga
Abstract:
Current trends in business competitiveness indicate the need to manage businesses as supply chains and not in isolation. The use of strategies aimed at maximum customer satisfaction in a network and based on inter-company cooperation contributes to successful joint results. In the Cuban economic context, the development of productive linkages to achieve integrated management of supply chains is considered a key aspect. In order to achieve this jump, it is necessary to develop action capabilities in the entities that make up the chains through a systematic procedure leading to a management model in consonance with the environment. The objective of the research is to design a model and procedure for the development of integrated management of supply chains in economic entities. The results obtained are the Model and the Procedure for the Development of the Supply Chains Integrated Management (MP-SCIM). The Model is based on the development of logistics in the network actors, joint work between companies, collaborative planning and the monitoring of a main indicator according to the end customers. The application Procedure starts from the well-founded need for development in a supply chain and focuses on training entrepreneurs as doers. The characterization and diagnosis are done to later define the design of the network and the relationships between the companies. Feedback is taken into account as a method of updating the conditions and focusing the objectives according to the final customers. The MP-SCIM is the result of systematic work with a supply chain approach in companies that have consolidated as coordinators of their network. The cases of the edible oil chain and the explosives chain for the construction sector reflect the most remarkable advances, since they have applied this approach for more than 5 years and maintain it as a general strategy for successful development. The edible oil trading company experienced a jump in sales. In 2006, the company started the analysis in order to define the supply chain, apply diagnosis techniques, define problems and implement solutions. The involvement of management and the progressive formation of performance capacities in the personnel allowed the application of tools according to the context. The company that coordinates the explosives chain for the construction sector shows adequate training, with independence and timeliness in the face of different situations and variations in its business environment. The appropriation of tools and techniques for the analysis and implementation of proposals is a characteristic feature of this case. The coordinating entity applies integrated supply chain management to its decisions based on the timely training of the necessary action capabilities for each situation. Other case studies and applications that validate these tools are also detailed in this paper; they highlight the results of generalization in quantitative and qualitative improvement for the final clients. These cases are: teaching literature in universities, agricultural products of local scope and medicine supply chains.Keywords: integrated management, logistic system, supply chain management, tactical-operative planning
Procedia PDF Downloads 15344 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry
Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal
Abstract:
The automotive industry is one of the most important industries in the world, concerning not only the economy but also world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the quality management techniques most recommended by the specific standards of the automotive industry for product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans built from the FMEA results to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. The Control Plans represent written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the FMEA and Control Plan reports required for product launch and to improve the knowledge of development teams for future projects. The solution allows the design team to enter the data required for FMEA. The actual analysis is performed using Genetic Algorithms to find the optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they are used as a means of finding solutions for multi-criteria optimization problems. In our case, the three specific FMEA risk factors are considered together with the reduction of production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in FMEA reports are automatically integrated with the other entered parameters in the Control Plan. Implementation of the solution is in the form of an application running on an intranet on two servers: one containing the analysis and plan generation engine and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to welding, laser cutting and bending processes used to manufacture chassis for buses. Advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The proposed solution, implemented with open-source tools, is a cheap alternative to other solutions on the market.Keywords: automotive industry, FMEA, control plan, automotive technology
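A toy sketch of the optimization step is given below: a genetic algorithm searches over subsets of corrective actions, trading residual RPN against implementation cost in a single weighted fitness. The failure-mode data, the assumed effect of an action (halving occurrence and detection), and the weights are invented for illustration and do not reproduce the paper's formulation.

```python
# Hypothetical sketch: a small genetic algorithm trading off residual FMEA risk
# (RPN = severity x occurrence x detection) against the cost of corrective actions.
import random

random.seed(1)

# (severity, occurrence, detection) before action; applying the action halves O and D (assumed)
failure_modes = [(8, 5, 6), (7, 4, 7), (9, 3, 5), (6, 6, 6), (8, 2, 8)]
action_cost = [400, 250, 600, 150, 300]        # assumed cost of the corrective action per mode
W_RISK, W_COST = 1.0, 0.5                      # assumed weights of the two criteria

def fitness(genome):                           # lower is better
    rpn = sum(s * (o // 2 if g else o) * (d // 2 if g else d)
              for g, (s, o, d) in zip(genome, failure_modes))
    cost = sum(c for g, c in zip(genome, action_cost) if g)
    return W_RISK * rpn + W_COST * cost

def evolve(pop_size=40, generations=100, p_mut=0.1):
    n = len(failure_modes)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n - 1)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("actions to implement:", best, "score:", fitness(best))
```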
Procedia PDF Downloads 40643 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for explicit feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.Keywords: deep learning, COVID-19 detection, NIFTI format, DICOM format
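The volume-preparation steps described above (DICOM series loading, resampling to 1 mm isotropic voxels, clipping to the (−1000, 400) HU window, resizing to 128 × 128 × 60 and normalization) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the lung-segmentation U-Net and the NIfTI branch are omitted, and the resize strategy is an assumption.

```python
# Hypothetical sketch: load a DICOM series, resample to 1 mm isotropic voxels,
# clip to the lung HU window, resize to 128 x 128 x 60 and zero-center.
import numpy as np
import SimpleITK as sitk
from scipy.ndimage import zoom

def load_and_preprocess(dicom_dir, target_shape=(60, 128, 128)):
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    img = reader.Execute()

    # resample to 1 mm x 1 mm x 1 mm
    spacing = np.array(img.GetSpacing())                 # (x, y, z) in mm
    size = np.array(img.GetSize())
    new_size = np.round(size * spacing).astype(int).tolist()
    img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), (1.0, 1.0, 1.0), img.GetDirection(),
                        -1000, img.GetPixelID())

    vol = sitk.GetArrayFromImage(img).astype(np.float32)  # array order is (z, y, x)
    vol = np.clip(vol, -1000, 400)                         # HU window
    vol = zoom(vol, [t / s for t, s in zip(target_shape, vol.shape)], order=1)
    vol = (vol - vol.mean()) / (vol.std() + 1e-6)          # normalize and zero-center
    return vol
```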
Procedia PDF Downloads 8842 Effect of Inoculation with Consortia of Plant-Growth Promoting Bacteria on Biomass Production of the Halophyte Salicornia ramosissima
Authors: Maria João Ferreira, Natalia Sierra-Garcia, Javier Cremades, Carla António, Ana M. Rodrigues, Helena Silva, Ângela Cunha
Abstract:
Salicornia ramosissima, a halophyte that grows naturally in coastal areas of the northern hemisphere, is often considered the most promising halophyte candidate for extensive crop cultivation and saline agriculture practices. The expanding interest in this plant surpasses its use as gourmet food and includes its potential application as a source of bioactive compounds for the pharmaceutical industry. Although the plant grows well in saline soils, sustainable and ecologically friendly techniques to enhance its crop production and nutritional value are still needed. The root microbiome of S. ramosissima proved to be a source of taxonomically diverse plant growth-promoting bacteria (PGPB). Halotolerant strains of Bacillus, Salinicola, Pseudomonas, and Brevibacterium, among other genera, exhibit a broad spectrum of plant-growth promotion traits [e.g., indole-3-acetic acid (IAA), 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase, siderophores, phosphate solubilization, nitrogen fixation] and express a wide range of extracellular enzyme activities. In this work, three plant growth-promoting bacteria strains (Brevibacterium casei EB3, Pseudomonas oryzihabitans RL18, and Bacillus aryabhattai SP20) isolated from the rhizosphere and the endosphere of S. ramosissima roots from different saltmarshes along the Portuguese coast were inoculated in S. ramosissima seeds. Plants germinated from inoculated seeds were grown for three months in pots filled with a mixture of perlite and estuarine sediment (1:1) in greenhouse conditions and later transferred to a growth chamber, where they were maintained for two months with controlled photoperiod, temperature, and humidity. Pots were placed on trays containing the irrigation solution (20% Hoagland's solution supplemented with 10‰ marine salt). Before reaching the flowering stage, plants were collected, and the fresh and dry weight of aerial parts was determined. Non-inoculated seeds were used as a negative control. Selected dried stems from the most promising treatments were later analyzed by GC-TOF-MS for primary metabolite composition. The efficiency of inoculation and persistence of the inoculum was assessed by Next Generation Sequencing. Inoculations with single strain EB3 and co-inoculations with EB3+RL18 and EB3+RL18+SP20 (All treatment) resulted in significantly higher biomass production (fresh and dry weight) compared to non-inoculated plants. Considering fresh weight alone, inoculation with isolates SP20 and RL18 also caused a significant positive effect. Combined inoculation with the consortia SP20+EB3 or SP20+RL18 did not significantly improve biomass production. The analysis of the profile of primary metabolites will provide clues on the mechanisms by which the growth-enhancement effect of the inoculants operates in the plants. These results support promising prospects for the use of rhizospheric and endophytic PGPB as biofertilizers, reducing the environmental impacts and operational costs of agrochemicals and contributing to the sustainability and cost-effectiveness of saline agriculture. Acknowledgments: This work was supported by project Rhizomis PTDC/BIA-MIC/29736/2017 financed by Fundação para a Ciência e Tecnologia (FCT) through the Regional Operational Program of the Center (02/SAICT/2017) with FEDER funds (European Regional Development Fund, FNR, and OE) and by FCT through CESAM (UIDP/50017/2020 + UIDB/50017/2020), LAQV-REQUIMTE (UIDB/50006/2020). 
We also acknowledge FCT/FSE for the financial support to Maria João Ferreira through a PhD grant (PD/BD/150363/2019). We are grateful to Horta dos Peixinhos for their help and support during sampling and seed collection. We also thank Glória Pinto for her collaboration in providing us with the use of the growth chambers during the final months of the experiment, and Enrique Mateos-Naranjo and Jennifer Mesa-Marín of the Departamento de Biología Vegetal y Ecología, University of Sevilla, for their advice regarding the growth of salicornia plants in greenhouse conditions.Keywords: halophytes, PGPB, rhizosphere engineering, biofertilizers, primary metabolite profiling, plant inoculation, Salicornia ramosissima
Procedia PDF Downloads 160