Search results for: uncertain lead times and processing times
492 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load-monitoring method used for disaggregation purposes: it identifies individual appliances by analysing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on its power demand and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator software (LPG), which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the occupants of a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed on low-sampling-rate data in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature that are based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
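A minimal sketch of the kind of unsupervised, template-based identification described above: a detected power segment is compared against generic appliance models with Dynamic Time Warping and assigned to the nearest template. The templates, the segment and the plain-NumPy DTW implementation are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch (not the authors' code): matching a detected power segment
# against generic appliance templates with Dynamic Time Warping (DTW).
# The signatures below are made-up 1/60 Hz power traces, purely for illustration.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical general appliance models (power in watts, one sample per minute)
templates = {
    "fridge": np.array([0, 120, 120, 120, 0, 0, 120, 120, 0], dtype=float),
    "kettle": np.array([0, 2000, 2000, 0, 0, 0, 0, 0, 0], dtype=float),
    "lamp":   np.array([0, 60, 60, 60, 60, 60, 0, 0, 0], dtype=float),
}

# A detected event segment extracted between two state transitions
segment = np.array([0, 115, 125, 118, 0, 0, 130, 110, 0], dtype=float)

scores = {name: dtw_distance(segment, tpl) for name, tpl in templates.items()}
print(min(scores, key=scores.get), scores)   # unsupervised nearest-template label
```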
Procedia PDF Downloads 82
491 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology involves the assembly of electrode/separator/electrode stacks by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are only used for the electrode process. These processes are suited for large-scale production of batteries and perfectly adapted to plenty of application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped and efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Direct fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function), on a common substrate to produce fully integrated, thin and flexible new devices. To fulfil those specifications, a high-resolution printing process has been selected: aerosol jet printing. In order to fit the parameters of this process, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process, in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, then flash sintering was applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were printed successfully. For anodes, LTO and graphite were shown to be good candidates for the fully printed battery. The electrochemical performance of these materials was evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. First, we will synthesize and formulate new specific metal-oxide-based nanomaterials. Then a fully printed device will be produced and its electrochemical performance will be evaluated.
Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
Procedia PDF Downloads 251
490 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft and the onboard processing capabilities. Space mission errors and uncertainties are grouped into categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time invariant or time varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Third, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, which could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gap.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
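As a concrete illustration of the generic constrained MPC formulation summarized above (LTI prediction model, quadratic cost, linear input/state bounds), the sketch below poses a single-axis, double-integrator rendezvous-style problem with the cvxpy modelling library. The dynamics, weights and bounds are invented values, and cvxpy is an assumed tool, not the authors' implementation.

```python
# Minimal sketch of a generic constrained MPC step: LTI prediction model,
# quadratic cost, linear input/state bounds. All numbers are illustrative.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                      # step [s], prediction horizon
A = np.array([[1, dt], [0, 1]])      # position / velocity (one axis)
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])             # state weight
R = np.array([[0.1]])                # input weight
x0 = np.array([100.0, 0.0])          # 100 m from target, at rest

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 0.5,          # acceleration/thrust bound
               cp.abs(x[1, k + 1]) <= 2.0]      # approach-velocity bound
cp.Problem(cp.Minimize(cost), constr).solve()
print("first control move:", u.value[:, 0])     # applied, then re-solved (receding horizon)
```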
Procedia PDF Downloads 110
489 Timely Palliative Screening and Interventions in Oncology
Authors: Jaci Marie Mastrandrea, Rosario Haro
Abstract:
Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening and intervention is directly associated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet does not have protocols in place for identifying patients with palliative needs or a standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline. The “Palliative and Supportive Needs Assessment'' (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms, and data was collected over a period of eight weeks. Patients were screened by nurses on the SLCTC oncology treatment team. Nurses responsible for screening patients received an educational inservice prior to implementation. Patients with a PSNA score of three or higher received an educational handout on the topic of PC and education about PC and symptom management. A score of five or higher indicates that PC referral is strongly recommended, and the patient’s EHR is flagged for the oncology provider to review orders for PC referral. The PSNA tool was approved by Sky Lakes administration for full integration into Epic-Beacon. The project lead collaborated with the Sky Lakes’ information systems team and representatives from Epic on the tool’s aesthetic and functionality within the Epic system. SLCTC nurses and physicians were educated on how to document the PSNA within Epic and where to view results. Results: Prior to the implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the past year, excluding referrals to hospice. Data was collected from the completed screening assessments of 100 patients under active treatment at the SLCTC. Seventy-three percent of patients met criteria for PC referral with a score greater than or equal to three. Of those patients who met referral criteria, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients that were not referred to PC upon meeting criteria were flagged in EPIC for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the scale of 0-12. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project supports the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.Keywords: oncology, palliative and supportive care, symptom management, outpatient oncology, palliative screening tool
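The referral thresholds described above (a score of three or higher triggers palliative care education, five or higher flags the chart for a referral order, and non-referred patients are queued for re-screening) amount to simple triage logic; a hedged sketch is given below. The field names, thresholds-as-constants and example record are illustrative assumptions, not the SLCTC tool itself.

```python
# Sketch of the screening-threshold logic described above. Names and the
# example record are hypothetical; only the thresholds come from the abstract.
from dataclasses import dataclass

EDUCATION_THRESHOLD = 3
REFERRAL_THRESHOLD = 5

@dataclass
class ScreeningResult:
    patient_id: str
    psna_score: int          # 0-12 scale reported in the abstract

def triage(result: ScreeningResult) -> list[str]:
    actions = []
    if result.psna_score >= EDUCATION_THRESHOLD:
        actions.append("provide palliative care education handout")
    if result.psna_score >= REFERRAL_THRESHOLD:
        actions.append("flag EHR for provider to review palliative referral order")
    if not actions:
        actions.append("re-screen at next treatment visit")
    return actions

print(triage(ScreeningResult("example-001", psna_score=6)))
```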
Procedia PDF Downloads 112
488 Biomaterials Solutions to Medical Problems: A Technical Review
Authors: Ashish Thakur
Abstract:
This technical paper was written with a view to focusing on biomaterials and their various applications in modern industries. The author tries to elaborate not only the medical applications but, in fact, plenty of applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues, so the interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. It also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial–tissue interface. The paper also highlights the various techniques required for surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis. Here, the diversity and complexity of the different scenarios where medical devices are currently utilised are explored, providing an overview of the emblematic fields of application and of the requirements for anti-infective biomaterials. In addition, the paper introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50; its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages in the use of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.
Keywords: nanomedicine, tissue, infections, biomaterials
Procedia PDF Downloads 264
487 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications
Authors: Swati Mishra
Abstract:
In recent years, there has been an explosion in gold nanoparticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method has been used to functionalize variously shaped GNPs, such as nanostars, with molecules like ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal, antigen-antibody reactive, and used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids on a target protein, which is of eminent importance during the pandemic, especially where long-term treatments of COVID-19 bring many drug side effects. The photochemical method is adopted as it provides low thermal load, selective reactivity, selective activation, and controlled radiation in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue or red shift in the peak will be observed after UV irradiation, confirming efficient ninhydrin binding. The bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, will then lead to the formation of Ruhemann's purple. A common method of GNP production is citrate reduction of Au[III] derivatives such as aurochloric acid (HAuCl₄) in water to Au[0] through a one-step synthesis of size-tunable GNPs. The following reagents were prepared to validate the approach: Reagent A (solution 1): 0.0175 grams of ninhydrin in 5 ml of Millipore water; Reagent B: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 1; Reagent C: 1 µl of gold nanostars in 3 ml of solution 1; Reagent D: 6 µl of cetrimonium bromide (CTAB) in 3 ml of solution 1; Reagent E: 1 µl of gold nanostars in 3 ml of ethanol; Reagent F: 30 µl of HAuCl₄·3H₂O in 3 ml of ethanol; Reagent G: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 2; Reagent H (solution 2): 0.0087 grams of ninhydrin in 5 ml of Millipore water; Reagent I: 30 µl of HAuCl₄·3H₂O in 3 ml of water. The reagents were irradiated at 254 nm for 15 minutes, followed by UV-visible spectroscopy. The wavelength was selected based on the one reported for excitation of the similar molecule phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctly at 567.25 nm and 983.9 nm. Although it is difficult to state exactly which chemical reaction is taking place, ATR-FTIR of the reagents will confirm that ninhydrin does not form Ruhemann's purple in the absence of amino acids. Therefore, in these experiments we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation in the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action.
Keywords: gold nanostars, ninhydrin, photochemical method, UV-visible spectroscopy
Procedia PDF Downloads 148
486 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been successfully used to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to uncover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the big size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and also for systems-biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases, gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view on the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
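A minimal sketch of SOM-based "portrayal": gene profiles are organized on a two-dimensional grid so that each sample can be rendered as a small image of metagene expression. It uses the third-party MiniSom package and random numbers in place of real omics data, both as assumptions; it is not the authors' pipeline.

```python
# Generic SOM portrayal sketch: train on gene profiles, then summarize each
# sample as a small grid image ("portrait"). Data and sizes are toy values.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
n_genes, n_samples = 2000, 40
expr = rng.normal(size=(n_genes, n_samples))   # stand-in expression matrix

# Train the SOM on gene profiles (each gene described across all samples),
# so that co-regulated genes cluster into the same map unit ("metagene").
som = MiniSom(15, 15, n_samples, sigma=2.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(expr)
som.train_random(expr, num_iteration=10000)

# One 15x15 "portrait" per sample: mean expression of the genes in each unit.
portraits = np.zeros((n_samples, 15, 15))
counts = np.zeros((15, 15))
for g_idx, (i, j) in enumerate(som.winner(g) for g in expr):
    portraits[:, i, j] += expr[g_idx]
    counts[i, j] += 1
portraits /= np.maximum(counts, 1)             # broadcast over samples
print(portraits.shape)                         # (40, 15, 15): one image per patient
```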
Procedia PDF Downloads 148
485 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scan, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region-of-interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
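A hedged sketch of the tile-level transfer-learning classifier described in the methodology, using Keras' InceptionResNetV2 with ImageNet weights. The input size, the frozen base and the binary tumor/normal head are assumptions, and the CDAE-stack embedding, saliency detection and spatial pyramid pooling stages are omitted for brevity.

```python
# Sketch of a tile-level transfer-learning classifier (not the full pipeline).
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                      # fine-tune later by unfreezing top blocks

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # tumor-tile probability
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
# Tile probabilities from model.predict() can then be stitched into a heatmap
# over the WSI to localize metastases and to score the whole slide.
```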
Procedia PDF Downloads 130
484 Numerical Study of Leisure Home Chassis under Various Loads by Using Finite Element Analysis
Authors: Asem Alhnity, Nicholas Pickett
Abstract:
The leisure home industry is experiencing an increase in sales due to the rise in popularity of staycations. However, there is also a demand for improvements in thermal and structural behaviour from customers. Existing standards and codes of practice outline the requirements for leisure home design. However, there is a lack of expertise in applying Finite Element Analysis (FEA) to complex structures in this industry. As a result, manufacturers rely on standardized design approaches, which often lead to excessively engineered or inadequately designed products. This study aims to address this issue by investigating the impact of the habitation structure on chassis performance in leisure homes. The aim of this research is to comprehensively analyse the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, including both the habitation structure and the chassis, this study seeks to develop a novel framework for designing and analysing leisure homes. The objectives include material reduction, enhancing structural stability, resolving existing design issues, and developing innovative modular and wooden chassis designs. The methodology used in this research is quantitative in nature. The study utilizes FEA to analyse the performance of leisure home chassis under various loads. The analysis procedures involve running the FEA simulations on the numerical model of the leisure home chassis. Different load scenarios are applied to assess the stress and deflection performance of the chassis under various conditions. FEA is a numerical method that allows for accurate analysis of complex systems. The research utilizes flexible mesh sizing to calculate small deflections around doors and windows, with large meshes used for macro deflections. This approach aims to minimize run-time while providing meaningful stresses and deflections. Moreover, it aims to investigate the limitations and drawbacks of the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load. The findings of this study indicate that the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load overlooks the strengthening generated from the habitation structure. By employing FEA on the entire unit, it is possible to optimize stress and deflection performance while achieving material reduction and enhanced structural stability. The study also introduces innovative modular and wooden chassis designs, which show promising weight reduction compared to the existing heavily fabricated lattice chassis. In conclusion, this research provides valuable insights into the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, the study demonstrates the importance of considering the strengthening generated from the habitation structure in chassis design. The research findings contribute to advancements in material reduction, structural stability, and overall performance optimization. The novel framework developed in this study promotes sustainability, cost-efficiency, and innovation in leisure home design.Keywords: static homes, caravans, motor homes, holiday homes, finite element analysis (FEA)
Procedia PDF Downloads 101
483 A Qualitative Study of Experienced Early Childhood Teachers Resolving Workplace Challenges with Character Strengths
Authors: Michael J. Haslip
Abstract:
Character strength application improves performance and well-being in adults across industries, but the potential impact of character strength training among early childhood educators is mostly unknown. To explore how character strengths are applied by early childhood educators at work, a qualitative study was completed alongside professional development provided to a group of in-service teachers of children ages 0-5 in Philadelphia, Pennsylvania, United States. Study participants (n=17) were all female. The majority of participants were non-white, in full-time lead or assistant teacher roles, had at least ten years of experience and a bachelor’s degree. Teachers were attending professional development weekly for 2 hours over a 10-week period on the topic of social and emotional learning and child guidance. Related to this training were modules and sessions on identifying a teacher’s character strength profile using the Values in Action classification of 24 strengths (e.g., humility, perseverance) that have a scientific basis. Teachers were then asked to apply their character strengths to help resolve current workplace challenges. This study identifies which character strengths the teachers reported using most frequently and the nature of the workplace challenges being resolved in this context. The study also reports how difficult these challenges were to the teachers and their success rate at resolving workplace challenges using a character strength application plan. The study also documents how teachers’ own use of character strengths relates to their modeling of these same traits (e.g., kindness, teamwork) for children, especially when the nature of the workplace challenge directly involves the children, such as when addressing issues of classroom management and behavior. Data were collected on action plans (reflective templates) which teachers wrote to explain the work challenge they were facing, the character strengths they used to address the challenge, their plan for applying strengths to the challenge, and subsequent results. Content analysis and thematic analysis were used to investigate the research questions using approaches that included classifying, connecting, describing, and interpreting data reported by educators. Findings reveal that teachers most frequently use kindness, leadership, fairness, hope, and love to address a range of workplace challenges, ranging from low to high difficulty, involving children, coworkers, parents, and for self-management. Teachers reported a 71% success rate at fully or mostly resolving workplace challenges using the action plan method introduced during professional development. Teachers matched character strengths to challenges in different ways, with certain strengths being used mostly when the challenge involved children (love, forgiveness), others mostly with adults (bravery, teamwork), and others universally (leadership, kindness). Furthermore, teacher’s application of character strengths at work involved directly modeling character for children in 31% of reported cases. The application of character strengths among early childhood educators may play a significant role in improving teacher well-being, reducing job stress, and improving efforts to model character for young children.Keywords: character strengths, positive psychology, professional development, social-emotional learning
Procedia PDF Downloads 105
482 Numerical Study of Homogeneous Nanodroplet Growth
Authors: S. B. Q. Tran
Abstract:
Drop condensation is the phenomenon in which tiny drops form when oversaturated vapour present in the environment condenses on a substrate and makes the droplets grow. Recently, this subject has received much attention due to its applications in many fields, such as thin-film growth, heat transfer, recovery of atmospheric water and polymer templating. In the literature, many papers have investigated macroscopic droplet growth, with radii on the millimeter scale, both theoretically and experimentally. However, few papers about nanodroplet condensation, especially theoretical work, are found in the literature. In order to understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the role of the droplet shape and monomer diffusion on drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. Besides, the effect of pinning and de-pinning behaviour is also studied. We investigate the axisymmetric homogeneous growth of a 10–100 nm single water nanodroplet on a substrate surface. The main mechanism of droplet growth is attributed to the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under assumptions of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth with constant contact angle, our numerical results show that the initial droplet growth is transient and governed by monomer diffusion. When the flux at the far field is small, the droplet initially grows by the diffusion of the water monomers initially available on the substrate and afterwards by the flux at the far field. In the steady late stage, the growth of droplet radius and droplet height follows a power law of 1/3, which is unaffected by the substrate disjoining pressure and contact angle. However, it is found that the droplet grows faster in the radial direction than in the height direction when disjoining pressure and contact angle increase. The simulations also reveal the effect of the computational domain size during the transient growth period: when the computational domain is larger, the monomer mass entering the free substrate domain is higher, so the mass entering the droplet is also higher, and the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulation shows that the disjoining pressure does not affect the 1/3 power law for droplet radius growth in the steady state. However, the disjoining pressure modifies the growth rate of the droplet height, which then follows a power law of 1/4. We demonstrate how spatial depletion of monomers could lead to a growth arrest of the nanodroplet, as observed experimentally.
Keywords: augmented Young-Laplace equation, contact angle, disjoining pressure, nanodroplet growth
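For reference, a generic form of the augmented Young–Laplace balance mentioned above is written out below for an axisymmetric profile h(r). Sign conventions vary in the literature, and the disjoining-pressure model Π(h) shown is a common van der Waals placeholder rather than the specific form used in this study.

```latex
% Generic augmented Young–Laplace balance for an axisymmetric profile h(r).
% \Pi(h) below is a common van der Waals placeholder, not this study's model.
\begin{align}
  \Delta p &= \gamma\,\kappa(h) + \Pi(h),\\
  \kappa(h) &= \frac{1}{r}\,\partial_r\!\left(\frac{r\,\partial_r h}
               {\sqrt{1+(\partial_r h)^{2}}}\right),\\
  \Pi(h) &= \frac{A_H}{6\pi h^{3}},
\end{align}
% with the reported late-stage scalings R(t) \sim t^{1/3} and, for pinned
% contact lines, h_{\max}(t) \sim t^{1/4}.
```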
Procedia PDF Downloads 273
481 Designating and Evaluating a Healthy Eating Model at the Workplace: A Practical Strategy for Preventing Non-Communicable Diseases in Aging
Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama
Abstract:
Introduction: The aging process has been linked to a wide range of non-communicable diseases that cause a loss of health-related quality of life. This process can be worsened if an active and healthy lifestyle is not followed by adults, especially in the workplace. This setting not only may create a sedentary lifestyle but will lead to obesity and overweight in the long term and create unhealthy and inactive aging. In addition, eating habits are always known to be associated with active aging. Therefore, it is very valuable to know the eating patterns of people at work in order to detect and prevent diseases in the coming years. This study aimed to design and test a model to improve eating habits among employees at an industrial complex as a practical strategy. Material and method: The present research was a mixed-method study with a subsequent exploratory design which was carried out in two phases, qualitative and quantitative, in 2018 year. In the first step, participants were selected by purposive sampling (n=34) to ensure representation of different job roles; hours worked, gender, grade, and age groups, and semi-structured interviews were used. All interviews were conducted in the workplace and were audio recorded, transcribed verbatim, and analyzed using the Strauss and Corbin approach. The interview question was, “what were their experiences of eating at work, and how could these nutritional habits affect their health in old age.” Finally, a total of 1500 basic codes were oriented at the open coding step, and they were merged together to create the 17 classes, and six concepts and a conceptual model were designed. The second phase of the study was conducted in the form of a cross-sectional study. After verification of the research tool, the developed questionnaire was examined in a group of employees. In order to test the conceptual model of the study, a total of 500 subjects were included in psychometry. Findings: Six main concepts have been known, including 1. undesirable control of stress, 2. lack of eating knowledge, 3. effect of the social network, 4. lack of motivation for healthy habits, 5. environmental-organizational intensifier, 6. unhealthy eating behaviors. The core concept was “Motivation Loss to do preventive behavior.” The main constructs of the motivational-based model for the promotion of eating habits are “modification and promote of eating habits,” increase of knowledge and competency, convey of healthy nutrition behavior culture and effecting of behavioral model especially in older age, desirable of control stress. Conclusion: A key factor for unhealthy eating behavior at the workplace is a lack of motivation, which can be an obstacle to conduct preventive behaviors at work that can affect the healthy aging process in the long term. The motivational-based model could be considered an effective conceptual framework and instrument for designing interventions for the promotion to create healthy and active aging.Keywords: aging, eating habits, older age, workplace
Procedia PDF Downloads 101
480 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance
Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow
Abstract:
The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g., the use of splints and casts), which can often increase the chances of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient's access to healthcare while decreasing the number of visits to the patient's clinician. The sensor device may thereby improve the quality of the patient's care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient's care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient's lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), the anterior side of each thigh and tibia, and the dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors in a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient's gait. The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each represented the patient's movements. Further, the sensors were found to be able to observe gait abnormalities caused by the addition of a small amount of weight (4.5-9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation
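A small sketch (in Python rather than the MATLAB used in the project) of one way to obtain a sagittal-plane knee angle from the pitch of the thigh and shank IMUs. The accelerometer-only pitch estimate, the sample values and the sign convention are simplifying assumptions for illustration.

```python
# Sketch (not the project's MATLAB code) of estimating a sagittal-plane knee
# angle from two IMU pitch estimates using static accelerometer readings.
import numpy as np

def pitch_from_accel(ax, ay, az):
    """Pitch (rad) about the mediolateral axis from a static accelerometer reading."""
    return np.arctan2(ax, np.sqrt(ay**2 + az**2))

# Hypothetical accelerometer samples [g] from the thigh and shank sensors
thigh = np.array([[0.26, 0.02, 0.96],
                  [0.50, 0.01, 0.86]])
shank = np.array([[0.10, 0.03, 0.99],
                  [0.71, 0.02, 0.70]])

thigh_pitch = pitch_from_accel(*thigh.T)
shank_pitch = pitch_from_accel(*shank.T)
knee_flexion_deg = np.degrees(thigh_pitch - shank_pitch)  # relative segment angle
print(knee_flexion_deg)
```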
Procedia PDF Downloads 289
479 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique
Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich
Abstract:
In annular gas-liquid flow, liquid flows as a film along pipe walls sheared by high-velocity gas stream. Film surface is covered by large-scale disturbance waves which affect pressure drop and heat transfer in the system and are necessary for entrainment of liquid droplets from film surface into the core of gas stream. Disturbance waves are a highly complex and their properties are affected by numerous parameters. One of such aspects is flow development, i.e., change of flow properties with the distance from the inlet. In the present work, this question is studied using brightness-based laser-induced fluorescence (BBLIF) technique. This method enables one to perform simultaneous measurements of local film thickness in large number of points with high sampling frequency. In the present experiments first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with temporal resolution of 10 kHz and spatial resolution of 0.5 mm. Thus, spatiotemporal evolution of film surface can be investigated, including scenarios of formation, acceleration and coalescence of disturbance waves. The behaviour of disturbance waves' velocity depending on phases flow rates and downstream distance was investigated. Besides measuring the waves properties, the goal of the work was to investigate the interrelation between disturbance waves properties and integral characteristics of the flow such as interfacial shear stress and flow rate of dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of shear stress, linearly decays with downstream distance. This lack of acceleration which may even lead to deceleration is related to liquid entrainment. Flow rate of disperse phase linearly grows with downstream distance. During entrainment events, liquid is extracted directly from disturbance waves, reducing their mass, area of interaction to the gas shear and, hence, velocity. Passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm of identification of characteristic lines of individual disturbance waves. Scenarios of coalescence of individual disturbance waves were identified. Transition from initial high-frequency Kelvin-Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves with lower frequency was studied near the inlet using 3D realisation of BBLIF method in the same cylindrical channel and in a rectangular duct with cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets. Coalescence of these wavelets leads to formation of quasi two-dimensional disturbance waves. Using cross-correlation analysis, loss and restoration of two-dimensionality of film surface with downstream distance were studied quantitatively. It was shown that all the processes occur closer to the inlet at higher gas velocities.Keywords: annular flow, disturbance waves, entrainment, flow development
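A minimal sketch of how a disturbance-wave velocity can be estimated from film-thickness records at two downstream positions via the cross-correlation lag, the kind of analysis that BBLIF data permit. The synthetic signal, sampling rate and probe spacing are illustrative assumptions, not values from these experiments.

```python
# Sketch of wave-velocity estimation from two film-thickness records.
import numpy as np

fs = 10_000.0                       # sampling frequency [Hz]
dx = 0.020                          # streamwise separation of the two records [m]
n = 10_000                          # one second of data
true_delay = 0.004                  # wave transit time between the probes [s]
d = int(round(true_delay * fs))

rng = np.random.default_rng(0)
s = np.convolve(rng.normal(size=n + 200), np.ones(80) / 80, mode="same")
h1 = s[100:100 + n]                 # upstream film-thickness record
h2 = s[100 - d:100 - d + n]         # downstream record: same waves, seen later

xcorr = np.correlate(h2 - h2.mean(), h1 - h1.mean(), mode="full")
lag = np.argmax(xcorr) - (n - 1)    # samples by which h2 trails h1
velocity = dx / (lag / fs)
print(f"estimated disturbance-wave velocity: {velocity:.2f} m/s")   # ~5 m/s here
```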
Procedia PDF Downloads 251
478 Sensor Network Structural Integration for Shape Reconstruction of Morphing Trailing Edge
Authors: M. Ciminello, I. Dimino, S. Ameduri, A. Concilio
Abstract:
Improving aircraft efficiency is one of the key goals of aeronautics. Modern aircraft possess many advanced functions, such as good transportation capability, high Mach number, high flight altitude, and increasing rate of climb. However, no aircraft can reach all of this optimized performance in a single airframe configuration. Aircraft aerodynamic efficiency varies considerably depending on the specific mission and on the environmental conditions within which the aircraft must operate. Structures that morph their shape in response to their surroundings may at first seem like the stuff of science fiction, but a look at nature reveals many examples of plants and animals that adapt to their environment. In order to ensure both the controllability and the static robustness of such complex structural systems, a monitoring network is aimed at verifying the effectiveness of the given control commands together with the elastic response. In order to obtain this kind of information, the use of an FBG sensor network is proposed in this project. The sensor network is able to measure the shape of morphing structures, which may show large, global displacements due to the non-standard architectures and materials adopted. Chord-wise variations may allow setting and chasing the best layout as a function of the particular, transforming reference state, always targeting the best aerodynamic performance. An optical sensor solution has been selected because, while retaining a few of the drawbacks of classical systems (such as cabling, continuous deployment, and so on), fibre optic sensors may lead to a dramatic reduction of wiring mass and weight thanks to their extreme multiplexing capability. Furthermore, the use of light as the information carrier permits dealing with nimbler, non-shielded wires and avoids any kind of interference with the on-board instrumentation. The FBG-based transducers presented herein aim at monitoring the actual shape of the adaptive trailing edge (ATE). Compared to conventional systems, these transducers allow more fail-safe measurements by taking advantage of a supporting structure hosting the FBGs, whose properties may be tailored to the architectural requirements and structural constraints and which acts as a strain modulator. Direct strain measurement may, in fact, be difficult because of the large deformations occurring in morphing elements. A modulation transducer is then necessary to keep the measured strain inside the allowed range. In this application, the chord-wise transducer device is a cantilevered beam sliding through the spars and copying the camber line of the ATE ribs. The positions of the FBG sensor array are dimensioned and integrated along the path. A theoretical model describing the system behavior is implemented. To validate the design, experiments are then carried out with the purpose of estimating the functions relating rib rotation to measured strain.
Keywords: fiber optic sensor, morphing structures, strain sensor, shape reconstruction
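For context, the sketch below applies the standard small-signal FBG readout relation, Δλ_B/λ_B = (1 − p_e)·ε + (α + ξ)·ΔT, to convert a Bragg-wavelength shift into strain. The coefficients are nominal textbook values for silica fibre and are not specific to this project's strain-modulating transducer.

```python
# Standard FBG strain-readout relation (not project-specific):
#   d(lambda_B)/lambda_B = (1 - p_e) * strain + (alpha + xi) * dT
P_E = 0.22            # effective photo-elastic coefficient [-], nominal silica value
ALPHA = 0.55e-6       # thermal expansion of silica [1/K]
XI = 6.7e-6           # thermo-optic coefficient [1/K]

def strain_from_shift(d_lambda_nm, lambda_b_nm, d_temp_k=0.0):
    """Mechanical strain recovered from a Bragg-wavelength shift (temperature-compensated)."""
    total = d_lambda_nm / lambda_b_nm
    return (total - (ALPHA + XI) * d_temp_k) / (1.0 - P_E)

# Example: a 1550 nm grating shifting by +0.012 nm at constant temperature
eps = strain_from_shift(0.012, 1550.0)
print(f"{eps * 1e6:.1f} microstrain")   # ~9.9 microstrain
```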
Procedia PDF Downloads 329
477 The Product Innovation Using Nutraceutical Delivery System on Improving Growth Performance of Broiler
Authors: Kitti Supchukun, Kris Angkanaporn, Teerapong Yata
Abstract:
The product innovation using a nutraceutical delivery system on improving the growth performance of broilers is the product planning and development to solve the antibiotics banning policy incurred in the local and global livestock production system. Restricting the use of antibiotics can reduce the quality of chicken meat and increase pathogenic bacterial contamination. Although other alternatives were used to replace antibiotics, the efficacy was inconsistent, reflecting on low chicken growth performance and contaminated products. The product innovation aims to effectively deliver the selected active ingredients into the body. This product is tested on the pharmaceutical lab scale and on the farm-scale for market feasibility in order to create product innovation using the nutraceutical delivery system model. The model establishes the product standardization and traceable quality control process for farmers. The study is performed using mixed methods. Starting with a qualitative method to find the farmers' (consumers) demands and the product standard, then the researcher used the quantitative research method to develop and conclude the findings regarding the acceptance of the technology and product performance. The survey has been sent to different organizations by random sampling among the entrepreneur’s population including integrated broiler farm, broiler farm, and other related organizations. The mixed-method results, both qualitative and quantitative, verify the user and lead users' demands since they provide information about the industry standard, technology preference, developing the right product according to the market, and solutions for the industry problems. The product innovation selected nutraceutical ingredients that can solve the following problems in livestock; bactericidal, anti-inflammation, gut health, antioxidant. The combinations of the selected nutraceutical and nanostructured lipid carriers (NLC) technology aim to improve chemical and pharmaceutical components by changing the structure of active ingredients into nanoparticle, which will be released in the targeted location with accurate concentration. The active ingredients in nanoparticle form are more stable, elicit antibacterial activity against pathogenic Salmonella spp and E.coli, balance gut health, have antioxidant and anti-inflammation activity. The experiment results have proven that the nutraceuticals have an antioxidant and antibacterial activity which also increases the average daily gain (ADG), reduces feed conversion ratio (FCR). The results also show a significant impact on the higher European Performance Index that can increase the farmers' profit when exporting. The product innovation will be tested in technology acceptance management methods from farmers and industry. The production of broiler and commercialization analyses are useful to reduce the importation of animal supplements. Most importantly, product innovation is protected by intellectual property.Keywords: nutraceutical, nano structure lipid carrier, anti-microbial drug resistance, broiler, Salmonella
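A short sketch of the growth-performance metrics cited above (ADG, FCR and a European performance index). The index is computed here with the commonly used European Production Efficiency Factor formula, which is an assumption about the "European Performance Index" named in the abstract, and all numbers are invented example values rather than trial results.

```python
# Growth-performance metrics for broilers: ADG, FCR and a European
# production-efficiency index. Example values only, not trial data.
def average_daily_gain(final_wt_kg, initial_wt_kg, days):
    return (final_wt_kg - initial_wt_kg) / days

def feed_conversion_ratio(feed_intake_kg, weight_gain_kg):
    return feed_intake_kg / weight_gain_kg

def epef(livability_pct, live_wt_kg, age_days, fcr):
    # European Production Efficiency Factor (assumed formulation of the index)
    return livability_pct * live_wt_kg / (age_days * fcr) * 100

days, w0, w1, feed = 35, 0.042, 2.2, 3.3          # per-bird example values
adg = average_daily_gain(w1, w0, days)
fcr = feed_conversion_ratio(feed, w1 - w0)
print(f"ADG = {adg * 1000:.1f} g/day, FCR = {fcr:.2f}, "
      f"EPEF = {epef(96, w1, days, fcr):.0f}")
```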
Procedia PDF Downloads 178
476 A Qualitative Study of Newspaper Discourse and Online Discussions of Climate Change in China
Authors: Juan Du
Abstract:
Climate change is one of the most crucial issues of this era, with contentious debates on it among scholars. But there are sparse studies on climate change discourse in China. Including China in the study of climate change is essential for a sociological understanding of climate change. China -- as a developing country and an essential player in tackling climate change -- offers an ideal case for studying climate change for scholars moving beyond developed countries and enriching their understandings of climate change by including diverse social settings. This project contrasts the macro- and micro-level understandings of climate change in China, which helps scholars move beyond a focus on climate skepticism and denialism and enriches sociology of climate change knowledge. The macro-level understanding of climate change is obtained by analyzing over 4,000 newspaper articles from various official outlets in China. State-controlled newspapers play an essential role in transmitting essential and high-quality information and promoting broader public understanding of climate change and its anthropogenic nature. Thus, newspaper articles can be seen as tools employed by governments to mobilize the public in terms of supporting the development of a strategy shift from economy-growth to an ecological civilization. However, media is just one of the significant factors influencing an individual’s climate change concern. Extreme weather events, access to accurate scientific information, elite cues, and movement/countermovement advocacy influence an individual’s perceptions of climate change. Hence, there are differences in the ways that both newspaper articles and the public frame the issues. The online forum is an informative channel for scholars to understand the public’s opinion. The micro-level data comes from Zhihu, which is China’s equivalence of Quora. Users can propose, answer, and comment on questions. This project analyzes the questions related to climate change which have over 20 answers. By open-coding both the macro- and micro-level data, this project will depict the differences between ideology as presented in government-controlled newspapers and how people talk and act with respect to climate change in cyberspace, which may provide an idea about any existing disconnect in public behavior and their willingness to change daily activities to facilitate a greener society. The contemporary Yellow Vest protests in France illustrate that the large gap between governmental policies of climate change mitigation and the public’s understanding may lead to social movement activity and social instability. Effective environmental policy is impossible without the public’s support. Finding existing gaps in understanding may help policy-makers develop effective ways of framing climate change and obtain more supporters of climate change related policies. Overall, this qualitative project provides answers to the following research questions: 1) How do different state-controlled newspapers transmit their ideology on climate change to the public and in what ways? 2) How do individuals frame climate change online? 3) What are the differences between newspapers’ framing and individual’s framing?Keywords: climate change, China, framing theory, media, public’s climate change concern
Procedia PDF Downloads 131
475 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development
Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg
Abstract:
Due to the fugacity of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge to place more innovative products within even shorter development time on the market. At the same time, volatile customer requirements have to be satisfied in order to successfully differentiate from market competitors. One potential approach to address the explained challenges is provided by agile values and principles. These agile values and principles already proofed their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by the success within the software industry, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products ever since. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has been developed for hardware development yet due to different constraints of the domains. For this reason, this paper focusses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and symbiotically composing the elements of both systems in respect of the design of agile product development processes afterwards. In a first step, existing product development processes are described following existing approaches of the system theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target-, action- and object-system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified in a superior strategy level, in a system level comprising characteristic, domain-independent activities and their cause-effect relationships as well as in an activity-based element level. Within partial model three, the influence of the identified agile mechanism of action towards the characteristic system elements of product development processes is analyzed. For this reason, target-, action- and object-system of the product development are compared with the strategy-, system- and element-level of agile mechanism of action by using the graph theory. Furthermore, the necessity of existence of activities within iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in form of different types of iterations within a last step. By defining iteration-differentiating characteristics and their interdependencies, a logic for the configuration of activities, their form of execution as well as relevant artefacts for the specific iteration is developed. Furthermore, characteristic types of iteration for the agile product development are identified.Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom
Procedia PDF Downloads 207
474 Thermal Securing of Electrical Contacts inside Oil Power Transformers
Authors: Ioan Rusu
Abstract:
In the operation of 110 kV/MV power transformers in substations, fault currents resulting from damage on the MV lines pass through the transformers. Defective electrical contacts heat up when fault currents pass through them. When high temperatures of 135 °C are reached, the electrical insulating oil in the vicinity of the faulty contacts releases gases and activates the electrical protection. To avoid auto-ignition of the electro-insulating oil, we designed a system for the thermal securing of defective electrical contacts by pouring fire-resistant polyurethane foam, mastic or fireproof mortar inside a cardboard electro-insulating cylinder. Practical experience with the operation of oil-insulated 110 kV/MV power transformers has recorded several transient disconnections commanded by the gas protection at internal defects. In normal operation and at optimal load, nominal currents do not require thermal securing of the contacts inside transformers; contacts are made during fabrication according to the design, or during repair by soldering. In the case of external short circuits close to the substation, the contacts inside transformers, even if they are well made with a resistance of Rcontact = 10⁻⁶ Ω, are subjected to short-circuit currents of the order of 10 kA-20 kA, which lead to the dissipation of significant electric power, 100 W-400 W, at the contact. Under internal or external factors acting on the electrical contacts, including electrodynamic forces during short circuits, the contact resistance can degrade over time to values in the range of 10⁻⁵ Ω to 10⁻⁴ Ω, and if the reaction time of the protection is long, of the order of seconds, the power dissipated at the contacts reaches high values of 1.0 kW to 40.0 kW. This power leads to strong local heating of hundreds of degrees Celsius and can initiate self-ignition and burning of the oil in the vicinity of the contacts, triggering the gas relay. Degradation of the electrical contacts inside power transformers cannot be ruled out over the duration of their operation. In order to avoid oil burning with gas release near the electrical contacts at short-circuit currents of 10 kA-20 kA, we outline the following solution: covering the electrical contacts in fireproof materials that prevent the oil from burning directly at a short circuit and allow the heat to be transmitted from the contact along the conductors, dissipating gradually over time into a large cooling volume. Suitable flame-retardant materials are polyurethane foam, mastic and cement (concrete). Under normal transformer operating conditions, the coil conductors are insulated with paper and insulating oil. The ignition points of these two components are approximately 135 °C for the oil and 200 °C for the paper. In the case of a faulty electrical contact of about 10⁻³ Ω, at short circuit the temperature can reach, for a short time, values of 300 °C-400 °C, which ignite both the paper and the oil. The burning oil releases local gases that disconnect the power transformer. Thermally securing the electrical contacts inside the transformer, in a cardboard tube filled with polyurethane foam, mastic or cement, avoids gas release and thus prevents the gas protection from tripping.
Keywords: power transformer, oil insulation, electric contacts, Buchholz relay
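The power figures quoted above follow from Joule heating at the contact resistance; a brief worked check, using only the current and resistance values given in the abstract (the formula and arithmetic are standard, not additional results from the paper):

```latex
\[
P = I^{2} R_{\text{contact}}
\]
\[
\text{well-made contact: } (10^{4}\,\mathrm{A})^{2}\cdot 10^{-6}\,\Omega = 100\ \mathrm{W}, \qquad
(2\cdot 10^{4}\,\mathrm{A})^{2}\cdot 10^{-6}\,\Omega = 400\ \mathrm{W}
\]
\[
\text{degraded contact: } (10^{4}\,\mathrm{A})^{2}\cdot 10^{-5}\,\Omega = 1\ \mathrm{kW}, \qquad
(2\cdot 10^{4}\,\mathrm{A})^{2}\cdot 10^{-4}\,\Omega = 40\ \mathrm{kW}
\]
```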
Procedia PDF Downloads 158
473 Concentration and Stability of Fatty Acids and Ammonium in the Samples from Mesophilic Anaerobic Digestion
Authors: Mari Jaakkola, Jasmiina Haverinen, Tiina Tolonen, Vesa Virtanen
Abstract:
Process monitoring of a biogas plant gives valuable information on the function of the process and helps to maintain a stable process. The costs of basic monitoring are often much lower than the costs associated with re-establishing a biologically destabilised plant. Reactor acidification through reactor overload is one of the most common reasons for process deterioration in anaerobic digesters. This occurs because of a build-up of volatile fatty acids (VFAs) produced by acidogenic and acetogenic bacteria. VFAs cause pH values to decrease and result in toxic conditions in the reactor. Ammonia ensures an adequate supply of nitrogen as a nutrient substance for anaerobic biomass and increases the system's buffer capacity, counteracting the acidification caused by VFA production. However, an elevated ammonia concentration is detrimental to the process due to its toxic effect. VFAs are considered the most reliable analytes for process monitoring. To obtain accurate results, sample storage and transportation need to be carefully controlled. This may be a challenge for off-line laboratory analyses, especially when the plant is located far away from the laboratory. The aim of this study was to investigate the correlation between fatty acids, ammonium, and bacteria in anaerobic digestion samples obtained from an industrial biogas factory. The stability of the analytes was studied by comparing the results of on-site analyses performed at the factory with the results of samples stored at room temperature and at -18 °C (up to 30 days) after sampling. Samples were collected at a biogas plant consisting of three separate mesophilic AD reactors (4000 m³ each), where the main feedstock was swine slurry together with a complex mixture of agricultural plant and animal wastes. Individual VFAs, ammonium, and nutrients (K, Ca, Mg) were studied by capillary electrophoresis (CE). Longer-chain fatty acids (oleic, hexadecanoic, and stearic acids) and bacterial profiles were studied by GC-MSD (Gas Chromatography-Mass Selective Detector) and 16S rDNA, respectively. On-site monitoring of the analytes was performed by CE. The main VFA in all samples was acetic acid. However, in one reactor sample elevated levels of several individual VFAs and long-chain fatty acids were detected. The bacterial profile of this sample also differed from the profiles of the other samples. Acetic acid decomposed quickly when the sample was stored at room temperature. All analytes were stable when stored in a freezer. Ammonium was stable even at room temperature for the whole testing period. One reactor sample had a higher concentration of VFAs and long-chain fatty acids than the other samples. CE was utilized successfully for the on-site analysis of individual VFAs and NH₄ at the biogas production site. Samples should be analysed on the sampling day if stored at room temperature, or frozen for longer storage times. Fermentation reject can be stored (and transported) at ambient temperature for at least one month without loss of NH₄. This gives flexibility to logistic solutions when the reject is used as a fertilizer.
Keywords: anaerobic digestion, capillary electrophoresis, ammonium, bacteria
Procedia PDF Downloads 168
472 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint and because the Master of Engineering Program is actively involved in research of energy efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city located at the northern edge of climate zone 6 with average lows between -8 °C and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is the airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation, such as overlapping but not sealed membranes, might be airtight in one direction, due to the valve effect, but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume to envelope area ratio. The building had to be very airtight, and the details for the window and door installation as well as all transitions from walls to roof and floor, the connections of the prefabricated wall panels and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
Keywords: air changes, airtightness, envelope design, industrial building, passive house
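For orientation, the air-change rate at 50 Pa relates the measured leakage airflow to the net air volume; a short calculation with the figures quoted above (0.07 ach@50Pa and a net volume of 7383 m³) gives the implied flow. The flow values are derived here for illustration only and are not reported in the abstract.

```latex
\[
n_{50} = \frac{\dot V_{50}}{V_{\text{net}}}
\quad\Rightarrow\quad
\dot V_{50} = 0.07\ \mathrm{h^{-1}} \times 7383\ \mathrm{m^{3}} \approx 517\ \mathrm{m^{3}/h},
\]
\[
\text{compared with the Passive House limit of } 0.6\ \mathrm{h^{-1}} \times 7383\ \mathrm{m^{3}} \approx 4430\ \mathrm{m^{3}/h}.
\]
```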
Procedia PDF Downloads 148
471 Humanitarian Storytelling through Photographs with and for Resettled Refugees in Wellington
Authors: Ehsan K. Hazaveh
Abstract:
This research project explores creative methods of storytelling through photography to portray a vulnerable and marginalised community: former refugees living in Wellington, New Zealand. The project explores photographic representational techniques that can not only empower and give voice to those communities but also challenge dominant stereotypes about refugees and support humanitarian actions. The aims of this study are to develop insights surrounding issues associated with the photographic representation of refugees and to explore the collaborative construction of possible counter-narratives that might lead to the formulation of a practice framework for representing refugees using photography. In other words, the goal of this study is to explore representational and narrative strategies that frame refugees as active community members and as individuals with specific histories and expertise. These counter-narratives will bring the diversity of refugees to the surface by offering personal stories, contextualising their experience, raising awareness about the plight and human rights of the refugee community in New Zealand, evoking empathy and, therefore, facilitating the process of social change. The study has designed a photographic narrative framework by determining effective methods of photo storytelling, framing, and aesthetic techniques, focusing on different ways of taking, selecting, editing and curating photographs. Photo elicitation interviews have been used to ‘explore’, ‘produce’ and ‘co-curate’ the counter-narrative along with participants. Photo elicitation is a qualitative research method that employs images to evoke data in order to find out how other people experience their world: the researcher shows photographs to the participant and asks open-ended questions to get them to talk about their life experiences and the world around them. The qualitative data have been collected and produced through interactions with four former refugees living in Wellington, New Zealand. In this way, this project offers a unique account of their conditions and basic knowledge about their living experience and their stories. The participants of this study have engaged with PhotoVoice, a photo elicitation methodology that employs photography and storytelling, to share activities, emotions, hopes, and aspects of their lived experiences. PhotoVoice was designed to empower members of marginalised populations. It involves a series of meeting sessions, in which participants share photographs they have taken and discuss stories about the photographs to identify, represent, and enhance the issues important to their lives and communities. Finally, the data provide a basis for systematically producing visual counter-narratives that highlight the experiences of former refugees. By employing these methods, refugees can represent their world as well as interpret it. The process of developing this research framework has enabled the development of powerful counter-narratives that challenge prevailing stereotypical depictions, which in turn have the potential to shape improved humanitarian outcomes and shifts in public attitudes and political perspectives in New Zealand.
Keywords: media, photography, refugees, photo-elicitation, storytelling
Procedia PDF Downloads 150
470 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is both of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage and power, places such networks under stringent limitations regarding lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to their security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in the proposed IDSs, leading to undesirable results, such as delays in the detection process, low detection accuracy or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by the IDS. In other words, not all requirements are implemented and then traced. Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks of current IDSs are due to researchers and developers not following structured software development processes when developing an IDS. Consequently, this has resulted in inadequate requirement management, processes, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a relevant subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effects. Conclusions are then drawn with regard to applying Requirement Engineering to systems so that they deliver the required functionality, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 322
469 Mobile Learning in Developing Countries: A Synthesis of the Past to Define the Future
Authors: Harriet Koshie Lamptey, Richard Boateng
Abstract:
Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. Steady progress in wireless technologies and the portability of communication devices continues to broaden the scope and use of mobiles. With the convergence of Web functionality onto mobile platforms and the affordability and availability of mobile technology, m-learning has the potential of being the next prevalent channel of education in both formal and informal settings. There is substantive literature on developed countries, but the state of affairs in developing countries (DCs) appears vague. This paper is a synthesis of extant literature on mobile learning in DCs. The research interest is based on the fact that in DCs, mobile communication and internet connectivity are popular; however, their use in education is underexplored. There are some reviews on the state, conceptualizations, trends and teacher education, but to the authors' knowledge, no study has focused on mobile learning adoption and integration issues. This study examines issues and gaps associated with its adoption and integration in higher education institutions in DCs. A qualitative synthesis of the literature was conducted using articles pooled from electronic databases (Google Scholar and ERIC). To establish criteria for inclusion and incorporate diverse study perspectives, the search terms used were m-learning, DCs, higher education institutions, challenges, benefits, impact, gaps and issues. The synthesis revealed that though mobile technology has diffused globally, its pedagogical pursuit in DCs remains quite low. The absence of a mobile Web and the difficulty of converting resources into mobile format, due to a lack of funding and technical competence, are stumbling blocks. Again, the lack of established design and implementation rules to guide the development of m-learning platforms in DCs is a hindrance. The absence of access restrictions on devices poses security threats to institutional systems. Negative perceptions that devices are taking over faculty roles lead to resistance in some situations. Resistance to change can be a hindrance to the acceptance and success of new systems. Lack of interest in m-learning is also attributed to the lower technological literacy levels of the underprivileged masses. Scholarly work on m-learning in DCs is yet to mature. Most technological innovations are handed down from developed countries, and this constantly creates a lag for DCs. A lack of theoretical grounding was also identified, which reduces the objectivity of study reports. The socio-cultural terrain of DCs results in societies with different views and needs, which has been identified as a hindrance to research. Institutional commitment decisions, adequate funding for the necessary infrastructural development, as well as multiple stakeholder participation are important for project success. Evidence suggests that while adoption decisions are readily made, successful integration of the concept, so that its full benefits can be realized, is often neglected. Recommendations based on the findings were made to provide possible remedies for the identified issues.
Keywords: developing countries, higher education institutions, mobile learning, literature review
Procedia PDF Downloads 225
468 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling
Authors: Justyna P. Majewska, Szymon M. Truskolaski
Abstract:
The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become observable recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competition but computer game challenges: e-sport. The literature in this area so far focuses on determining the number of e-sport fans with elements of a simple statistical description (mainly concerning demographic characteristics such as age, gender, place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper understanding of the determinants of customers' behavioral patterns when selecting digitalized services, which, in the absence of large available data sets, can be achieved by using econometric simulations, namely multi-agent modeling. The cognitive aim of the study is to reveal the internal and external determinants of customers' behavioral patterns, taking into account various scenarios of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed the identification of a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games as well as sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high. They decrease during the phase of standardization and increase again in the phase of full professionalization, when new customers perceive the required participation history as inaccessible. In this case, they are prone to switch to new methods of service application; in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from “traditional” to digitalized.
Keywords: agent-based modeling, digitalized services, e-sport, spectators motives
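To make the simulation idea concrete, the following is a minimal agent-based sketch of the three-stage adoption process described above. All attribute names, weights and barrier values are illustrative placeholders and are not taken from the paper, where the transition probabilities are instead estimated with the Method of Simulated Moments.

```python
import random

# Minimal illustrative sketch (not the authors' model): heterogeneous agents move
# through the three stages named in the abstract (initial interest, standardization,
# full professionalization). Transitions are modulated by illustrative agent
# attributes (familiarity with game rules, participation history) and one
# environmental factor (platform maturity). All numbers are placeholders.

STAGES = ("initial_interest", "standardization", "full_professionalization")
BARRIERS = (0.6, 0.4)  # illustrative barriers for leaving stage 0 and stage 1

class Agent:
    def __init__(self, rng):
        self.familiarity = rng.random()            # familiarity with game rules
        self.participation_history = rng.random()  # prior active/passive participation
        self.stage = 0

    def step(self, platform_maturity, rng):
        if self.stage >= len(STAGES) - 1:
            return
        drive = (0.5 * self.familiarity
                 + 0.3 * self.participation_history
                 + 0.2 * platform_maturity)
        if rng.random() < max(0.0, drive - BARRIERS[self.stage]):
            self.stage += 1

def simulate(n_agents=1000, n_steps=50, seed=42):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for t in range(n_steps):
        platform_maturity = t / n_steps  # streaming/community platforms mature over time
        for agent in agents:
            agent.step(platform_maturity, rng)
    counts = {name: 0 for name in STAGES}
    for agent in agents:
        counts[STAGES[agent.stage]] += 1
    return counts

if __name__ == "__main__":
    print(simulate())
```

Running the sketch simply reports how many agents end up in each stage, which is the kind of aggregate moment such a model would be calibrated against.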
Procedia PDF Downloads 172
467 Crystallization Based Resolution of Enantiomeric and Diastereomeric Derivatives of myo-Inositol
Authors: Nivedita T. Patil, M. T. Patil, M. S. Shashidhar, R. G. Gonnade
Abstract:
Cyclitols are cycloalkane polyols which have attracted attention since they have numerous biological and pharmaceutical properties. Among these, inositols are important cyclitols, which constitute a group of naturally occurring polyhydric alcohols. Myo-, scyllo-, allo-, neo- and D-chiro-inositol are naturally occurring structural isomers of inositol, while the other four isomers (L-chiro-, allo-, epi-, and cis-inositol) are derived from myo-inositol by chemical synthesis. Myo-inositol, the most abundant isomer, plays an important role in signal transduction processes and in the treatment of type 2 diabetes, bacterial infections, stimulation of menstruation, ovulation in polycystic ovary syndrome, improvement of osteogenesis, and the treatment of neurological disorders. Considering the vast applications of these derivatives, it becomes important to supply these compounds in sufficient quantities for further studies, but suitably protected chiral inositol derivatives, which are the key intermediates in most of these syntheses, are difficult to prepare. Chiral inositol derivatives could also be of interest to synthetic organic chemists, as they could serve as potential starting materials for the synthesis of several natural products and their analogs. Thus, obtaining chiral myo-inositol derivatives in a more eco-friendly way is a need of current inositol chemistry. To date, the resolution of nonracemates by preferential crystallization of enantiomers has not been reported as a method for inositol derivatives. We are optimistic that this work might lead to the development of the two tosylate enantiomers as synthetic chiral pool molecules for organic synthesis. Resolution of racemic 4-O-benzyl 6-O-tosyl myo-inositol 1,3,5-orthoformate was successfully achieved on a multigram scale by preferential crystallization, which is a more scalable and eco-friendly method of separation than other reported methods. The separation of the conglomerate mixture of the tosylate was achieved by suspending the mixture in ethyl acetate until saturation was reached. To this clear saturated solution, a seed crystal of the desired enantiomer was added. Filtration of the precipitated crystals was carried out within the filtration window to give enantiomerically enriched tosylate, and the process was repeated alternately for the two enantiomers. These enantiomerically enriched samples were recrystallized to give the tosylate as pure enantiomers. The configuration of the resolved enantiomers was determined by converting them to the previously reported dibenzyl ether of myo-inositol, which is an important precursor for mono- and tetraphosphates. We have also developed a convenient and practical method for the preparation of enantiomeric 4-O- and 6-O-allyl myo-inositol orthoesters by resolution of the diastereomeric allyl dicamphanate orthoesters on a multigram scale. These allyl ethers can be converted to other chiral protected myo-inositol derivatives using routine synthetic transformations. The chiral allyl ethers can be obtained in gram quantities, and the methods are amenable to further scale-up due to the simple procedures involved. We believe that the work described enhances the pace of research aimed at understanding the intricacies of the myo-inositol cycle, as the methods described provide efficient access to enantiomeric phosphoinositols, cyclitols, and their derivatives from the abundantly available myo-inositol as a starting material.
Keywords: cyclitols, diastereomers, enantiomers, myo-inositol, preferential crystallization, signal transduction
Procedia PDF Downloads 141
466 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer
Authors: Sebastian Hiergeist
Abstract:
It is a common view that Unmanned Aerial Vehicles (UAVs) will increasingly migrate into civil airspace. This trend challenges UAV manufacturers in many ways, as a lot of new requirements and functional aspects arise. At the higher application levels, these might be collision detection and avoidance and similar features, whereas all of these functions only act as inputs to the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring continuous safe flight and landing. As these systems are flight critical, they have to be built redundantly to be able to provide Fail-Operational behavior. Recent architectural approaches for FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may not be available anymore, as they are increasingly replaced by more sophisticated Systems on Chip (SoCs). As the avionics industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from other markets is almost inevitable. Products stemming from the industrial market, developed according to IEC 61508, or automotive SoCs, developed according to ISO 26262, can be seen as candidates, as they have been developed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, e.g., Ethernet, SPI or FlexRay, that might come into consideration for the implementation of a redundancy network. In this context, possible network architectures that could be established using the interfaces stated above shall be investigated. Of importance here is the avoidance of any single point of failure, as well as proper segregation into distinct fault containment regions. The performed analysis is supported by the use of guidelines, published by the aviation authorities (FAA and EASA), on the reliability of data networks. The main focus clearly lies on the reachable level of safety, but other aspects like performance and determinism also play an important role and are considered in the research. Due to the further increase in the design complexity of recent and future SoCs, the risk of design errors, which might lead to common-mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity will also be considered to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces found in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline on building up a redundancy network between SoCs, solely using on-board interfaces. Therefore, the author will provide a detailed usability analysis of the interfaces provided by recent SoC solutions, suggestions for possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, e.g., safety or performance.
Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)
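One way to make the 'no single point of failure' requirement checkable early is to model a candidate topology as a graph and remove one element at a time. The sketch below is purely illustrative: the three-lane topology, the node names and the mix of an Ethernet switch plane with direct SPI links are assumptions made for the example, not architectures proposed in the paper.

```python
# Minimal sketch (illustrative, not from the paper): check a candidate redundancy
# topology for single points of failure by removing one shared component or one
# link at a time and testing whether the remaining flight control lanes can still
# reach each other. Node and link names are hypothetical placeholders.

# Three FCC lanes cross-connected over two dissimilar interface planes:
# an Ethernet switch plane and a direct SPI ring.
links = {
    ("fcc_a", "eth_switch"), ("fcc_b", "eth_switch"), ("fcc_c", "eth_switch"),
    ("fcc_a", "fcc_b"), ("fcc_b", "fcc_c"), ("fcc_c", "fcc_a"),  # SPI ring
}
lanes = {"fcc_a", "fcc_b", "fcc_c"}

def connected(nodes, edges, members):
    """True if all 'members' lie in one connected component of the graph."""
    members = members & nodes
    if not members:
        return False
    start = next(iter(members))
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        for a, b in edges:
            for nxt in ((b,) if a == n else (a,) if b == n else ()):
                if nxt in nodes and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return members <= seen

def single_points_of_failure(links, lanes):
    nodes = {n for edge in links for n in edge}
    spof = []
    for node in nodes - lanes:          # failure of a shared component (e.g. the switch)
        if not connected(nodes - {node}, {e for e in links if node not in e}, lanes):
            spof.append(node)
    for link in links:                  # failure of a single link
        if not connected(nodes, links - {link}, lanes):
            spof.append(link)
    return spof

if __name__ == "__main__":
    print(single_points_of_failure(links, lanes))  # expected: [] for this topology
```

For the example topology, the check returns an empty list, since the SPI ring keeps the lanes connected even if the shared Ethernet switch fails.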
Procedia PDF Downloads 219
465 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume
Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri
Abstract:
Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring the soil's agricultural functions. However, ultramafic soils are characterized by low fertility levels, and this can limit the yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and then lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems, consisting of the association of legumes and another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of the co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with incorporation of the legume biomass into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale (biomass, C and N contents, and Ni uptake) showed the highest values in the co-cropping system, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation was better than mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had a larger soil particle size and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater spatial distribution of root surface area. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values. Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the yields of dry biomass of the hyperaccumulating plant used and, consequently, the yields of Ni. Our strategy can decrease the need to apply fertilizers and thus minimize the risk of nitrogen leaching and underground water pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison to the rotation treatment and the fertilized monoculture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stabilized aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy to reduce chemical inputs and to restore soil agricultural functions. Improving the agromining system by replacing inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions, leading to the recovery of ecosystem services.
Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties
Procedia PDF Downloads 166
464 Urban Enclaves Caused by Migration: Little Aleppo in Ankara, Turkey
Authors: Sezen Aslan, N. Aydan Sat
Abstract:
The society of the 21st century constantly faces complex otherness that emerges in various forms and justifications. Otherness caused by class, race or ethnicity is inevitably reflected in urban areas, and in this way, cities are divided into totally self-centered and closed-off urban enclaves. One of the most important dynamics that creates otherness in contemporary society is migration. Immigration on an international scale is one of the most important events that have reshaped the world, and the number of immigrants in the world is increasing day by day. Forced migration and refugee statuses constitute the major part of countries' immigration policies and practices. Domestic problems such as racism, violence, war, censorship and silencing, attitudes contrary to human rights, and different cultural or religious identities cause populations to migrate. Immigration is one of the most important reasons for the formation of urban enclaves within cities. Turkey, which used to face a higher rate of outward migration, has begun to host immigrant groups from foreign countries. The 1980s were the turning point on this issue, as a result of internal disturbances in the Middle East. After Iranian, Iraqi and Afghan immigrants, Turkey now faces the largest external migration in its history with the Syrian population. Turkey has been hosting approximately three million Syrian people since the Syrian Civil War, which started in 2011. 92% of Syrian refugees are currently living in different urban areas in Turkey instead of camps. Syrian refugees are experiencing a spontaneous spatiality due to the country's lack of specific settlement and housing policies. This spontaneity is one of the most important factors in the creation of urban enclaves. From this point of view, the aim of this study is to clarify the processes that lead to the creation of urban enclaves and to explain the socio-spatial effects of these urban enclaves on the other parts of the cities. Ankara, one of the provinces hosting the most registered Syrians in Turkey, is selected as the case study area. About 55% of the Syrian population of Ankara lives in the Altındağ district. They settled specifically in two neighborhoods of the Altındağ district, named Önder and Ulubey. These neighborhoods are old slum areas, and they were evacuated due to urban renewal at the same time as the migration of the Syrians. Before the demolition of these old slums, Syrians settled in them as tenants. In the first part of the study, a brief explanation of the concept of the urban enclave, its occurrence parameters and possible socio-spatial threats, and examples of previous immigrant urban enclaves caused by internal migration will be given. The emergence of slums, the planning history and the social processes in the case study area will be described in the second part of the study. The third part will focus on the Syrian refugees and their socio-spatial relationships in the case study area, where in-depth interviews with refugees and spatial analysis will be carried out. Suggestions for the future of the case study area and recommendations to protect immigrant groups from social and spatial exclusion will be discussed in the conclusion of the study.
Keywords: migration, immigration, Syrian refugees, urban enclaves, Ankara
Procedia PDF Downloads 208
463 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. From the literature survey, we find that state-of-the-art algorithms cannot obtain an accurate reading order in portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most of the studies on the analysis of reading order have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting the reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has demonstrated its layout recognition (LR) engine in revision 20601, striving for an accurate reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the reading-order result is not always correct for single-column and two-column layout formats due to the table issue, the formula issue, and the multiple mini separated bounding box and footer issues. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes on the right side of the present bounding box (NRight) and the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height) and the x- and y-positions of each bounding box were also taken into consideration. Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, the preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple mini separated bounding box issue can be solved by using the MESH method. However, three issues still cannot be solved: the table issue, the formula issue, and the random multiple mini separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require further research. In the future, the tasks chosen are how to detect the table position on the page and how to extract the content of the table.
Keywords: document processing, reading order, observation method, layout recognition
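To illustrate how the counting parameters could be used, the following is a minimal sketch that computes NRight, Nunder and the normalized positions for a set of paragraph bounding boxes and derives a reading order from them. The abstract does not spell out the MESH decision rules, so the ordering heuristic and all helper names below are assumptions made for illustration, not the authors' algorithm.

```python
from dataclasses import dataclass

# Minimal sketch, assuming axis-aligned paragraph bounding boxes in page coordinates
# (origin at top-left). It computes the two counting parameters named in the abstract,
# NRight and Nunder, plus normalized positions, and orders boxes with a simple
# heuristic: boxes with more neighbours to their right come first (earlier column),
# ties broken top-to-bottom.

@dataclass
class Box:
    x: float  # left edge
    y: float  # top edge
    w: float
    h: float

def overlaps_vertically(a: Box, b: Box) -> bool:
    return a.y < b.y + b.h and b.y < a.y + a.h

def overlaps_horizontally(a: Box, b: Box) -> bool:
    return a.x < b.x + b.w and b.x < a.x + a.w

def mesh_features(boxes, page_w, page_h):
    feats = []
    for i, a in enumerate(boxes):
        n_right = sum(1 for j, b in enumerate(boxes)
                      if j != i and b.x >= a.x + a.w and overlaps_vertically(a, b))
        n_under = sum(1 for j, b in enumerate(boxes)
                      if j != i and b.y >= a.y + a.h and overlaps_horizontally(a, b))
        feats.append({"box": a, "NRight": n_right, "Nunder": n_under,
                      "x_norm": a.x / page_w, "y_norm": a.y / page_h})
    return feats

def reading_order(boxes, page_w, page_h):
    feats = mesh_features(boxes, page_w, page_h)
    return [f["box"] for f in sorted(feats, key=lambda f: (-f["NRight"], f["y_norm"]))]

if __name__ == "__main__":
    # Two-column toy page, 600 x 800: left column (two paragraphs), right column (one).
    page = [Box(50, 100, 200, 150), Box(50, 300, 200, 150), Box(350, 100, 200, 350)]
    for b in reading_order(page, 600, 800):
        print(b)
```

For the toy two-column page in the example, the sketch returns the two left-column paragraphs top-to-bottom followed by the right-column paragraph, which is the expected reading order.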
Procedia PDF Downloads 181