Search results for: suitable location
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5457

4737 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will likely become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform versus pulsatile flow at the OG. We hypothesize that the optimal implantation of an LVAD is patient specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming to identify an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aortic artery from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity wave from an echo ultrasound image of the same patient, to the computational model to quantify the 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. It thus has great potential for massive numerical simulation and analysis.
The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five randomly selected patient cases will be evaluated.
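As an illustrative check of the scenario count, the four design factors above can be enumerated directly; the labels below simply restate the abstract's factor levels and carry no data from the study itself.

```python
from itertools import product

# Enumerate the implantation scenarios described in the abstract:
# 3 anastomosis sites x 3 pumping ratios x 3 graft angles x 2 inflow
# conditions = 54 combinations.
sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
ratios = ["1:1", "1:2", "1:3"]
angles = ["inclined upward", "perpendicular", "inclined downward"]
inflows = ["uniform", "pulsatile"]

scenarios = list(product(sites, ratios, angles, inflows))
print(len(scenarios))  # 54
```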

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 119
4736 Voltage and Frequency Regulation Using the Third-Party Mid-Size Battery

Authors: Roghieh A. Biroon, Zoleikha Abdollahi

Abstract:

The recent growth of renewables, e.g., solar panels, batteries, and electric vehicles (EVs), in residential and small commercial sectors has potential impacts on the stability and operation of power grids. Considering the roughly 50 percent share of the residential and commercial sectors in the electricity demand market, these impacts, and the necessity of addressing them, are all the more significant. Utilities and power system operators should manage the integration of renewable electricity sources with power systems in such a way as to extract the most possible advantages for the power systems. The most common effect of a high penetration level of renewables is reverse power flow in the distribution feeders when customers generate more power than they need. The reverse power flow causes voltage rise and thermal issues in the power grids. To overcome voltage rise issues in the distribution system, several techniques have been proposed, including reducing transformer short-circuit resistance and feeder impedance, installing autotransformers/voltage regulators along the line, absorbing reactive power by distributed generators (DGs), and limiting PV and battery sizes. In this study, we consider a medium-scale battery energy storage system to manage the power flow and address the aforementioned issues of voltage deviation and increased power loss. We propose an optimization algorithm to find the optimum size and location for the battery, such that the battery maintains the feeder voltage deviation and power loss at a certain desired level. Moreover, the proposed optimization algorithm controls the charging/discharging profile of the battery to absorb the negative power flow from residential and commercial customers in the feeder during the peak time and sell the power back to the system during the off-peak time.
The proposed battery regulates the voltage in the distribution system while also playing a frequency regulation role in islanded microgrids. It can be regulated and controlled by the utilities, or by a third-party ancillary service provider on their behalf, to reduce power system losses and keep the distribution feeder voltage and frequency at standard levels.
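A minimal sketch of the charge/discharge logic described above: the battery absorbs reverse power flow (negative net feeder load) and releases energy when demand is positive. All figures here are illustrative assumptions, not data or the paper's optimization algorithm.

```python
# Toy charge/discharge dispatch: negative net load = reverse power flow
# to be absorbed; positive net load = demand the battery can serve.
net_load = [120, 90, -80, -150, -60, 100, 180, 140]  # kW per interval (hypothetical)
capacity_kwh = 400
soc = 0.0                                            # state of charge
schedule = []
for p in net_load:
    if p < 0 and soc < capacity_kwh:                 # charge on reverse flow
        e = min(-p, capacity_kwh - soc)
        soc += e
        schedule.append(-e)                          # negative = charging
    elif p > 0 and soc > 0:                          # discharge on demand
        e = min(p, soc)
        soc -= e
        schedule.append(e)
    else:
        schedule.append(0.0)
```

In a real study the dispatch would come from the proposed optimization, subject to feeder voltage and loss constraints; this loop only illustrates the peak/off-peak energy-shifting idea.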

Keywords: ancillary services, battery, distribution system, optimization

Procedia PDF Downloads 116
4735 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning

Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan

Abstract:

The conventional dehumidification method in air-conditioning systems mostly utilizes a cooling coil to remove moisture from the air by cooling the supply air down below its dew point temperature. The process then needs to reheat the supply air to meet the set indoor condition, which consumes a considerable amount of energy and affects the coefficient of performance of the system. If the dehumidification and cooling processes are separated and operated independently, indoor conditions can be controlled more efficiently. Decoupling dehumidification and cooling in heating, ventilation, and air-conditioning systems is therefore one of the key next-generation technologies, with membrane dehumidification among the leading candidates. The membrane dehumidification method has the advantages of low cost, low energy consumption, etc. It exploits the pore size and hydrophilicity of the membrane to transfer water vapor by a mass transfer effect: the moisture in the supply air is removed by the potential energy and driving force across the membrane. The process saves the latent load otherwise used to condense water, making energy use more efficient because no heat transfer effect is involved. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted on composite and commercial membranes. Based on the measured data, a suitable dehumidification membrane can be chosen for designing the flow channel length and components of the planar dehumidifier. The vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity, making it suitable for separating water vapor from air.
The Nafion membrane thus shows promising potential for the dehumidification process.

Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning

Procedia PDF Downloads 127
4734 Biomedical Definition Extraction Using Machine Learning with Synonymous Feature

Authors: Jian Qu, Akira Shimazu

Abstract:

OOV (out-of-vocabulary) terms are terms that cannot be found in most dictionaries. Although it is possible to translate such OOV terms, the translations do not provide any real information to a user. We present an OOV term definition extraction method that uses information available from the Internet. We use features such as synonym occurrence and location distance, and apply a machine learning method to find the correct definitions for OOV terms. We tested our method on both biomedical-type and name-type OOV terms; it outperforms existing work with an accuracy of 86.5%.

Keywords: information retrieval, definition retrieval, OOV (out of vocabulary), biomedical information retrieval

Procedia PDF Downloads 473
4733 Eliminating Cutter-Path Deviation for Five-Axis NC Machining

Authors: Alan C. Lin, Tsong Der Lin

Abstract:

This study proposes a deviation control method that adds interpolation points to the numerical control (NC) codes of five-axis machining in order to achieve the required machining accuracy. Specific research issues include: (1) converting machining data between the CL (cutter location) domain and the NC domain, (2) calculating the deviation between the deviated path and the linear path, (3) finding interpolation points, and (4) determining tool orientations for the interpolation points. System implementation with practical examples is also included to highlight the applicability of the proposed methodology.
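Issue (2), the deviation between a linear NC move and the true curved path, is commonly bounded by the chordal deviation; the sketch below assumes a locally circular path of radius R (the paper's exact formulation may differ) and subdivides the move until the deviation falls below a tolerance, which is the essence of issue (3).

```python
import math

# Chordal deviation: a straight move of length L across an arc of radius R
# departs from the arc by at most e = R - sqrt(R^2 - (L/2)^2).
def chord_deviation(R, L):
    return R - math.sqrt(R**2 - (L / 2) ** 2)

# Subdivide the move into n equal chords until the deviation is within tol,
# i.e. determine how many interpolation points must be inserted (n - 1).
def segments_needed(R, L, tol):
    n = 1
    while chord_deviation(R, L / n) > tol:
        n += 1
    return n
```

For example, a 20 mm linear move over a 50 mm local radius exceeds a 0.5 mm tolerance and must be split into two chords (one interpolation point).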

Keywords: CAD/CAM, cutter path, five-axis machining, numerical control

Procedia PDF Downloads 406
4732 A Numerical Investigation of Segmental Lining Joints Interactions in Tunnels

Authors: M. H. Ahmadi, A. Mortazavi, H. Zarei

Abstract:

Several authors have described the main mechanism of crack formation in segmental linings during the construction of tunnels with tunnel boring machines. A comprehensive analysis of segmental lining joints may help to guarantee safe construction during the tunneling and serviceability stages. The most frequent types of segment damage are caused by uneven segment matching due to contact deficiencies. This paper investigates the interaction mechanism of precast concrete lining joints in tunnels. The discrete element method (DEM) was used to analyze a typical segmental lining model consisting of six segment rings. In the analyses, typical segmental lining design parameters and rock mass properties of the Ghomrood water conveyance tunnel in Ghom, Iran, were employed. The analysis considered the worst-case loading scenario faced during the boring of the Ghomrood tunnel, associated with a crushed zone dipping at 75 degrees at the location of the key segment. The loads acting on the segment joints comprise the force of this crushed zone stratum intersecting the tunnel, the gravity force of the segments, and earth pressures. Moreover, the effect of changes in the horizontal stress ratio on the segment loads was assessed: boundary conditions associated with K (the ratio of horizontal to vertical stress) values of 0.5, 1, 1.5, and 2 were applied to the model, and a separate analysis was conducted for each case, as well as for the geological condition of a saturated crushed zone under the critical scenario. Important parameters such as stresses, moments, and displacements were measured at the joint locations and in the surrounding rock, and the segment joint interactions were assessed and analyzed accordingly.
The numerical results demonstrate that the maximum bending moments in the longitudinal joints occurred for the crushed zone with the weakest strength (sandstone). In addition, increasing the load at the segment-stratum interfaces affected the radial stress in the longitudinal joints and ultimately led to the opening of the joints.

Keywords: joint, interface, segment, contact

Procedia PDF Downloads 245
4731 Application of Analytical Method for Placement of DG Unit for Loss Reduction in Distribution Systems

Authors: G. V. Siva Krishna Rao, B. Srinivasa Rao

Abstract:

The main aim of this paper is to implement a technique using distributed generation (DG) in distribution systems to reduce distribution system losses and improve voltage profiles. A fuzzy logic technique is used to select the proper location of the DG, and an analytical method is proposed to calculate the size of the DG unit at any power factor. The optimal DG sizes are compared with those obtained using a genetic algorithm. The suggested method is implemented in MATLAB and tested on the IEEE 33-bus system, and the results are presented.
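The loss-reduction mechanism behind DG placement can be illustrated on a toy two-bus feeder (this is not the paper's analytical method, and all electrical values are hypothetical): a DG supplying part of the load locally reduces the feeder current, and line loss falls with the square of that current.

```python
# I^2 R line loss for a single feeder serving a load, with a DG unit
# injecting dg_kw at the load bus. Unity power factor assumed throughout.
def feeder_loss_kw(load_kw, dg_kw, v_kv=11.0, r_ohm=1.5):
    i_amp = (load_kw - dg_kw) / v_kv   # feeder current in amps (P/V at unity pf)
    return i_amp**2 * r_ohm / 1000.0   # loss in kW

loss_without_dg = feeder_loss_kw(2000, 0)     # ~49.6 kW with these numbers
loss_with_dg = feeder_loss_kw(2000, 1500)     # sharply lower
```

The quadratic dependence is why even a modest, well-placed DG unit yields a disproportionate loss reduction, which is what the fuzzy location selection and analytical sizing aim to exploit.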

Keywords: DG Units, sizing of DG units, analytical methods, optimum size

Procedia PDF Downloads 457
4730 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to identify possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons, differing in the location and number of plates, were investigated. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the real behavior of the masticatory system. Finite element meshing and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars, and the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts.
The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). With initial prestressing, there is a significant stress increase around the fixing holes at the bottom mini-plate due to the assembly stress; this local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, and relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared with clinical observations. They provide information helpful for better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 228
4729 The Need for Multi-Edge Strategies and Solutions

Authors: Hugh Taylor

Abstract:

Industry analysts project that edge computing will generate tens of billions in revenue in the coming years. It is not clear, however, whether this will actually happen, and who, if anyone, will make it happen. Edge computing is seen as a critical success factor in industries ranging from telecom to enterprise IT and co-location. However, will any of these industries actually step up to make edge computing into a viable technology business? This paper looks at why the edge seems to be in a chasm, on the edge of realization, so to speak, but failing to coalesce into a coherent technology category like the cloud, and how the segment's divergent industry players can come together to build a viable business at the edge.

Keywords: edge computing, multi-edge strategies, edge data centers, edge cloud

Procedia PDF Downloads 88
4728 Site Investigations and Mitigation Measures of Landslides in Sainj and Tirthan Valley of Kullu District, Himachal Pradesh, India

Authors: Laxmi Versain, R. S. Banshtu

Abstract:

Landslides are the most commonly occurring geological hazards in the mountainous regions of the Himalaya. This mountainous zone faces a large number of seismic disturbances, climatic changes, and topographic changes due to increasing urbanization, which has led several researchers to search for the most suitable methodologies. Landslide hazard zonation (LHZ) has become widely accepted as a suitable method to identify the factors that trigger landslide phenomena on higher reaches. The most vulnerable zones, or zones of weakness, are identified, and safe mitigation measures are suggested to mitigate hazards and channel the study of an affected area. The use of the LHZ methodology in relative zones of weakness depends upon the data available for the particular site: the causative factors are identified, and data are assembled to infer the results. Factors like seismicity in mountainous regions are closely associated with making zones of thrusts, faults, or lineaments more vulnerable. Data related to soil, terrain, rainfall, geology, slope, and nature of terrain vary between landforms and areas. Thus, the relative causes are identified and classified by giving a specific weightage to each parameter. The factors causing slope instability are numerous and can be grouped to infer the potential modes of failure; the triggering factors of landslides in the mountains are not uniform. Urbanization has advanced rapidly, and concrete jungles are emerging at a very fast pace in the hilly regions of the Himalayas. The local terrain has been largely modified, and hence instability is being triggered in several zones at a very fast pace. More strategic and pronounced methods are required to reduce the effect of landslides.

Keywords: zonation, LHZ, susceptible, weightages, methodology

Procedia PDF Downloads 178
4727 A Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully understood; another problem is the lack of sufficient measurement data to verify proposed hypotheses. Different types of models exist for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, which relate to different time and space scales of the processes, and there are models describing bed-load transport (discussed in this work), suspended transport, and total sediment transport. LST models use, among other things, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated for heterogeneous sediment. Moreover, the LST rate is strongly dependent on local environmental conditions. To organize the existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Muller, Ribberink, and Engelund & Hansen). It turned out that the estimated values of instantaneous longshore mass sediment transport are in general agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out as particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function); thus, the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load in storm conditions; thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.
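For reference, one of the four classical one-grid-point formulas compared in the study, Meyer-Peter & Muller, can be sketched in its standard textbook form: dimensionless transport phi = 8(theta - theta_c)^(3/2) above the threshold, converted to a volumetric rate via sqrt((s - 1) g d^3). The parameter values below (quartz sand, fine grain size) are illustrative, not the study's calibration.

```python
import math

# Meyer-Peter & Muller bedload formula (classical form).
# theta   : Shields parameter (dimensionless bed shear stress)
# theta_c : critical Shields parameter (~0.047 in the original formulation)
# s       : sediment relative density (quartz: 2.65), d: grain diameter (m)
def mpm_bedload(theta, theta_c=0.047, s=2.65, g=9.81, d=2e-4):
    if theta <= theta_c:
        return 0.0                     # no transport below the threshold
    phi = 8.0 * (theta - theta_c) ** 1.5
    return phi * math.sqrt((s - 1.0) * g * d ** 3)  # m^2/s per unit width
```

The 3/2-power dependence on excess shear stress is shared, with different coefficients, by several of the compared theories, which is what makes a one-coefficient unified formula plausible.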

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 371
4726 Non-Destructive Inspection for Tunnel Lining Concrete with Small Void by Using Ultrasonic

Authors: Yasuyuki Nabeshima

Abstract:

Many tunnels constructed more than 50 years ago exist in Japan. The lining concrete in these tunnels has many problems, such as cracking, flaking, and voids. Inner voids between the lining concrete and the rock are very hard to find by external visual checks and hammering tests. In this paper, non-destructive inspection using ultrasound was applied to investigate inner voids. A model concrete specimen with an inner void was used, and ultrasonic inspection was applied to determine the location and size of the void. As a result, the ultrasonic inspection could accurately find the inner void.
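The basic time-of-flight relation behind pulse-echo ultrasonic inspection can be sketched as follows; the velocity value is a typical figure for sound concrete, assumed for illustration rather than taken from the paper.

```python
# Pulse-echo time of flight: an echo returning after t seconds from a
# reflector (e.g. a void boundary) lies at depth v * t / 2, since the
# pulse travels to the reflector and back.
def reflector_depth_m(t_us, v_mps=4000.0):   # ~4000 m/s assumed for concrete
    return v_mps * (t_us * 1e-6) / 2.0       # t given in microseconds

# An echo at 100 microseconds would place the reflector ~0.2 m deep.
depth = reflector_depth_m(100)
```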

Keywords: tunnel, lining concrete, void, non-destructive inspection, ultrasonic

Procedia PDF Downloads 178
4725 Survey of Communication Technologies for IoT Deployments in Developing Regions

Authors: Namugenyi Ephrance Eunice, Julianne Sansa Otim, Marco Zennaro, Stephen D. Wolthusen

Abstract:

The Internet of Things (IoT) is a network of connected data processing devices, mechanical and digital machinery, items, animals, or people that may send data across a network without requiring human-to-human or human-to-computer interaction. Each component has sensors that can pick up on specific phenomena, as well as processing software and other technologies that can link to and communicate with other systems and/or devices over the Internet or other communication networks and exchange data with them. IoT is increasingly being used in fields other than consumer electronics, such as public safety, emergency response, industrial automation, autonomous vehicles, the Internet of Medical Things (IoMT), and general environmental monitoring. Consumer-based IoT applications, like smart home gadgets and wearables, are also becoming more prevalent. This paper presents the main IoT deployment areas for environmental monitoring in developing regions and the backhaul options suitable for them. A detailed review of each of the papers selected for the study is included in Section III of this document. The study includes an overview of existing IoT deployments and the underlying communication architectures, protocols, and technologies that support them. This overview shows that Low Power Wide Area Networks (LPWANs), as summarized in Table 1, are very well suited to environmental monitoring architectures designed for remote locations. LoRa technology, particularly the LoRaWAN protocol, has an advantage over other technologies due to its low power consumption, adaptability, and suitable communication range. The prevailing challenges of the different architectures are discussed and summarized in Table 3 of Section IV, where the main problem is the obstruction of communication paths by buildings, trees, hills, etc.

Keywords: communication technologies, environmental monitoring, Internet of Things, IoT deployment challenges

Procedia PDF Downloads 63
4724 Ripening Conditions Suitable for Marketing of Winter Squash ‘Bochang’

Authors: Do Su Park, Sang Jun Park, Cheon Soon Jeong

Abstract:

This study was performed in order to investigate the optimum ripening conditions for the marketing of winter squash. The research sample, 'Bochang', was grown at Hongcheon in Gangwon province in August 2014. For ripening, the samples were stored at 25℃, 30℃, and 35℃ with a humidity of RH 70 ± 5%, and were checked every 3 days for 21 days. Respiration rate, water loss, hardness, coloration, and the contents of soluble solids, starch, and total sugar were evaluated after storage. The respiration rate decreased in all treatments as the storage period lengthened. Water loss increased with temperature, reaching 13% at 35℃ on the 21st day of storage. At 25℃ and 30℃, the initial hardness of 47 N decreased only slightly over 21 days of ripening, whereas at 35℃ it showed a much larger reduction. Soluble solids content increased with a longer ripening period, peaking at 15 days at 30℃ and 35℃ and at 21 days at 25℃; the higher the temperature, the higher the soluble solids content. At 25℃ and 30℃, coloration increased rapidly until day 12 of ripening, with no difference between the two, while at 35℃ it continued to increase up to 21 days; at 35℃, however, appearance quality declined as yellowing of the pericarp occurred from day 9 of ripening. The coloration of the fruit flesh increased until day 9 of ripening and decreased thereafter, with no significant difference among the temperature conditions. The higher the temperature, the lower the starch content: at 30℃ and 35℃ it decreased with a longer storage period, while at 25℃ it changed minimally. Total sugar increased in all treatments with a longer storage period, and the higher the temperature, the higher the total sugar content. Therefore, ripening at 25℃ for 18-21 days or at 30℃ for 12-15 days is suitable.

Keywords: marketing, ripening, temperature, winter squash

Procedia PDF Downloads 581
4723 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant. Because sewage sludge contains a large amount of substances that are harmful to human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from the sludge of sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of producing alternative fuel from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology covers the entire logistics chain, from sludge extraction through mechanical moisture reduction to about 40%, transport to the pelletizing line, and drying for pelleting, to the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting and on the simulation of the cash flows associated with the implementation of the project over its lifetime. For a given required return on invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project over its lifetime is zero. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value.
The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.), and the inputs use market prices. At the same time, the opportunity cost principle is respected: the use of waste for alternative fuel includes the avoided costs of waste disposal. The methodology also accounts for the emission allowances saved due to the displacement of coal by the alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce pellets of sufficiently high quality from sludge; a mixture of sludge and paper waste has proved to be an even more suitable material for pelleting. At the same time, preliminary results of the economic analysis show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this method of sludge disposal is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
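The pricing logic described, finding the product price at which the project's net present value is zero over its 10-year life, can be sketched as a simple root search; the investment, output, cost, and discount figures below are illustrative assumptions, not the study's inputs.

```python
# NPV of a pellet project at a given price per GJ: initial investment,
# then a constant annual cash flow discounted over the project life.
def npv(price_per_gj, invest=2.0e6, output_gj=40000, opex=3.0e5,
        rate=0.08, years=10):
    cash = price_per_gj * output_gj - opex
    return -invest + sum(cash / (1 + rate) ** t for t in range(1, years + 1))

# Bisection for the break-even price: NPV is monotone increasing in price,
# so the zero crossing is unique within the bracket.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        hi = mid
    else:
        lo = mid
break_even_price = (lo + hi) / 2   # EUR/GJ at which NPV = 0
```

At this price the investor earns exactly the discount rate on the invested capital, which is the break-even condition the abstract describes.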

Keywords: alternative fuel, economic analysis, pelleting, sewage sludge

Procedia PDF Downloads 111
4722 Concordance between Biparametric MRI and Radical Prostatectomy Specimen in the Detection of Clinically Significant Prostate Cancer and Staging

Authors: Rammah Abdlbagi, Egmen Tazcan, Kiriti Tripathi, Vinayagam Sudhakar, Thomas Swallow, Aakash Pai

Abstract:

Introduction and Objectives: MRI has an increasing role in the diagnosis and staging of prostate cancer. Multiparametric MRI includes multiple sequences: T2 weighting, diffusion weighting, and dynamic contrast enhancement (DCE). Administration of DCE is expensive, time-consuming, and requires medical supervision due to the risk of anaphylaxis. Biparametric MRI (bpMRI), without DCE, overcomes many of these issues; however, there are conflicting data on its accuracy. Furthermore, data on the concordance between bpMRI lesions and pathology specimens, as well as the rate of cancer stage upgrading after surgery, are limited in the available literature. This study examines the diagnostic accuracy of bpMRI in the diagnosis of prostate cancer and in radiological staging. Specifically, we aimed to evaluate the ability of bpMRI to accurately localise malignant lesions, to better understand its accuracy and application in MRI-targeted biopsies. Materials and Methods: One hundred and forty patients who underwent bpMRI prior to radical prostatectomy (RP) at a single institution were retrospectively reviewed. Histological grade from the prostate biopsy was compared with surgical specimens from RP. Clinically significant prostate cancer (csPCa) was defined as Gleason grade group ≥2. bpMRI staging was compared with RP histology. Results: The overall sensitivity of bpMRI in diagnosing csPCa, independent of location and staging, was 98.87%. Of the 140 patients, 29 (20.71%) had their prostate biopsy histology upgraded at RP. Sixty-one patients (43.57%) had csPCa noted on RP specimens in areas that were not identified on the bpMRI, and 55 (39.29%) were upstaged after RP from the original bpMRI staging. Conclusions: Whilst the overall sensitivity of bpMRI in predicting any clinically significant cancer was good, there was notably poor concordance in the location of the tumour between bpMRI and the eventual RP specimen.
The results suggest that caution should be exercised when using bpMRI for targeted prostate biopsies and support the continued role of systematic biopsies. Furthermore, a significant number of patients were upstaged at RP from their original staging with bpMRI. Based on these findings, bpMRI results should be interpreted with caution, as bpMRI can underestimate TNM stage, requiring careful consideration of the treatment strategy.
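The headline proportions in the abstract can be reproduced directly from the stated counts; a minimal sketch (the 98.87% sensitivity figure would additionally require the underlying true-positive and false-negative counts, which are not given):

```python
# Illustrative recomputation of the proportions reported in the abstract.
n_patients = 140
upgraded_at_rp = 29         # biopsy histology upgraded at radical prostatectomy
cspca_missed_on_bpmri = 61  # csPCa at RP in areas not identified on bpMRI
upstaged_after_rp = 55      # TNM stage higher at RP than on bpMRI

def pct(count, total):
    """Percentage rounded to two decimals, matching the abstract's style."""
    return round(100.0 * count / total, 2)

print(pct(upgraded_at_rp, n_patients))         # 20.71
print(pct(cspca_missed_on_bpmri, n_patients))  # 43.57
print(pct(upstaged_after_rp, n_patients))      # 39.29
```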

Keywords: biparametric MRI, Ca prostate, staging, post prostatectomy histology

Procedia PDF Downloads 44
4721 Alive Cemeteries with Augmented Reality and Semantic Web Technologies

Authors: Tamás Matuszka, Attila Kiss

Abstract:

Due to the proliferation of smartphones in everyday use, several different outdoor navigation systems have become available. Since these smartphones can connect to the Internet, users can also obtain location-based information during navigation. With this information, users can interactively get to know the specifics of a particular area (for instance, an ancient cultural site, a statue park, or a cemetery). In this paper, we present an Augmented Reality system which uses Semantic Web technologies and is based on the interaction between the user and the smartphone. The system allows the user to navigate through a specific area and provides information and details about the sights in an interactive manner.

Keywords: augmented reality, semantic web, human computer interaction, mobile application

Procedia PDF Downloads 319
4720 Identifying Strategies and Techniques for the Egyptian Medium and Large Size Contractors to Respond to Economic Hardship

Authors: Michael Salib, Samer Ezeldin, Ahmed Waly

Abstract:

There are numerous challenges facing the construction industry in several countries in the Middle East, as a result of various economic and political factors. In Egypt, for example, several construction companies have shut down and left the market since 2016. These companies closed because they did not respond with the suitable techniques and strategies that would have enabled them to survive this period of economic turmoil. This research aims to identify adequate strategies that Egyptian contractors could implement to survive and remain competitive during such periods of economic hardship. Two techniques were used to identify these strategies. First, in-depth research was conducted on companies located in countries that had suffered similar economic hardship, to identify the strategies they used to survive. Second, interviews were conducted with experts in the construction field to list the effective strategies that allowed them to survive. At the end of each interview, the experts were also asked to rate the applicability of the strategies previously identified in foreign countries, and the likely efficiency of each strategy if used in Egypt. A framework model was developed to assist construction companies in choosing techniques suited to their size, by identifying the top-ranked strategies and techniques a company should adopt based on the parameters given to the model. To verify this framework, the financial statements of two leading companies in the Egyptian construction market were studied: the first contractor had applied nearly all of the top-ranked strategies identified in this paper, while the other contractor had applied only a few of them. Finally, further expert interviews were conducted to validate the framework; these experts were asked to test the model and rate its applicability and effectiveness through a questionnaire.

Keywords: construction management, economic hardship, recession, survive

Procedia PDF Downloads 112
4719 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The ortho-image formed by this process is corrected for lens distortion, topographic relief, and camera tilt, and can therefore be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map, and such a comparison is only possible when the image is ortho-rectified into the same co-ordinate system as the existing map. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decomposition of the video stream into individual frames; (2) determination of the interior camera orientation parameters; (3) determination of the relative exterior orientation parameters of the video frames with respect to each other; (4) determination of the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field taken from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
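As a simplified illustration of the geo-referencing geometry that underlies such a procedure, the sketch below maps an image pixel to ground coordinates for an ideal nadir-pointing camera over flat terrain. The function name and parameters are illustrative only; the full four-step procedure additionally corrects for lens distortion, relief, and camera tilt:

```python
def pixel_to_ground(px, py, cx, cy, focal_px, altitude_m, cam_e, cam_n):
    """Map an image pixel to ground coordinates for an ideal nadir camera.

    Assumes flat terrain and zero tilt (a deliberate simplification).
    px, py       : pixel coordinates of the target
    cx, cy       : principal point (image centre) in pixels
    focal_px     : focal length expressed in pixels
    altitude_m   : flying height above ground in metres
    cam_e, cam_n : camera ground position (easting, northing) in metres
    """
    gsd = altitude_m / focal_px            # ground sample distance (m/pixel)
    east = cam_e + (px - cx) * gsd
    north = cam_n - (py - cy) * gsd        # image y grows downward
    return east, north
```

For example, with a flying height of 100 m and a focal length of 1000 pixels, the ground sample distance is 0.1 m/pixel, so a target 100 columns to the right of the principal point maps 10 m east of the camera's ground position.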

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 464
4718 Pathway to Sustainable Shipping: Electric Ships

Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang

Abstract:

Maritime transport plays an important role in global economic development but inevitably faces increasing pressures from all sides, such as the need to reduce ship operating costs and to protect the environment. An ideal innovation to address these pressures is the electric ship, which is still at an early stage of adoption. Given the special characteristics of electric ships, in particular their limited travel range, the service network needs to be re-designed carefully to guarantee their efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, covering the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing total cost, comprising the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are compared: an electric ship scenario, where all transportation tasks are fulfilled by electric ships, and a conventional ship scenario, where all transportation tasks are fulfilled by fuel oil ships. The results reveal that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but consumes 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of the transportation tasks. 
The implications of the results are as follows: 1) it is necessary to equip ships with large-capacity batteries when the number of charging stations is low; 2) battery capacity influences the number of ships deployed on each route; 3) increasing battery capacity makes electric ships more cost-effective; 4) charging speed does not affect the charging amount or the location of charging stations, but influences the schedule of ships on each route; 5) there exists an optimal volume capacity at which all costs and the total delivery time are lowest; 6) the service time limit influences ship schedules and ship costs.
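The interaction between travel range and charging-station placement can be illustrated with a deliberately simplified, brute-force sketch of the location subproblem. The actual model in the abstract is a much richer mixed-integer program covering charging plans, scheduling, and deployment; all names, costs, and positions below are hypothetical:

```python
from itertools import combinations

def min_station_cost(ports, candidates, build_cost, max_range_km):
    """Brute-force sketch of the charging-station location subproblem.

    ports        : sorted positions (km along the river) the ship must call at
    candidates   : positions where a charging station may be built
    build_cost   : construction cost per station (assumed uniform)
    max_range_km : distance the ship can travel on one full charge
    Assumes the ship starts fully charged and can recharge at every
    built station and at both route endpoints.
    Returns (best_cost, best_sites), or (None, None) if infeasible.
    """
    for k in range(len(candidates) + 1):   # fewest stations first
        for sites in combinations(candidates, k):
            # charging opportunities: route endpoints plus built stations
            stops = sorted({ports[0], ports[-1], *sites})
            if all(b - a <= max_range_km for a, b in zip(stops, stops[1:])):
                # first feasible subset has the fewest stations,
                # hence the lowest cost under a uniform build cost
                return k * build_cost, sites
    return None, None
```

For instance, with ports at km 0, 50, and 100 and a 60 km range, no station subset of size zero is feasible, but a single station at km 40 splits the route into legs of 40 km and 60 km, both within range.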

Keywords: cost reduction, electric ship, environmental protection, sustainable shipping

Procedia PDF Downloads 61
4717 Preparation and Characterization of Chitosan-Hydrocortisone Nanoshell for Drug Delivery Application

Authors: Suyeon Kwon, Ik Joong Kang, Wang Bingjie

Abstract:

Chitosan is a polymer usually produced by N-deacetylation of chitin. It is emerging as a promising biocompatible polymer that is harmless to humans. Because of its many merits, such as good adsorption capacity and biodegradability, much research has been carried out on chitosan for drug delivery systems. Drug delivery systems (DDS) have been developed for the controlled release of drugs, so that a drug can be delivered effectively and safely to the targeted part of the human body. The drug used in this work is hydrocortisone, which is used in the treatment of rheumatism, skin diseases, and allergies. In this work, hydrocortisone was used to prepare an allergic rhinitis medicine. Our study focuses on drug delivery through the nasal mucosa using hydrocortisone-impregnated chitosan nanoshells. This study investigated the optimal preparation conditions by varying the concentration and quantity of hydrocortisone. DLS, SEM, TEM, FT-IR, and UV spectroscopy were used to analyze the manufactured chitosan-hydrocortisone silver nanoshells and plain silver nanoshells, which function as drug carriers. This study also investigated new drug carriers and delivery routes for hydrocortisone; various methods of manufacturing chitosan-hydrocortisone nanoshells were attempted in order to establish the optimal conditions. As a result, the average size of the chitosan-hydrocortisone silver nanoshell is about 80 nm, which makes it suitable as a drug carrier, since the optimal size of a drug carrier in the human body is less than 120 nm. The UV spectrum of the chitosan-hydrocortisone silver nanoshell shows the characteristic silver-nanoshell peak at 420 nm. Likewise, the average size of the chitosan-hydrocortisone silver nanoshell prepared under the other conditions is about 100 nm, which is also suitable for a drug carrier in the human body. In addition, the multi-layered silver shell over the chitosan nanoshells induced a red shift of the absorption peak and increased its intensity. 
The resultant chitosan–silver nanocomposites (nanoshells) exhibited an absorption peak around 430 nm, attributed to silver shell formation, i.e., the absorption peak was red-shifted by ca. 40 nm relative to the 390 nm peak of the silver nanoshells.

Keywords: chitosan, drug delivery, hydrocortisone, rhinitis, nanoshell

Procedia PDF Downloads 243
4716 Formulation and Characterization of NaCS-PDMDAAC Capsules with Immobilized Chlorella vulgaris for Phycoremediation of Palm Oil Mill Effluent

Authors: Quin Emparan, Razif Harun, Dayang R. A. Biak, Rozita Omar, Michael K. Danquah

Abstract:

Cultivation of immobilized microalgae cells is on the rise for biotechnological applications. In this study, cultivation of Chlorella vulgaris was carried out both as a suspended free-cell system and as an immobilized-cell system. NaCS-PDMDAAC capsules were used to immobilize C. vulgaris. Initially, the synthesized NaCS with the C. vulgaris culture was prepared at concentrations of 5-20% (w/v) using a 6% hardening solution (PDMDAAC) to investigate the capsules' gel stability and suitability for microalgae cell growth. Then, the capsules produced from 15% NaCS with the C. vulgaris culture were further investigated using 5%, 10%, and 15% (w/v) PDMDAAC solutions. The capsules' gel stability was evaluated through the dissolution time and the loss of the capsules' uniform spherical shape, while suitability for microalgae cell growth was evaluated through the optical density of the microalgae. In this study, the 15% NaCS-10% PDMDAAC capsules were found to be the most suitable for sustaining both the capsules' gel stability and microalgae cell growth in MLA. For that reason, the C. vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules was further characterized physicochemically in terms of morphology, carbon (C), hydrogen (H) and nitrogen (N) content, Fourier transform-infrared (FT-IR), scanning electron microscopy-energy dispersive X-ray (SEM-EDX), zeta potential, and Brunauer-Emmet-Teller (BET) analyses. The results revealed that the presence of sulfonates in the synthesized NaCS and in the NaCS-PDMDAAC capsules, both without and with C. vulgaris, proves that the cellulose hydroxyl groups were successfully bonded to sulfo groups. Besides that, the immobilized microalgae cells have a smaller cell size (6.29 ± 1.09 µm) and a zeta potential of -11.93 ± 0.91 mV compared with the suspended free-cell microalgae culture. It can be concluded that C. 
vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules is relevant as a bioremediator for wastewater treatment purposes owing to its suitable pore and capsule sizes as well as its structural and compositional properties.

Keywords: biological capsules, immobilized cultivation, microalgae, physico-chemical analysis

Procedia PDF Downloads 148
4715 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

A large plantation estate of Eucalyptus globulus has been established and grown to produce pulpwood. This resource is not suitable for the production of decorative products, principally due to the low grades of wood and their “dull” appearance, but many trials have already been undertaken on the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which reduces the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would therefore provide great potential for the production of high-value wood products such as furniture, joinery, flooring, and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also called “reconstructed veneer”. An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design, and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on poplar veneer from plantations, whose wood is characterized by low density, even colour, few defects, and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour, and low permeability. 
Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are treated. Both soaking (dipping) and vacuum-pressure methods are used in the study, so as to compare the results and select the most efficient method for veneer dyeing. To date, the results of colour measurements in the CIELAB colour system have shown significant differences in the colour of the undyed veneers produced from the heartwood. According to the colour measurements, the colour became moderately darker with increasing sodium chloride, compared to the control samples. It is difficult to recommend a suitable dye solution at this stage, as variables such as dye concentration, dyeing temperature, and dyeing time have not yet been tested. Once all trials have been completed using the optimal colouring parameters, the dye will also be applied with and without a UV absorber.
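Colour differences measured in the CIELAB system, as above, are conventionally summarised as a single ΔE*ab value; a minimal sketch using the CIE76 formula, with hypothetical L*, a*, b* readings for an undyed and a dyed veneer sample:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB colours (L*, a*, b*)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical readings: a lower L* indicates a darker (dyed) veneer.
undyed = (72.0, 6.0, 18.0)
dyed = (58.0, 9.0, 22.0)
print(round(delta_e_ab(undyed, dyed), 2))  # ≈ 14.87
```

A ΔE*ab of this magnitude would correspond to a colour change clearly visible to the naked eye.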

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 333
4714 Selection of Developmental Stages of Bovine in vitro-Derived Blastocysts Prior to Vitrification and Embryo Transfer: Implications for Cattle Breeding Programs

Authors: Van Huong Do, Simon Walton, German Amaya, Madeline Batsiokis, Sally Catt, Andrew Taylor-Robinson

Abstract:

Identification of the most suitable stage of bovine in vitro-derived blastocysts (early, expanded, or hatching) prior to vitrification is a straightforward process that facilitates the decision as to which blastocyst stage to use for the transfer of fresh and vitrified embryos. Research based on in vitro evaluation of the suitable stages has shown that the more advanced developmental stages of blastocysts are recommended for fresh embryo transfer, while the earlier stages are proposed for embryo transfer following vitrification. There is, however, limited information on blastocyst stages based on in vivo assessment. Hence, the aim of the present study was to determine the optimal blastocyst stage for vitrification and embryo transfer through a two-step procedure of embryo transfer followed by pregnancy testing at 35, 60, and 90 days of pregnancy. 410 good-quality oocytes aspirated by the ovum pick-up technique from 8 donor cows were subjected to in vitro embryo production; good-quality embryos were then selected for vitrification and embryo transfer. Subsequently, 77 vitrified embryos at different blastocyst stages were transferred to synchronised recipient cows. The overall cleavage and blastocyst rates of the oocytes were 68.8% and 41.7%, respectively. In addition, the fertility and blastocyst production of the 6 bulls used for in vitro fertilization were examined and shown to differ significantly (P<0.05). Results of the ongoing pregnancy trials conducted at 35, 60, and 90 days will be discussed; preliminary data indicate that individual bulls demonstrate distinctly different in vitro fertility performance. Findings on conception rates would provide a useful tool to aid the selection of bovine in vitro-derived embryos for vitrification and embryo transfer in commercial settings.

Keywords: blastocyst, embryo transfer, in vitro-derived embryos, ovum pick-up, vitrification

Procedia PDF Downloads 280
4713 Low-Impact Development Strategies Assessment for Urban Design

Authors: Y. S. Lin, H. L. Lin

Abstract:

Climate change and the land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach that reduces the area of impervious surface and manages stormwater at the source with decentralized micro-scale control measures. However, the current benefit assessment and practical application of LID in Taiwan still tend to address development plans at the community and building-site scales. In urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as their different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement, but are also unsuited to the different types of built environments, yielding little benefit for some of them. Looking to strengthen the link between LID and urban design in order to reduce runoff and cope with urban flooding, this research considers the characteristics of different types of built environments when developing LID strategies. The built environments are classified by cluster analysis based on density measures, such as the Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analysed. Flood situations are simulated using a quasi-two-dimensional flood plain flow model, and the flood mitigation effectiveness of the different types of built environments under different low-impact development strategies is evaluated. The information from the results of the assessment can be implemented more precisely in urban design. In addition, it helps in enacting regulations on low-impact development strategies in urban design that are more suitable for each type of built environment.
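The density measures used for the cluster analysis are related through simple ratios. The abstract does not state its exact definitions, so the sketch below assumes the common Spacematrix-style conventions:

```python
def density_measures(site_area, footprint, floors):
    """Spacematrix-style density indicators for a built parcel (assumed
    definitions, since the abstract does not give formulas):

    GSI (coverage)     = building footprint / site area
    FSI (intensity)    = gross floor area / site area
    L   (mean floors)  = FSI / GSI
    OSR (spaciousness) = (1 - GSI) / FSI
    """
    gsi = footprint / site_area
    fsi = footprint * floors / site_area
    return {"GSI": gsi, "FSI": fsi, "L": fsi / gsi, "OSR": (1 - gsi) / fsi}
```

For example, a 1000 m² parcel with a 400 m² footprint and 5 floors gives GSI 0.4, FSI 2.0, L 5, and OSR 0.3; clustering parcels on these four indicators separates dense mid-rise fabric from open low-rise fabric.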

Keywords: low-impact development, urban design, flooding, density measures

Procedia PDF Downloads 314
4712 Building a Comprehensive Repository for Montreal Gamelan Archives

Authors: Laurent Bellemare

Abstract:

After the showcase of traditional Indonesian performing arts at the Vancouver Expo in 1986, Canadian universities inherited sets of Indonesian gamelan orchestras and soon began offering courses for music students interested in learning these diverse traditions. Among them, Université de Montréal was offered two sets of Balinese orchestras, a novelty that allowed a community of Montreal gamelan enthusiasts to form and engage with this music. A few generations later, a large body of archives has amassed, framing the history of this niche community’s achievements. This material, scattered across public and private archive collections, comes in various formats: Digital Audio Tape, audio cassettes, Video Home System videotape, digital files, photos, reel-to-reel audiotape, posters, concert programs, letters, TV shows, reports, and more. Attempting to study these documents in order to unearth a chronology of gamelan in Montreal has proven challenging, since no suitable platform for their preservation, storage, and research currently exists. These files are therefore hard to find due to their decentralized locations. Additionally, most of the documents in older formats have yet to be digitized. In the case of recent digital files, such as pictures or rehearsal recordings, the locations can be even messier and the quantity overwhelming. Aside from the basic issue of choosing a suitable repository platform, questions of legal rights and methodology arise. For posterity, these documents should nonetheless be digitized, organized, and stored in an easily accessible online repository. This paper aims to underline the various challenges encountered in the early stages of such a project, as well as to suggest ways of overcoming the obstacles to a thorough archival investigation.

Keywords: archival work, archives, Balinese gamelan, Canada, Gamelan, Indonesia, Javanese gamelan, Montreal

Procedia PDF Downloads 101
4711 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the recent deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite empirical research carried out at country-level that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess diffusion’s theories on a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build up predictive models accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism’s diffusion on a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation - a recent fitting approach that computes fast and accurate estimates of posterior marginals. 
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which indicate the model’s predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effects of geographical, economic, and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 333
4710 Optimisation of Dyes Decolourisation by Bacillus aryabhattai

Authors: A. Paz, S. Cortés Diéguez, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

Synthetic dyes are extensively used in the paper, food, leather, cosmetics, pharmaceutical, and textile industries. Wastewater resulting from their production causes several environmental problems. Improper disposal of these effluents has adverse impacts not only on colour but also on water quality (total organic carbon, biological oxygen demand, chemical oxygen demand, suspended solids, salinity, etc.), on flora (inhibition of photosynthetic activity), on fauna (toxic, carcinogenic, and mutagenic effects), and on human health. The aim of this work is to optimize the decolourisation of different types of dyes by Bacillus aryabhattai. Initially, different dyes (Indigo Carmine, Coomassie Brilliant Blue, and Remazol Brilliant Blue R) and suitable culture media (Nutritive Broth, Luria Bertani Broth, and Trypticasein Soy Broth) were selected. Then, a central composite design (CCD) was employed to optimise the process and analyse the significance of each abiotic parameter. Three process variables (temperature, salt concentration, and agitation) were investigated in the CCD at 3 levels with 2 star points. A total of 23 experiments were carried out according to a full factorial design, consisting of 8 factorial experiments (coded to the usual ±1 notation), 6 axial experiments (on the axes at a distance of ±α from the centre), and 9 replicates (at the centre of the experimental domain). The experimental results suggest that this strain efficiently removes the tested dyes in all 3 media studied, although Trypticasein Soy Broth (TSB) was the most suitable medium. Indigo Carmine and Coomassie Brilliant Blue at the maximal tested concentration of 150 mg/l were completely decolourised, while acceptable removal was observed for the more recalcitrant dye Remazol Brilliant Blue R at a concentration of 50 mg/l.
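The run layout described above (8 factorial, 6 axial, and 9 centre-point experiments, 23 runs in total) can be generated mechanically. A minimal sketch, assuming the rotatable axial distance α ≈ 1.682 for three factors, since the abstract does not state the value of α used:

```python
from itertools import product

def ccd_points(k=3, alpha=1.682, n_center=9):
    """Coded runs for a central composite design with k factors:
    2**k factorial corners (±1), 2*k axial (star) points at ±alpha,
    and n_center centre-point replicates."""
    factorial = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

runs = ccd_points()
print(len(runs))  # 8 + 6 + 9 = 23 runs, as in the abstract
```

Each coded run is then mapped back to real settings (temperature, salt concentration, agitation) via the chosen low/high levels for each factor.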

Keywords: Bacillus aryabhattai, dyes, decolourisation, central composite design

Procedia PDF Downloads 204
4709 Spatial Analysis as a Tool to Assess Risk Management in Peru

Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado

Abstract:

A flood vulnerability index was developed for the Piura River watershed in northern Peru using Principal Component Analysis (PCA) to assess flood risk. The official methodology to assess risk from natural hazards in Peru was introduced in 1980 and proved effective for aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and the ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of national and international data collection efforts, which can supplement our understanding of risk. Furthermore, it does not take advantage of broadly applied statistical methods such as PCA, which highlight the central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in the Peruvian basin. Spatial analysis of the census and other datasets provides a better understanding of current land occupation and of the basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows a large number of variables to be reduced to a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. Human settlement is correlated with water availability, which is mainly provided by rivers; for this reason, a comprehensive picture of where the population is located within the river basin is necessary to establish flood prevention policies. Grouping the territory into 5x5 km grid cells allows flood risk to be analysed spatially rather than by political divisions of the territory. 
The index was applied to the Peruvian region of Piura, where several floods have occurred in recent years; it was one of the regions most affected by ENSO events in Peru. The analysis revealed inequalities in access to basic services, such as water, electricity, internet, and sewerage, between rural and urban areas.
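The core of the index construction, extracting a leading component from standardized indicators, can be sketched in pure Python via power iteration. This is an illustrative simplification of a full PCA (only the first component is extracted), and the data and indicator interpretation are hypothetical:

```python
def principal_component(data, iters=200):
    """First principal component of a set of indicator rows, via power
    iteration on the correlation matrix (pure-Python illustrative sketch).
    Rows could be 5x5 km grid cells, columns vulnerability indicators
    (access to water, electricity, sewage, ...).
    Returns (loadings, per-row vulnerability scores)."""
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5
           for j in range(k)]
    # standardize each indicator to zero mean and unit variance
    z = [[(row[j] - means[j]) / sds[j] for j in range(k)] for row in data]
    corr = [[sum(zi[a] * zi[b] for zi in z) / n for b in range(k)]
            for a in range(k)]
    v = [1.0] * k
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(corr[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(zi[j] * v[j] for j in range(k)) for zi in z]
    return v, scores
```

Cells with the highest scores on the leading component would be flagged as the most vulnerable; a full analysis would retain several components and examine their loadings across the social, economic, physical, and environmental dimensions.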

Keywords: assess risk, flood risk, indicators of vulnerability, principal component analysis

Procedia PDF Downloads 167
4708 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants who were enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) from the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences.
Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analyses already showed some promising results concerning pause times before, within, and after words. For all variables, mixed-effects models were used that included participant as a random effect and MMSE score, GDS score, and word category (such as determiner or noun) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables showed no interaction between group (cognitively impaired vs. healthy elderly) and word category. However, pause times within words did show an interaction effect, indicating that pause times within certain word categories differ significantly between patients and healthy elderly.
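The mixed-effects setup described in the results can be sketched as follows. This is an assumption-laden illustration, not the authors' analysis code: the data are synthetic, the group effect size is invented, and `statsmodels`' `mixedlm` is used as a stand-in for whatever software the authors employed. It shows a random intercept per participant and a group-by-word-category interaction, as in the design described above.

```python
# Sketch: linear mixed-effects model of pause times with a random
# intercept per participant (synthetic data, hypothetical effect sizes).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_pauses = 26, 20  # 13 impaired + 13 healthy, matched
n = n_participants * n_pauses

df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_pauses),
    "group": np.repeat(["impaired"] * 13 + ["healthy"] * 13, n_pauses),
    "word_category": rng.choice(["determiner", "noun"], n),
    "pause_ms": rng.normal(800, 150, n),
})
# Inject a synthetic group effect: impaired participants pause longer
df.loc[df.group == "impaired", "pause_ms"] += 200

# Fixed effects: group, word category, and their interaction;
# random effect: intercept for each participant
model = smf.mixedlm("pause_ms ~ group * word_category",
                    df, groups=df["participant"])
result = model.fit()
print(result.params["group[T.impaired]"])  # estimated group difference (ms)
```

An interaction term that reaches significance here would correspond to the within-word finding above: the group difference in pause time varying by word category.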

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 348