Search results for: speckle noise reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5893

5113 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study

Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla

Abstract:

With the increasing severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121, or RA 10121), the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest typhoon in recorded history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans and programs, and key-informant interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements are needed in RA 10121 in order to meet the challenges of large-scale disasters. The current structure, in which government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and recovery, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. In contrast, experience from various disasters has shown the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM. 
The council-type arrangement for coordination has also been rendered inoperable by Typhoon Haiyan because the agency responsible for coordination does not have decision-making authority to mobilize action and resources of other agencies which are members of the council. Resources have been devolved to agencies responsible for each thematic area and there is no clear command and direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.

Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery

Procedia PDF Downloads 285
5112 Waste Prevention and Economic Policy: Policy Tools for Increasing Resource Efficiency and Savings

Authors: Sylvia Graczka

Abstract:

Waste-related environmental problems are not only growing but have also been spotlighted by capacity shortages in recycling since China announced its ban on waste imports. According to the waste hierarchy, prevention is the primary solution for waste, and also the cheapest. As an externality, waste-related environmental pollution puts an ever-growing burden on the communities bearing the social costs. Economic policies often claim to be pro-environment, but this often holds only in theory, or at the level of principles. Few concrete tools in economic policies, such as green taxes, are truly effective in stimulating the shift towards waste reduction. The paper presents theoretical economic policy tools based on a literature review, and case studies on applied economic policy tools based on an analysis of policy papers and strategies in force, in line with the 'polluter pays' and 'extended producer responsibility' principles. The study also emphasizes the differences between the broader notion of waste reduction and that of waste minimization, parallel to the difference between resource efficiency and resource savings. It also puts the issue in the context of neoclassical environmental economics and ecological economics to present alternative approaches. The research concludes by identifying effective economic policy tools that support the reduction of material use and the prevention of waste. Consumer and producer awareness of waste problems, and consciousness of their own choices, are indispensable for economic policy tools to work effectively.

Keywords: economic policy, producer responsibility, resource efficiency, waste prevention

Procedia PDF Downloads 149
5111 Thinned Elliptical Cylindrical Antenna Array Synthesis Using Particle Swarm Optimization

Authors: Rajesh Bera, Durbadal Mandal, Rajib Kar, Sakti P. Ghoshal

Abstract:

This paper describes optimal thinning of an Elliptical Cylindrical Array (ECA) of uniformly excited isotropic antennas that can generate a directive beam with minimum relative Side Lobe Level (SLL). The Particle Swarm Optimization (PSO) method, which represents a new approach to optimization problems in electromagnetics, is used in the optimization process. PSO is used to determine the optimal set of 'ON-OFF' elements that provides a radiation pattern with maximum SLL reduction. Optimization is done without prefixing the value of the First Null Beam Width (FNBW). The variation of SLL with the element spacing of the thinned array is also reported. Simulation results show that the number of array elements can be reduced by more than 50% of the total number of elements in the array, with a simultaneous reduction in SLL to less than -27 dB.
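The SLL objective that PSO minimizes can be sketched for the simpler case of a thinned linear array (the paper's ECA geometry adds a second dimension); the function name and the half-wavelength spacing below are our illustrative assumptions, not the authors' code:

```python
import cmath
import math

def sidelobe_level_db(on_off, spacing_wl=0.5, n_angles=4001):
    """Relative side lobe level (dB) of a uniformly excited, thinned
    linear array; 1 = element ON, 0 = element OFF."""
    k = 2 * math.pi  # wavenumber times wavelength
    def af(theta):  # array factor magnitude
        return abs(sum(a * cmath.exp(1j * k * spacing_wl * n * math.sin(theta))
                       for n, a in enumerate(on_off)))
    thetas = [-math.pi / 2 + math.pi * i / (n_angles - 1) for i in range(n_angles)]
    vals = [af(t) for t in thetas]
    peak = max(vals)
    p = vals.index(peak)
    lo, hi = p, p                      # walk down from the peak to the first nulls
    while lo > 0 and vals[lo - 1] < vals[lo]:
        lo -= 1
    while hi < len(vals) - 1 and vals[hi + 1] < vals[hi]:
        hi += 1
    side = vals[:lo] + vals[hi + 1:]   # everything outside the main lobe
    return 20 * math.log10(max(side) / peak)
```

A PSO would evaluate this as the fitness of each candidate ON-OFF vector; for a full 16-element uniform array it returns the classical first-sidelobe level of roughly -13.3 dB, which thinning then tries to push lower.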

Keywords: thinned array, Particle Swarm Optimization, Elliptical Cylindrical Array, Side Lobe Level

Procedia PDF Downloads 442
5110 Heavy Oil Recovery with Chemical Viscosity-Reduction: An Innovative Low-Carbon and Low-Cost Technology

Authors: Lin Meng, Xi Lu, Haibo Wang, Yong Song, Lili Cao, Wenfang Song, Yong Hu

Abstract:

China has abundant heavy oil resources, and thermal recovery is the main recovery method for heavy oil reservoirs. However, high energy consumption, high carbon emissions and high production costs make heavy oil thermal recovery unsustainable, and it is urgent to explore a replacement development technology. A low-carbon and low-cost heavy oil recovery technology, chemical viscosity-reduction in layer (CVRL), was developed by the Petroleum Exploration and Development Research Institute of Sinopec by investigating mechanisms, synthesizing products, and improving oil production technologies, as follows: (1) A cascade viscosity mechanism of heavy oil was proposed: asphaltene and resin grow from free molecules to associative structures and further to bulk aggregations through π-π stacking and hydrogen bonding, which causes the high viscosity of heavy oil. (2) To break the π-π stacking and hydrogen bonds of heavy oil, a copolymer of N-(3,4-dihydroxyphenethyl) acrylamide and 2-acrylamido-2-methylpropane sulfonic acid was synthesized as a viscosity reducer. It achieves a viscosity reduction rate of >80% without shearing for heavy oil (viscosity < 50000 mPa·s), whose fluidity in the layer is evidently improved. (3) A hydroxymethyl acrylamide-maleic acid-decanol ternary copolymer self-assembling plugging agent was synthesized. Its particle size is adjustable from 0.1 μm to 2 mm and its volume is controllable over a 10-500-fold range, which enables efficient transport of the viscosity reducer to oil-enriched areas. CVRL has been applied in 400 wells to date, increasing oil production by 470,000 tons, saving 81,000 tons of standard coal, reducing CO₂ emissions by 174,000 tons, and reducing production costs by 60%. It promotes the transformation of heavy oil recovery towards low energy consumption, low carbon emissions, and low-cost development.

Keywords: heavy oil, chemical viscosity-reduction, low carbon, viscosity reducer, plugging agent

Procedia PDF Downloads 71
5109 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to determine the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases, such as a 3D time-dependent model with variations of the jet feed speed.
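The instability the authors describe, and the stabilizing role of regularization, can be sketched on a toy linear inverse problem: two nearly collinear basis functions stand in for the AWJM model, and a Tikhonov term damps the noise-amplified solution. All names and values here are illustrative assumptions, not the paper's model:

```python
import math

def fit_two_params(xs, ys, lam=0.0):
    """Tikhonov-regularized least squares for y ~ p0*exp(-x) + p1*exp(-1.05x).
    Solves (A^T A + lam*I) p = A^T y for the 2x2 system via Cramer's rule."""
    g0 = [math.exp(-x) for x in xs]
    g1 = [math.exp(-1.05 * x) for x in xs]
    a00 = sum(u * u for u in g0) + lam          # normal-equations matrix + lam*I
    a01 = sum(u * v for u, v in zip(g0, g1))
    a11 = sum(v * v for v in g1) + lam
    b0 = sum(u * y for u, y in zip(g0, ys))     # right-hand side A^T y
    b1 = sum(v * y for v, y in zip(g1, ys))
    det = a00 * a11 - a01 * a01
    return ((b0 * a11 - b1 * a01) / det, (a00 * b1 - a01 * b0) / det)
```

With noise-free data the true parameters are recovered; with small alternating noise the unregularized estimate blows up along the near-null direction of the basis, while a small `lam` keeps the solution bounded, mirroring the paper's observation that identification "strictly depends on input measurements" without regularization.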

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 315
5108 Comparative Antihyperglycemic Activity of Serpentina (Andrographis paniculata) and Papait (Mollugo oppositifolia Linn.) Aqueous Extracts in Alloxan-Induced Diabetic Mice

Authors: Karina Marie G. Nicolas, Kimberly M. Visaya, Emmanuel R. Cauinian

Abstract:

A comparative study on the antihyperglycemic activity of aqueous extracts of Serpentina (Andrographis paniculata) and Papait (Mollugo oppositifolia Linn.), administered per os at 400 mg/kg body weight twice daily for 14 days, was carried out using 24 alloxan-induced diabetic male ICR mice aged 6-8 weeks, with Metformin as the standard control. The blood glucose levels of the animals in the treatment groups were not reduced to <200 mg/dl, the threshold for considering them non-diabetic. However, Papait showed a consistent blood glucose-lowering effect from day 0 to day 14, causing a 36.07% reduction, whereas Serpentina had a fluctuating effect on blood glucose levels and a reduction of only 22.53%; the Metformin-treated animals exhibited the highest reduction, at 45.29%. At day 14, the animals treated with Papait (322.93 mg/dl) had blood glucose levels comparable (p<0.05) with the Metformin-treated group (284.50 mg/dl). In addition, all the animals in the three treatment groups were still hypercholesterolemic, with consistent weight loss and a decrease in feed intake, except for the Serpentina group, which recorded a slight increase. The results of the study showed a superior antihyperglycemic activity of Papait compared with Serpentina.

Keywords: antihyperglycemic, diabetes, hypercholesterolemic, papait, serpentina

Procedia PDF Downloads 357
5107 Reducing Component Stress during Encapsulation of Electronics: A Simulative Examination of Thermoplastic Foam Injection Molding

Authors: Constantin Ott, Dietmar Drummer

Abstract:

The direct encapsulation of electronic components is an effective way of protecting components against external influences. In addition to achieving a sufficient protective effect, there are two other major challenges in satisfying the increasing demand for encapsulated circuit boards: the encapsulation process should both be suitable for mass production and impose a low load on the components. Injection molding is a method with good suitability for large-series production, but typically with high component stress. This article pursued two aims: first, the development of a calculation model that allows the forces occurring during overmolding to be estimated from process variables and material parameters; second, the evaluation of a new approach to stress reduction by means of thermoplastic foam injection molding. For this purpose, simulation-based process data were generated with the Moldflow simulation tool, and component stresses were calculated with the calculation model. The suitability of this approach was clearly demonstrated, and a significant reduction in shear forces during overmolding was achieved. The result is a process development that makes it possible to meet the two main requirements of direct encapsulation in addition to a high protective effect.

Keywords: encapsulation, stress reduction, foam injection molding, simulation

Procedia PDF Downloads 124
5106 Edible Oil Industry Wastewater Treatment by Microfiltration with Ceramic Membrane

Authors: Zita Šereš, Dragana Šoronja Simović, Ljubica Dokić, Lidietta Giorno, Biljana Pajin, Cecilia Hodur, Nikola Maravić

Abstract:

Membrane technology is convenient for the separation of suspended solids, colloids and high-molecular-weight materials present in wastewater. The idea is that the waste stream from the edible oil industry, after the separation of oil using skimmers, is subjected to microfiltration, and the obtained permeate can be reused in the production process. The wastewater from the edible oil industry was used for the microfiltration. For the microfiltration of this effluent, a tubular ceramic membrane with a pore size of 200 nm was used, at transmembrane pressures of up to 3 bar and flow rates of up to 300 L/h. A Box-Behnken design was selected for the experimental work, and the responses considered were permeate flux and chemical oxygen demand (COD) reduction. The reduction of the permeate COD was in the range of 40-60% relative to the feed. The highest permeate flux achieved during the microfiltration process was 160 L/m²h.
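The two responses of the Box-Behnken design reduce to simple ratios; a minimal sketch, where the function names and the sample numbers are ours (only the 160 L/m²h figure comes from the abstract):

```python
def permeate_flux(volume_l, area_m2, time_h):
    """Permeate flux in L/(m^2·h): collected volume per membrane area per unit time."""
    return volume_l / (area_m2 * time_h)

def cod_reduction_pct(cod_feed, cod_permeate):
    """COD reduction of the permeate relative to the feed, in percent."""
    return 100.0 * (cod_feed - cod_permeate) / cod_feed
```

For instance, collecting a hypothetical 80 L through 0.25 m² of membrane in 2 h corresponds to the best flux reported above, 160 L/m²h, and halving the feed COD lands in the middle of the reported 40-60% reduction range.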

Keywords: ceramic membrane, edible oil, microfiltration, wastewater

Procedia PDF Downloads 295
5105 Experimental and Numerical Studies on Hydrogen Behavior in a Small-Scale Container with Passive Autocatalytic Recombiner

Authors: Kazuyuki Takase, Yoshihisa Hiraki, Gaku Takase, Isamu Kudo

Abstract:

One of the most important issues is to ensure the safety of long-term waste storage containers in which fuel debris and radioactive materials are accumulated. In this case, hydrogen generated by the radiolytic decomposition of water accumulates in the container over a long period of time, so it is necessary to reduce the hydrogen concentration in the container. In addition, the container must not require any power supply from outside. Radioactive waste storage containers with a passive autocatalytic recombiner (PAR) would therefore be effective. A radioactive waste storage container with a PAR was used for moving the fuel debris of Three Mile Island Unit 2 to its storage location; however, the effect of the PAR was not described in detail, and the reduction of hydrogen concentration during the long-term storage period was performed by a venting system installed on the top of the container. Therefore, the development of a long-term storage container with a PAR was started with the aim of safely storing fuel debris picked up at the Fukushima Daiichi Nuclear Power Plant for a long period of time. A fundamental experiment on reducing the concentration of hydrogen generated in a nuclear waste long-term storage container was carried out using a small-scale container with a PAR. Moreover, the circulation flow behavior of hydrogen in the small-scale container, resulting from natural convection driven by the decay heat, was clarified. In addition, preliminary numerical analyses were performed to predict the experimental results regarding the circulation flow behavior and the reduction of hydrogen concentration in the small-scale container. The results of the present study experimentally confirmed the validity of the container with a PAR for reducing hydrogen concentration. In addition, it was predicted numerically that the circulation flow of hydrogen in the small-scale container is blocked by the steam generated by the chemical reaction of hydrogen and oxygen.

Keywords: hydrogen behavior, reduction of concentration, long-term storage container, small-scale, PAR, experiment, analysis

Procedia PDF Downloads 162
5104 The Effect of Feature Selection on Pattern Classification

Authors: Chih-Fong Tsai, Ya-Han Hu

Abstract:

The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables), making the classifier perform better than one without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies examine the effect of different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets.
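Information gain, as used here, is the reduction in the entropy of the class labels after splitting on a feature; a minimal self-contained sketch on toy data (not the paper's 14 datasets):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum over x of p(X=x) * H(Y | X=x)."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == value]
        gain -= len(subset) / n * entropy(subset)
    return gain
```

Ranking features by this score and keeping the top-ranked ones is the IG selection step; the surviving features are then fed to the DT, MLP, or SVM classifier. A feature that perfectly separates the classes scores 1 bit here, and an uninformative one scores 0.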

Keywords: data mining, feature selection, pattern classification, dimensionality reduction

Procedia PDF Downloads 666
5103 Green Synthesis of Red-Fluorescent Gold Nanoclusters: Characterization and Application for Breast Cancer Detection

Authors: Agnė Mikalauskaitė, Renata Karpicz, Vitalijus Karabanovas, Arūnas Jagminas

Abstract:

The use of biocompatible precursors for the synthesis and stabilization of fluorescent gold nanoclusters (NCs) with strong red photoluminescence creates an important link between the natural sciences and nanotechnology. Herein, we report the cost-effective synthesis of Au NCs by templating and reduction of chloroauric acid with inexpensive amino acid food supplements. Under optimized conditions, this synthesis leads to the formation of biocompatible Au NCs with good stability and intense red photoluminescence, peaking at 680 to 705 nm, with a quantum yield (QY) of ≈7% and an average lifetime of up to several µs. The composition and luminescent properties of the obtained NCs were compared with those of NCs formed via the well-known bovine serum albumin reduction approach. Our findings imply that the synthesized Au NCs tend to accumulate in the more tumorigenic breast cancer cells (line MDA-MB-231) and, after dialysis, are promising for bioimaging.

Keywords: gold nanoclusters, proteins, materials chemistry, red-photoluminescence, bioimaging

Procedia PDF Downloads 276
5102 Antioxidant Potential of Pomegranate Rind Extract Attenuates Pain, Inflammation and Bone Damage in Experimental Rats

Authors: Ritu Karwasra, Surender Singh

Abstract:

Inflammation is an important physiological response of the body's self-defense system: a highly regulated protective response that helps eliminate the initial cause of cell injury, protects the organism from harmful stimuli, and initiates the process of repair. The present study was designed to evaluate the ameliorative effect of pomegranate rind extract on pain and inflammation. A hydroalcoholic standardized rind extract of pomegranate at doses of 50, 100 and 200 mg/kg, and indomethacin (3 mg/kg), were tested against Eddy's hot plate-induced thermal algesia and against carrageenan-induced (acute inflammation) and Complete Freund's Adjuvant-induced (chronic inflammation) models in Wistar rats. The parameters analyzed were inhibition of paw edema, joint diameter, levels of GSH, TBARS, SOD and TNF-α, radiographic imaging, tissue histology, and synovial expression of the pro-inflammatory cytokine receptor TNF-R1. Radiological and light-microscopical analyses were carried out to assess bone damage in the CFA-induced chronic inflammation model. The findings of the present study revealed that pomegranate rind extract at a dose of 200 mg/kg caused a significant (p<0.05) reduction in paw swelling in both inflammatory models. The nociceptive threshold was also significantly (p<0.05) improved. Immunohistochemical analysis showed an elevated level of TNF-R1 in the CFA-induced group, whereas a reduction in TNF-R1 was observed with pomegranate (200 mg/kg). Pomegranate thus produced a dose-dependent reduction in inflammation and pain, along with a reduction in the levels of oxidative stress markers and improvement in tissue histology, and the effect was found to be comparable to that of indomethacin. It can be concluded that pomegranate is a potential therapeutic agent in the pathogenesis of inflammation and pain, and punicalagin, the major constituent found in the rind extract, might be responsible for the activity.

Keywords: carrageenan, inflammation, nociceptive-threshold, pomegranate, histopathology

Procedia PDF Downloads 217
5101 Glenoid Osteotomy with Various Tendon Transfers for Brachial Plexus Birth Palsy: Clinical Outcomes

Authors: Ramin Zargarbashi, Hamid Rabie, Behnam Panjavi, Hooman Kamran, Seyedarad Mosalamiaghili, Zohre Erfani, Seyed Peyman Mirghaderi, Maryam Salimi

Abstract:

Background: Posterior shoulder dislocation is one of the disabling complications of brachial plexus birth injury (BPBI), and various treatment options, including capsule and surrounding muscle release for open reduction, humeral derotational osteotomy, and tendon transfers, have been recommended to manage it. In the present study, we aimed to determine the clinical outcome of open reduction with soft tissue release, tendon transfer, and glenoid osteotomy in patients with BPBI and posterior shoulder dislocation or subluxation. Methods: From 2018 to 2020, 33 patients who underwent open reduction, glenoid osteotomy, and tendon transfer were included. The glenohumeral deformity was classified according to the Waters radiographic classification. Functional assessment was performed using the Mallet grading system before and at least two years after the surgery. Results: The patients were followed for 26.88 ± 5.47 months. Their average age was 27.5 ± 14 months. Significant improvement was seen in the overall Mallet score (from 13.5 to 18.91 points) and in its components, including hand to mouth, hand to neck, global abduction, global external rotation, abduction degree, and external rotation degree. The hand-to-back score and the presence of the trumpet sign were significantly decreased post-operation (all p values < 0.001). The above-mentioned variables changed significantly for both infantile and non-infantile dislocations. Conclusion: Our study demonstrated that open reduction along with glenoid osteotomy improves retroversion, and muscle strengthening with different muscle transfers is an effective technique for BPBI.

Keywords: birth injuries, nerve injury, brachial plexus birth palsy, Erb palsy, tendon transfer

Procedia PDF Downloads 95
5100 A Review: Role of Chromium in Broiler

Authors: Naveed Zahra, Zahid Kamran, Shakeel Ahmad

Abstract:

Heat stress is one of the most important environmental stressors challenging poultry production worldwide. Its detrimental effects result in reduced productive performance of poultry, with high incidences of mortality. Researchers have made efforts to prevent such damage to poultry production through dietary manipulation, and supplementation with chromium (Cr) might have positive effects on some blood parameters and on broiler performance. Chromium, whose trivalent organic state Cr(III) is present in trace amounts in animal feed and water, has been found to be a key element in evading heat stress, and thus in cutting down the heavy expenditure on air conditioning in broiler sheds. Chromium, along with other essential minerals, is lost through increased excretion during heat stress, so its inclusion in broiler diets is practically mandatory in areas with hot climates. Chromium picolinate in broiler diets has been shown to increase growth rate, including muscle gain with body fat reduction, under environmental stress. The fat reduction is probably linked to the ability of chromium to increase the sensitivity of insulin receptors on tissues: the uptake of sugar from the blood increases, which decreases the amount of glucose converted to fatty acids and stored in adipose tissue as triglycerides. Organic chromium has also been shown to increase lymphocyte proliferation rates and antioxidant levels. The immune competence, muscle gain and fat reduction, along with evasion of heat stress, are good indications for the inclusion of dietary chromium for broilers, and this promising element may bring the much-needed breakthrough in the local poultry industry. The task now is to set the exact dose of the element in the diet that would be effective and yet not toxic to broilers. In conclusion, there is a growing body of evidence suggesting that chromium may be an essential trace element for livestock and poultry. 
The nutritional requirement for chromium may vary with species and with physiological state within a species.

Keywords: broiler, chromium, heat stress, performance

Procedia PDF Downloads 282
5099 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27

Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer

Abstract:

At the United Nations Climate Change Conference 2015, a global agreement on the reduction of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g. heating, plug load, mobility), so a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, meeting the two-degree goal depends heavily on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Because of the importance of the environmental impacts of the electricity mix, both today and in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. 
For each European country, the decarbonization strategy of the electricity mix is then critically investigated in order to identify decisions that can have negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers such as lignite and natural gas, results in an increase in the global warming potential of the electricity grid mix. Only after two years is this increase countervailed by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
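At its core, the GWP of a grid mix is a generation-share-weighted average of per-carrier emission factors; a minimal sketch with purely illustrative factors (not the project's data):

```python
def grid_mix_gwp(shares, factors_kg_per_kwh):
    """Global warming potential of a grid mix in kg CO2-eq/kWh:
    the generation-share-weighted sum of per-carrier emission factors."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("generation shares must sum to 1")
    return sum(share * factors_kg_per_kwh[carrier]
               for carrier, share in shares.items())
```

In such a model, shifting generation share from lignite to wind immediately lowers the mix's GWP, which is how carrier-substitution decisions like the German example above can be screened over time.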

Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis

Procedia PDF Downloads 185
5098 Road Systems as Environmental Barriers: An Overview of Roadways in Their Function as Fences for Wildlife Movement

Authors: Rachael Bentley, Callahan Gergen, Brodie Thiede

Abstract:

Roadways have a significant impact on the environment in so far as they function as barriers to wildlife movement, both through road mortality and through resultant road avoidance. Roads have an immense presence worldwide, and it is predicted to increase substantially in the next thirty years. As roadways become even more common, it is important to consider their environmental impact, and to mitigate the negative effects which they have on wildlife and wildlife mobility. In a thorough analysis of several related studies, a common conclusion was that roads cause habitat fragmentation, which can lead split populations to evolve differently, for better or for worse. Though some populations adapted positively to roadways, becoming more resistant to road mortality, and more tolerant to noise and chemical contamination, many others experienced maladaptation, either due to chemical contamination in and around their environment, or because of genetic mutations from inbreeding when their population was fragmented too substantially to support a large enough group for healthy genetic exchange. Large mammals were especially susceptible to maladaptation from inbreeding, as they require larger areas to roam and therefore require even more space to sustain a healthy population. Regardless of whether a species evolved positively or negatively as a result of their proximity to a road, animals tended to avoid roads, making the genetic diversity from habitat fragmentation an exceedingly prevalent issue in the larger discussion of road ecology. Additionally, the consideration of solutions, such as overpasses and underpasses, is crucial to ensuring the long-term survival of many wildlife populations. 
In studies addressing the effectiveness of overpasses and underpasses, it seemed as though animals adjusted well to these sorts of solutions, but strategic placement, as well as proper sizing, proper height, shelter from road noise, and other considerations were important in construction. When an underpass or overpass was well-built and well-shielded from human activity, animals' usage of the structure increased significantly throughout its first five years, thus reconnecting previously divided populations. Still, these structures are costly and they are often unable to fully address certain issues such as light, noise, and contaminants from vehicles. Therefore, the need for further discussion of new, creative solutions remains paramount. Roads are one of the most consistent and prominent features of today's landscape, but their environmental impacts are largely overlooked. While roads are useful for connecting people, they divide landscapes and animal habitats. Therefore, further research and investment in possible solutions is necessary to mitigate the negative effects which roads have on wildlife mobility and to prevent issues from resultant habitat fragmentation.

Keywords: fences, habitat fragmentation, roadways, wildlife mobility

Procedia PDF Downloads 174
5097 The Effect of Global Solar Variations on the Performance of n- AlGaAs/ p-GaAs Solar Cells

Authors: A. Guechi, M. Chegaar

Abstract:

This study investigates how AlGaAs/GaAs thin film solar cells perform under a varying global solar spectrum caused by changes in environmental parameters such as air mass and atmospheric turbidity. The solar irradiance striking the solar cell is simulated using the spectral irradiance model SMARTS2 (Simple Model of the Atmospheric Radiative Transfer of Sunshine) for clear skies at the site of Setif (Algeria). The results show a reduction in the short circuit current of 63.09% under global radiation due to increasing atmospheric turbidity, while increasing air mass leads to a reduction in the short circuit current of 81.73%. The efficiency decreases with increasing atmospheric turbidity and air mass.
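As a back-of-envelope sketch of how a short-circuit current follows from a spectral irradiance curve (the flat toy spectrum, unit quantum efficiency, and band limits below are illustrative assumptions, not the paper's SMARTS2 data):

```python
import numpy as np

# Jsc = q * integral( Phi(lambda) * EQE(lambda) dlambda ), where the photon
# flux is Phi(lambda) = E(lambda) * lambda / (h * c).
Q = 1.602176634e-19   # elementary charge [C]
H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]

def short_circuit_current(wavelength_m, irradiance, eqe):
    """Trapezoidal integral of q * Phi(lambda) * EQE(lambda) -> Jsc [A/m^2]."""
    flux = irradiance * wavelength_m / (H * C)   # photons / (s m^2 m)
    y = Q * flux * eqe
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wavelength_m)))

# Toy clear-sky band: flat 1 W/(m^2 nm) between 400 nm and the ~870 nm GaAs
# band edge, with unit external quantum efficiency.
wl = np.linspace(400e-9, 870e-9, 200)
jsc = short_circuit_current(wl, np.full_like(wl, 1e9), np.ones_like(wl))
print(f"Jsc = {jsc:.1f} A/m^2")
```

A turbid or high-air-mass spectrum would simply enter as a different `irradiance` array, shrinking the integral and hence Jsc.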

Keywords: AlGaAs/GaAs, solar cells, environmental parameters, spectral variation, SMARTS

Procedia PDF Downloads 395
5096 Effect of Two Entomopathogenic Fungi Beauveria bassiana and Metarhizium anisopliae var. acridum on the Haemolymph of the Desert Locust Schistocerca gregaria

Authors: Fatima Zohra Bissaad, Farid Bounaceur, Nassima Behidj, Nadjiba Chebouti, Fatma Halouane, Bahia Doumandji-Mitiche

Abstract:

The effect of Beauveria bassiana and Metarhizium anisopliae var. acridum on the 5th instar nymphs of Schistocerca gregaria was studied in the laboratory. Infection by both entomopathogenic fungi caused a reduction in haemolymph total protein. The average amounts of total protein were 2.3, 2.07, and 2.09 µg/100 ml of haemolymph in the control, the M. anisopliae var. acridum-based treatment, and the B. bassiana-based treatment, respectively. Three types of haemocytes were recognized and identified as prohaemocytes, plasmatocytes, and granulocytes. The treatment caused a significant reduction in the total haemocyte count and in each haemocyte type on the 9th day after its application.

Keywords: Beauveria bassiana, haemolymph picture, haemolymph protein, Metarhizium anisopliae var. acridum, Schistocerca gregaria

Procedia PDF Downloads 475
5095 Synthesis and Characterization of the Carbon Spheres Built Up from Reduced Graphene Oxide

Authors: Takahiro Saida, Takahiro Kogiso, Takahiro Maruyama

Abstract:

Ordered structural carbon (OSC) materials are expected to find application in secondary battery electrodes, catalyst supports, and biomaterials because their uniform pore size gives them low resistance to substance diffusion. In general, an OSC material is synthesized using a template; changing the size and shape of the template tunes the pore size of the OSC material to the intended purpose. Depositing oxide nanosheets on a polymer sphere template by the layer-by-layer (LbL) method has been reported as one preparation route to OSC materials, since the LbL method allows the thickness of the structural wall to be controlled without surface modification. For a uniform carbon sphere prepared by the LbL method, composed of a graphene oxide wall and a polymethyl methacrylate (PMMA) core, the reduction treatment is the critical step: graphene oxide has poor electron conductivity because of the many functional groups on its surface, which makes it hard to apply to secondary battery electrodes or fuel cell catalyst supports. In this study, the graphene oxide wall of the carbon sphere was reduced by thermal treatment under vacuum, and its crystalline structure and electronic state were characterized. Scanning electron microscope images of the carbon spheres after heat treatment at 300ºC showed that the sphere shape was maintained, but the shape collapsed as the heating temperature increased. Both the dissolution rate of the PMMA core and the reduction rate of the graphene oxide were proportional to the heating temperature; in contrast, extending the heating time helped conserve the sphere shape. X-ray photoelectron spectroscopy analysis indicated that the surface was mainly sp² carbon. From the above results, we succeeded in synthesizing a sphere structure composed of reduced graphene oxide.

Keywords: carbon sphere, graphene oxide, reduction, layer by layer

Procedia PDF Downloads 140
5094 Tuning of the Thermal Capacity of an Envelope for Peak Demand Reduction

Authors: Isha Rathore, Peeyush Jain, Elangovan Rajasekar

Abstract:

The thermal capacity of the envelope impacts the cooling and heating demand of a building and modulates the peak electricity demand. This paper presents the thermal capacity tuning of a building envelope to minimize peak electricity demand for space cooling. We consider a 40 m² residential testbed located in Hyderabad, India (composite climate). An EnergyPlus model is validated using real-time data. A parametric simulation framework for thermal capacity tuning is created using the Honeybee plugin. Diffusivity, thickness, layer position, orientation, and fenestration size of the exterior envelope are parametrized considering a five-layered wall system. A total of 1824 parametric runs are performed, and the optimum wall configuration leading to minimum peak cooling demand is presented.
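The parametric workflow above can be sketched as an exhaustive sweep over a design space. This is an illustration only, not the authors' Honeybee/EnergyPlus setup: the parameter values and the surrogate demand function are hypothetical placeholders standing in for real simulation runs.

```python
import itertools

# Hypothetical parameter grids for the envelope design space.
diffusivity = [3e-7, 5e-7, 7e-7]          # m^2/s
thickness = [0.10, 0.15, 0.20, 0.25]      # m
layer_position = ["inner", "middle", "outer"]
fenestration_ratio = [0.1, 0.2, 0.3, 0.4]

def peak_cooling_demand(d, t, pos, wwr):
    """Stand-in for an EnergyPlus run: returns a fake peak demand in kW."""
    position_factor = {"inner": 1.00, "middle": 0.95, "outer": 0.90}[pos]
    return (4.0 + 20.0 * wwr - 5.0 * t + 1e6 * d) * position_factor

# Enumerate every combination and keep the configuration with the lowest
# simulated peak cooling demand.
runs = list(itertools.product(diffusivity, thickness, layer_position,
                              fenestration_ratio))
best = min(runs, key=lambda p: peak_cooling_demand(*p))
print(len(runs), "runs; best configuration:", best)
```

In the paper's framework, each call to the demand function would be a full building-energy simulation, and the grid sizes would multiply out to the reported 1824 runs.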

Keywords: thermal capacity, tuning, peak demand reduction, parametric analysis

Procedia PDF Downloads 183
5093 A Comprehensive Comparative Study on Seasonal Variation of Parameters Involved in Site Characterization and Site Response Analysis by Using Microtremor Data

Authors: Yehya Rasool, Mohit Agrawal

Abstract:

The site characterization and site response analysis are crucial steps for reliable seismic microzonation of an area, so the basic parameters involved in these fundamental steps must be chosen properly in order to efficiently characterize the vulnerable sites of the study region. In this study, efforts are made to delineate the variations in the physical parameters of the soil between the summer and monsoon seasons of the year 2021 by using Horizontal-to-Vertical Spectral Ratios (HVSRs) recorded at five sites of the Indian Institute of Technology (Indian School of Mines), Dhanbad, Jharkhand, India. The data recording at each site was done in such a way that little anthropogenic noise was recorded. The analysis covers six seismic parameters for both seasons: predominant frequency, H/V ratio, phase velocity of Rayleigh waves, shear wave velocity (Vs), compressional wave velocity (Vp), and Poisson's ratio. From the results, it is observed that these parameters vary drastically for the upper layers of soil, which in turn may affect the amplification ratios and probability of exceedance obtained from seismic hazard studies. The HVSR peak is higher in monsoon, with a shift in predominant frequency as compared to the summer season of 2021. A drastic reduction in shear wave velocity (up to ~10 m depth) of approximately 7%-15% is also observed during the monsoon period, along with a slight decrease in compressional wave velocity. Poisson's ratio is generally found to be higher during monsoon than in the summer period. Our study may be very beneficial to various agricultural and geotechnical engineering projects.
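The core HVSR computation can be sketched in a few lines. This is a minimal assumed workflow (plain FFT amplitude spectra, quadratic-mean horizontal merging, no smoothing or windowing), not the authors' processing chain; the synthetic three-component record is invented purely to exercise the function.

```python
import numpy as np

def hvsr(north, east, vertical, fs):
    """Return frequencies and the H/V spectral ratio of a 3-component record."""
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp_n = np.abs(np.fft.rfft(north))
    amp_e = np.abs(np.fft.rfft(east))
    amp_v = np.abs(np.fft.rfft(vertical))
    horizontal = np.sqrt((amp_n**2 + amp_e**2) / 2.0)   # quadratic mean
    ratio = horizontal / np.maximum(amp_v, 1e-12)       # avoid divide-by-zero
    return freqs, ratio

# Synthetic check: a strong 2 Hz signal on the horizontals over broadband
# noise should produce an H/V peak near 2 Hz.
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
v = rng.normal(size=t.size)
n_comp = 20.0 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(size=t.size)
e_comp = 20.0 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(size=t.size)
freqs, ratio = hvsr(n_comp, e_comp, v, fs)
f0 = freqs[np.argmax(ratio[1:]) + 1]   # predominant frequency, skipping DC
print(f"predominant frequency ~ {f0:.2f} Hz")
```

The frequency of the H/V peak is read off as the site's predominant frequency; seasonal shifts of this peak are what the study tracks between summer and monsoon records.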

Keywords: HVSR, shear wave velocity profile, Poisson ratio, microtremor data

Procedia PDF Downloads 88
5092 Using Atomic Force Microscope to Investigate the Influence of UVA Radiation and HA on Cell Behaviour and Elasticity of Dermal Fibroblasts

Authors: Pei-Hsiu Chiang, Ling Hong Huang, Hsin-I Chang

Abstract:

In this research, we used UVA irradiation, which can penetrate into the dermis, to investigate the effect of UV light on fibroblasts, the most abundant cells in the dermis, in terms of inflammation, ECM degradation, and elasticity loss. Moreover, this research focuses on the influence of hyaluronic acid (HA) on UVA-treated dermal fibroblasts. We aim to establish whether HA can effectively relieve ECM degradation and restore the elasticity of UVA-damaged fibroblasts. Prolonged exposure to UVA radiation can damage fibroblasts and lead to variations in cell morphology and a reduction in cell viability. Besides, UVA radiation can induce IL-1β expression in fibroblasts and thereby promote MMP-1 and MMP-3 expression, which accelerates ECM degradation. On the other hand, prolonged exposure to UVA radiation reduced collagen and elastin synthesis in fibroblasts. Because of the acceleration of ECM degradation and the reduction of ECM synthesis, an atomic force microscope (AFM) was used to analyze the elasticity reduction of UVA-damaged fibroblasts. UVA irradiation causes photoaging of fibroblasts. UVA-damaged fibroblasts treated with HA down-regulate the gene expression of MMP-1 and MMP-3 and thus slow down ECM degradation. On the other hand, HA may restore elastin and collagen synthesis in UV-damaged fibroblasts. Owing to the slowdown of ECM degradation, the elasticity of UVA-damaged fibroblasts can be effectively restored by HA treatment. In summary, HA can relieve the photoaging of fibroblasts but may not be able to return them to a normal, healthy state. Although HA cannot fully recover UVA-damaged fibroblasts, it still has potential for repairing photoaged skin.

Keywords: atomic force microscope, hyaluronic acid, UVA radiation, dermal fibroblasts

Procedia PDF Downloads 390
5091 Digital Antimicrobial Thermometer for Axilliary Usage: A New Device for Measuring the Temperature of the Body for the Reduction of Cross-Infections

Authors: P. Efstathiou, E. Kouskouni, Z. Manolidou, K. Karageorgou, M. Tseroni, A. Efstathiou, V. Karyoti, I. Agrafa

Abstract:

Aim: The aim of this prospective comparative study is to evaluate the reduction of microbial flora on the surface of an axillary digital thermometer made of antimicrobial copper, in comparison with a common digital thermometer. Material – Methods: A new digital electronic thermometer implemented with antimicrobial copper (Cu 70% - Ni 30%, low lead) on the two edges of the device (top and bottom; World Patent Number WO2013064847, Register Number 11/2012 of the Hellenic Copper Development Institute) was manufactured, and a comparative study with a common digital electronic thermometer was conducted on 18 ICU (Intensive Care Unit) patients in three different hospitals. The thermometry was performed in accordance with the international nursing protocols for body temperature measurement. A total of 216 microbiological samples were taken from the axillary area of the patients using both of the investigated body temperature devices. Simultaneously, the “Halo” phenomenon (the “Stefanis” phenomenon) was studied at the parts of the antimicrobial digital electronic thermometer not implemented with antimicrobial copper. Results: In all samples collected from the surface of the antimicrobial electronic digital thermometer, the microbial flora (Klebsiella spp., Staphylococcus aureus, Staphylococcus epidermidis, Candida spp., Pseudomonas spp.) was progressively reduced by 99% within two hours after the thermometry, whereas the flora found in the axillary cavity remained unchanged on the common thermometer. The statistical analysis (SPSS 21) showed a statistically significant reduction of the microbial load (N = 216, p < 0.05). Conclusions: Hospital-acquired infections are linked to the transfer of pathogens through the shared use of medical devices, such as axillary thermometers, by both health professionals and patients.
The use of an antimicrobial digital electronic thermometer minimizes the transport of microbes between patients and health professionals while meeting all requirements of reliability, proper functioning, security, ease of use, and reduced cost.

Keywords: antimicrobial copper, cross infections, digital thermometers, ICU

Procedia PDF Downloads 400
5090 The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples

Authors: Jeremy Attard, Jordan François, Serge Lazzarini, Thierry Masson

Abstract:

Gauge theories are the natural framework for describing fundamental interactions geometrically, using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. There exist several methods for reducing the symmetry of a gauge theory, such as gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper presents another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann (or Cartan) connection, curvature, matter fields, etc.), there sometimes exists a way to erase (in whole or in part) the H-action by just reconfiguring these fields, i.e. by making a mere change of field variables in order to get new (‘composite’) fields on which H (in whole or in part) no longer acts. Two examples will be discussed: the re-interpretation of the BEHGHK (Higgs) mechanism, on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry, on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e. a dressing. This offers an interpretation in opposition to the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even if it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables which redistributes the degrees of freedom of the theory.
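As a schematic illustration of the change of field variables just described (standard notation from the dressing field literature; the formulas are our gloss, not quoted from the abstract): given a dressing field $u$, valued in (a subgroup of) $H$ and transforming under a gauge transformation $\gamma$ as $u^{\gamma} = \gamma^{-1}u$, one defines the composite fields

```latex
\[
  A^{u} = u^{-1} A\, u + u^{-1}\mathrm{d}u, \qquad
  F^{u} = u^{-1} F\, u, \qquad
  \varphi^{u} = \rho(u^{-1})\,\varphi .
\]
```

These have the algebraic form of gauge-transformed fields, yet $(A^{u})^{\gamma} = A^{u}$: the corresponding part of the $H$-action is erased by a mere change of field variables, not by a gauge fixing.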

Keywords: BEHGHK (Higgs) mechanism, conformal gravity, gauge theory, spontaneous symmetry breaking, symmetry reduction, twistors and tractors

Procedia PDF Downloads 237
5089 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, owing to the large number of unknowns to be optimized over a training set that generally needs to be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels because of the computational expense, despite recent developments in effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computational cost is incurred in the convolution of the feed-forward procedure. The use of random kernels of varying sizes allows image features to be effectively analyzed at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which a quantitative comparison is performed between well-known CNN architectures and our models that simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
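The idea of frozen random kernels with one trainable scalar per filter can be sketched as follows. This is our schematic reading of the abstract, not the authors' code: the kernel sizes, filter counts, and the mean-pooling of each response map are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D correlation, kept small and explicit for clarity."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

class RandomKernelLayer:
    """Random filters of several sizes are drawn once and frozen; the only
    trainable parameter per filter is a scalar weight on its response."""
    def __init__(self, sizes=(3, 5, 7), filters_per_size=4):
        self.kernels = [rng.normal(scale=1.0 / s, size=(s, s))
                        for s in sizes for _ in range(filters_per_size)]
        self.scales = np.ones(len(self.kernels))  # one scalar unknown per filter

    def forward(self, image):
        # Pool each response map to a scalar so that maps of different sizes
        # can be combined into one feature vector.
        return np.array([w * conv2d_valid(image, k).mean()
                         for w, k in zip(self.scales, self.kernels)])

layer = RandomKernelLayer()
features = layer.forward(rng.normal(size=(28, 28)))
print(features.shape)  # one feature per random filter
```

Training would update only `self.scales` (12 scalars here), which is why the parameter count stays small even with many filters at multiple scales.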

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 285
5088 A Study on Temperature and Drawing Speed for Diffusion Bonding Enhancement in Drawing of Hot Lined Pipes by FEM Analysis

Authors: M. T. Ahn, J. H. Park, S. H. Park, S. H. Ha

Abstract:

Diffusion bonding has been continuously studied. Temperature and pressure are the most important factors for increasing the strength of diffusion-bonded interfaces, and diffusion bonding in turn is an important factor affecting the bonding strength of a lined pipe: a greater diffusion bonding force yields a clad pipe with high formability. However, in drawing it is difficult to obtain high pressure between the materials because of the relatively small reduction in cross-section, and even if the reduction in section is increased, it is difficult to prevent elongation or tearing of the material in hot drawing. In this paper, to increase the diffusion bonding force, we derive the optimal temperature and pressure that suppress stretching of the material and achieve precise thickness control.

Keywords: diffusion bonding, temperature, pressure, drawing speed

Procedia PDF Downloads 371
5087 Improved Wearable Monitoring and Treatment System for Parkinson’s Disease

Authors: Bulcha Belay Etana, Benny Malengier, Janarthanan Krishnamoorthy, Timothy Kwa, Lieva VanLangenhove

Abstract:

Electromyography measures the electrical activity of muscles using surface or needle electrodes to monitor various disease conditions. Recent developments in the acquisition of electromyogram signals using textile electrodes facilitate wearable devices, enabling patients to monitor and control their health status outside of healthcare facilities. Here, we have developed and tested wearable textile electrodes to acquire electromyography signals from patients suffering from Parkinson's disease, and incorporated a feedback-control system to relieve muscle cramping through a thermal stimulus. In brief, textile electrodes made of stainless steel were knitted into a textile fabric as a sleeve, and their electrical characteristics, such as signal-to-noise ratio, were compared with those of traditional electrodes. To relieve muscle cramping, a heating element made of stainless-steel conductive yarn sewn onto cotton fabric, coupled with a vibration system, was developed. The system integrates a microcontroller and a Myoware muscle sensor to activate the heating element and the vibration motor when cramping occurs, and to deactivate them when the cramping subsides. An optimum therapeutic temperature of 35.5 °C is maintained by continuous temperature monitoring, which deactivates the heating system when this threshold is reached. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, comparable to the traditional electrode's 7.05 dB. For a 9 V power supply, the developed heating element took about 6 minutes to reach the optimum temperature.
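The feedback loop described above can be sketched as a simple per-iteration control rule. This is our reconstruction of the described behavior, not the authors' firmware; the normalized EMG threshold is a hypothetical value, while the 35.5 °C cutoff comes from the text.

```python
CRAMP_EMG_THRESHOLD = 0.6   # normalized EMG envelope (hypothetical units)
MAX_TEMP_C = 35.5           # therapeutic temperature cutoff from the text

def control_step(emg_envelope, temperature_c):
    """Return (heater_on, vibration_on) for one control-loop iteration:
    both actuators fire on cramping, but the heater is cut off once the
    therapeutic temperature threshold is reached."""
    cramping = emg_envelope > CRAMP_EMG_THRESHOLD
    heater_on = cramping and temperature_c < MAX_TEMP_C
    vibration_on = cramping
    return heater_on, vibration_on

print(control_step(0.8, 30.0))   # cramping, below cutoff
print(control_step(0.8, 35.6))   # cramping, above cutoff: heater off
print(control_step(0.2, 30.0))   # no cramping
```

On a microcontroller, this function would run inside the main loop, fed by the muscle-sensor envelope and a temperature reading each cycle.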

Keywords: smart textile system, wearable electronic textile, electromyography, heating textile, vibration therapy, Parkinson’s disease

Procedia PDF Downloads 105
5086 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and to create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the live surface data in the Wellsite Information Transfer Standard Markup Language (WITSML) are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated in real time with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of the issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and to predict this NPT event.
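The CFS criterion mentioned above scores a feature subset by the standard merit heuristic, merit(S) = k·r_cf / sqrt(k + k(k-1)·r_ff), where r_cf is the mean feature-class correlation and r_ff the mean correlation between features in the subset. The sketch below applies this textbook formulation to toy data, not to the authors' drilling dataset.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of the feature subset: high class correlation is rewarded,
    redundancy (feature-feature correlation) is penalized."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(1)
y = rng.normal(size=500)                                # toy "class" signal
X = np.column_stack([y + 0.3 * rng.normal(size=500),    # relevant feature
                     y + 0.3 * rng.normal(size=500),    # relevant but redundant
                     rng.normal(size=500)])             # irrelevant feature
# A redundant copy of an informative feature barely changes the merit,
# while adding an irrelevant feature lowers it.
print(cfs_merit(X, y, [0]), cfs_merit(X, y, [0, 1]), cfs_merit(X, y, [0, 2]))
```

In the proposed tool, the surface-data channels most correlated with the stuck-pipe signature (and least redundant with each other) would be retained for the real-time probability estimate.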

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 225
5085 Reduction of Content of Lead and Zinc from Wastewater by Using of Metallurgical Waste

Authors: L. Rozumová, J. Seidlerová

Abstract:

The aim of this paper was to study the sorption properties of a blast furnace sludge used as a sorbent for reducing the content of lead and zinc ions. The sorbent used in this work was obtained from the metallurgical industry, from the wet gas treatment process in iron production. The blast furnace sludge was characterized by X-ray diffraction, scanning electron microscopy, and XRF spectroscopy. Sorption experiments were conducted in batch mode. The sorption of metal ions by the sludge was determined by fitting adsorption isotherm models; the adsorption of lead and zinc ions was best fitted by Langmuir adsorption isotherms. The adsorption capacities for lead and zinc ions were 53.8 mg.g-1 and 10.7 mg.g-1, respectively. The results indicated that blast furnace sludge could be effectively reused as a secondary material and employed as a low-cost alternative for the removal of heavy metal ions from wastewater.
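The Langmuir isotherm used above, q_e = q_max·K·C_e / (1 + K·C_e), is commonly fitted via its linearization C_e/q_e = C_e/q_max + 1/(K·q_max). The sketch below demonstrates the fit on synthetic data generated for illustration; the paper's capacities (53.8 and 10.7 mg/g) come from its own measurements, and the K value here is an arbitrary assumption.

```python
import numpy as np

def fit_langmuir(c_e, q_e):
    """Return (q_max, K) from a linear fit of C_e/q_e against C_e."""
    slope, intercept = np.polyfit(c_e, c_e / q_e, 1)
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Synthetic isotherm with the paper's lead capacity and an assumed K.
q_max_true, k_true = 53.8, 0.05          # mg/g, L/mg (K is illustrative)
c = np.linspace(5, 200, 20)              # equilibrium concentrations [mg/L]
q = q_max_true * k_true * c / (1 + k_true * c)
q_max_fit, k_fit = fit_langmuir(c, q)
print(round(q_max_fit, 1), round(k_fit, 3))  # recovers 53.8 and 0.05
```

With real batch data, the quality of this linear fit (versus, e.g., a Freundlich fit) is what identifies the Langmuir model as the best description of the sorption.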

Keywords: blast furnace sludge, lead, zinc, sorption

Procedia PDF Downloads 297
5084 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are known to be inherently prone to artefacts because of the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. This is, however, a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout, determined from the gradient at each layer, is applied. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical total variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
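The decomposition step can be illustrated with a toy (smoothed) total variation model, min_u 0.5·||u - f||² + λ·TV(u), solved here by plain gradient descent for brevity; the paper uses a primal-dual solver, and the image, noise level, and λ below are illustrative assumptions.

```python
import numpy as np

def tv_denoise(f, lam=0.15, step=0.2, iters=200, eps=1e-3):
    """Gradient descent on a smoothed ROF total variation energy."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)       # smoothed gradient norm
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                       # piecewise-constant test image
noisy = clean + 0.2 * rng.normal(size=clean.shape)
intrinsic = tv_denoise(noisy)                 # "intrinsic" component
residual = noisy - intrinsic                  # nuisance/artefact component
print(float(np.mean((intrinsic - clean)**2)),
      float(np.mean((noisy - clean)**2)))
```

In the proposed pipeline, `intrinsic` would be what is fed to the stacked denoising auto-encoders, while `residual` carries the separated nuisance factors.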

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 189