Search results for: fast Fourier algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4619

3509 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model

Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik

Abstract:

In a fast-growing region, the conversion of agricultural land surrounded by new development sites occurs sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Since the fringe surrounds Indonesia’s capital city, rapid land conversion in this area is an unavoidable process. The conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use that maximizes the land’s value, so maintaining land for agriculture or conservation requires keeping the value of those uses as high as possible. This, in turn, requires knowledge of the functional relationship between land value and its driving forces. In a fast-growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past decisions about its use, and it is also affected by local characteristics and by the observed surrounding land use (externalities) from the previous period. Because the effect of each factor on land value has both dynamic and spatial dimensions, an empirical spatial dynamic land value model is better suited to capture them. The model can be used to test and estimate the extent of land use externalities on land value in the short run as well as in the long run, and it serves as a basis for formulating an effective urban growth management policy. This study applies the model to land values in the fringe of Jakarta Metropolitan and uses it further to predict the effect of externalities on land value in the form of a prediction map. For the case of Jakarta’s fringe, there is evidence of significant effects of neighborhood urban activity (negative externalities), previous land value, and local accessibility on land value. The effects accumulate dynamically over the years, but they only fully affect land value after six years.
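
In generic form, the kind of specification the abstract describes can be written as a dynamic spatial panel model. The equation below is a sketch in conventional notation (V the land value of parcel i at time t, w_ij spatial weights, X local characteristics), not the authors' exact formulation:

```latex
\ln V_{it} = \tau \,\ln V_{i,t-1}
  + \rho \sum_{j \ne i} w_{ij} \,\ln V_{jt}
  + \eta \sum_{j \ne i} w_{ij} \,\ln V_{j,t-1}
  + X_{it}\beta + \mu_i + \varepsilon_{it}
```

Here tau captures the dynamic effect of the previous land value, while rho and eta capture the contemporaneous and lagged spatial externalities; long-run effects follow from letting the lag terms accumulate over periods, consistent with the finding that externalities are fully reflected only after several years.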

Keywords: growth management, land use externalities, land value, spatial panel dynamic

Procedia PDF Downloads 257
3508 Identifying and Optimizing the Critical Excipients in the Moisture Activated Dry Granulation Process for Two Anti-TB Drugs of Different Aqueous Solubilities

Authors: K. Srujana, Vinay U. Rao, M. Sudhakar

Abstract:

Isoniazid (INH), a freely water-soluble drug, and pyrazinamide (Z), a practically water-insoluble one, both first-line anti-tubercular (TB) drugs, were identified as candidates for optimizing the Moisture Activated Dry Granulation (MADG) process. The work focuses on identifying the effect of binder type and concentration, as well as the effect of the magnesium stearate level, on the critical quality attributes of disintegration time (DT) and in vitro dissolution when the tablets are processed by MADG. The levels of drug concentration, binder concentration, and fluid addition during the agglomeration stage of the MADG process were also evaluated and optimized. For INH, it was identified that for tablets with HPMC as binder at both the 2% w/w and 5% w/w levels and magnesium stearate up to 1% w/w as lubricant, the DT is within 1 minute and the dissolution rate is the fastest (>80% in 15 minutes), compared to when PVP or pregelatinized starch is used as binder. Regarding the process, fast-disintegrating and rapidly dissolving tablets are obtained when the levels of drug, binder, and fluid uptake in the agglomeration stage are 25% w/w, 0% w/w binder, and 0.033% w/w, respectively. At the other two levels of these three ingredients, the DT is significantly impacted and dissolution is slower. For pyrazinamide, it was identified that for tablets with 2% w/w each of PVP as binder and croscarmellose sodium as disintegrant, the DT is within 2 minutes and the dissolution rate is the fastest (>80% in 15 minutes), compared to when HPMC or pregelatinized starch is used as binder. This may be attributed to PVP acting as a solubilizer for the practically insoluble pyrazinamide. Regarding the process, fast-dispersing and rapidly disintegrating tablets are obtained when the levels of drug, binder, and fluid uptake in the agglomeration stage are 10% w/w, 25% w/w binder, and 1% w/w, respectively. At the other two levels of these three ingredients, the DT is significantly impacted and dissolution is comparatively slower and less complete.

Keywords: agglomeration stage, isoniazid, MADG, moisture distribution stage, pyrazinamide

Procedia PDF Downloads 240
3507 Protective Effect of Cinnamomum zeylanicum Bark Extract against Doxorubicin Induced Cardiotoxicity: A Preliminary Study

Authors: J. A. N. Sandamali, R. P. Hewawasam, K. A. P. W. Jayatilaka, L. K. B. Mudduwa

Abstract:

Introduction: Doxorubicin is widely used in the treatment of solid organ tumors and hematological malignancies, but dose-dependent cardiotoxicity due to free radical formation compromises its clinical utility. Therapeutic strategies that enhance cellular endogenous defense systems have been identified as promising approaches to combat oxidative stress-associated conditions. Cinnamomum zeylanicum (Ceylon cinnamon) contains a number of antioxidant compounds that can effectively scavenge reactive oxygen species, including superoxide anions and hydroxyl radicals, as well as other free radicals. Therefore, the objective of the study was to elucidate the most effective dose of Cinnamomum bark extract for ameliorating doxorubicin-induced cardiotoxicity. Materials and methods: Wistar rats were divided into seven groups of 10 animals each. Group 1: normal control (distilled water, orally, for 14 days; 10 mL/kg saline, ip, after a 16-hour fast on the 11th day); Group 2: doxorubicin control (distilled water, orally, for 14 days; 18 mg/kg doxorubicin, ip, after a 16-hour fast on the 11th day); Groups 3-7: five doses of freeze-dried aqueous bark extract (0.125, 0.25, 0.5, 1.0, 2.0 g/kg, orally, daily for 14 days; 18 mg/kg doxorubicin, ip, after a 16-hour fast on the 11th day). Animals were sacrificed on the 15th day, and blood was collected for the estimation of cardiac troponin I (cTnI), AST, and LDH concentrations; myocardial tissue was collected for histopathological assessment of myocardial damage, and irreversible changes were graded by developing a score. Results: cTnI concentrations of groups 1-7 were 0, 161.9, 128.6, 95.9, 38, 19.41, and 12.36 pg/mL, showing significant differences (p<0.05) between group 2 and groups 4-7. In groups 1-7, serum AST concentrations were 26.82, 68.1, 37.18, 36.23, 26.8, 26.62, and 22.43 U/L, and LDH concentrations were 1166.13, 2428.84, 1658.35, 1474.34, 1277.58, 1110.21, and 974.40 U/L; a significant difference (p<0.05) was observed between group 2 and groups 3-7. The maximum score for myocardial necrosis was observed in group 2. Parallel to the increase in the dosage of the plant extract, a gradual reduction of the myocardial necrosis score was observed in groups 3-7. Reversible histological changes such as vacuolation and congestion were observed in group 2 and in all plant-treated groups. Haemorrhages, inflammatory cell infiltration, and interstitial oedema were observed in group 2 but were absent in the groups treated with higher doses of the plant extract. Discussion & Conclusion: According to the in vitro antioxidant assays performed, Cinnamomum zeylanicum (Ceylon cinnamon) bark possesses high amounts of polyphenolic substances and high antioxidant activity. The present study showed that Cinnamomum zeylanicum extract at 2.0 g/kg possesses the most significant cardioprotective effect against doxorubicin-induced cardiotoxicity. It can be postulated that pretreatment with Cinnamomum bark extract may replenish cardiomyocytes with the antioxidants needed for defense against the oxidative stress induced by doxorubicin.

Keywords: cardioprotection, Cinnamomum zeylanicum, doxorubicin, free radicals

Procedia PDF Downloads 163
3506 Decomposition of Solidification Carbides during Cyclic Thermal Treatments in a Co-Based Alloy Deposit Applied to Stainless Steel

Authors: Sellidj Abdelaziz, Lebaili Soltane

Abstract:

A cobalt-based alloy of type Co-Cr-Ni-WC was deposited by plasma transferred arc (PTA) welding on a stainless steel valve. At equilibrium, the alloy is characterized by a mainly dendritic Co (γ) solid solution and by the eutectic carbides M₇C₃ and ηM₆C. At the deposit/substrate interface, this microstructure is modified by the fast cooling of the alloy as it is applied in the liquid state onto the relatively cold steel substrate. The structure formed in this case is heterogeneous, and metastable phases can occur and evolve during service at temperature. Coating properties and reliability are directly related to the microstructures formed during deposition. We were particularly interested in the microstructure formed during solidification of the deposit in the interface region joining the welded couple, and in its evolution during cyclic heat treatments at temperatures similar to those of the valve's thermal environment. Characterization was carried out by SEM-EDS (CAMECA microprobe), XRD, and microhardness profiles. The deposit obtained has a linear, regular appearance, free of cracks and with little porosity. The morphology of the microstructure reflects relatively fast solidification, with a high temperature gradient that initially forms a plane-front Co (γ) solid solution at the interface. As the temperature gradient decreases with distance from the junction towards the outer limit of the deposit, the matrix successively takes cellular, mixed (cells and dendrites), and dendritic forms. Dendritic growth proceeds along primary ramifications in the direction of heat removal, perpendicular to the interface and towards the external surface of the deposit, with underdeveloped secondary and tertiary arms. The eutectic carbides M₇C₃ and ηM₆C formed are very thin and are located in the intercellular and interdendritic spaces of the Co (γ) solid solution.

Keywords: Co-Ni-Cr-W-C alloy, solid deposit, microstructure, carbides, cyclic heat treatment

Procedia PDF Downloads 117
3505 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is described as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. There are several algorithmic models for, and applications of, this problem. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
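
To make the decision model concrete, the following is a minimal sketch of a Bernoulli multi-armed bandit with an epsilon-greedy agent and cumulative expected regret. The regret-regression estimator itself is not specified in the abstract, so this only illustrates the setting and the regret quantity being compared:

```python
import random

def run_bandit(true_means, n_steps=10_000, eps=0.1, seed=0):
    """Epsilon-greedy agent on a Bernoulli bandit; returns cumulative expected regret."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # pulls per arm
    values = [0.0] * n_arms        # running mean reward per arm
    best = max(true_means)
    regret = 0.0
    for _ in range(n_steps):
        if rng.random() < eps:     # explore a random arm
            arm = rng.randrange(n_arms)
        else:                      # exploit the current estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        regret += best - true_means[arm]   # expected regret of this pull
    return regret

print(run_bandit([0.2, 0.5, 0.7]))
```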

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 453
3504 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

Recent evolutions in innovation processes and in the intrinsic tendencies of product development lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, while stimulating the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. Taken together, from the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of provisional sets of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. This contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could engulf many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, a designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 119
3503 Catalytic Deoxygenation of Propionic Acid in the Vapour Phase

Authors: Hossein Bayahia, Mohammed Saad Motlaq Al-Gahmdi

Abstract:

The gas-phase deoxygenation of propionic acid was investigated in the presence of Co-Mo catalysts in N₂ or H₂ flow at 200-400 °C. In N₂, the main product was 3-pentanone, together with other deoxygenates and some light gases (ethane and ethene). Under H₂ flow, the catalyst was active for decarboxylation and decarbonylation of the acid, increasing the yields of ethane and ethene. The decarboxylation and decarbonylation reactions increased with increasing temperature. Cobalt-molybdenum supported on alumina showed better performance than the bulk catalyst, especially at 400 °C in N₂, for the ketonisation of propionic acid to 3-pentanone as the main product. Bulk and supported catalysts were characterized by surface area and porosity analysis (BET), thermogravimetric analysis (TGA), and diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) of pyridine adsorption.

Keywords: deoxygenation, propionic acid, gas-phase, catalyst

Procedia PDF Downloads 287
3502 Dwelling in the Built Environment: Resilience by Design in Modular Thinking toward Adaptive Alternatives

Authors: Tzen-Ying Ling

Abstract:

Recently, the resilience of dwellings in urban areas has been deliberated as a way to accommodate changing demographics and rapid urbanization. The need to incorporate sustainability and cleaner-production thinking has intensified in order to mitigate climate risks and satisfy the demand for housing. Modular thinking satisfies both the pressing call for fast-tracked housing stock and the goal of more sustainable production. On the other hand, since the dwelling is a podium for well-being and social connectedness, the key human/environment design considerations for a modular dwelling system must be explored. We argue that best practice incorporates the concept of systemic component thinking. The fieldwork reported in this paper illustrates a case study of the development of a modular dwelling unit prototype, focusing on the design process for the systemic frame system and on the adjustment recommendations that followed. Using the case study method, the study identified that: (1) factoring in the human dimension inclusively, through systemic design thinking, results in affordable implementation possibilities; (2) the environmental dimension encourages place-based solutions suited to the locality and to the increasing demand for dwellings in the urban system; (3) prototype design establishes the modular system component as a viable dwelling construction alternative; (4) building codes often act as an inhibitor for such dwelling units through restrictions on lot sizes and unit placement. The demand for fast-tracked dwelling construction and cleaner production decisively outweighs the code inhibition; we further underscore the sustainability implications of the alternative prototype as the core of this study. The research suggests that modular thinking results in a resilient solution suited to the locality and to the increasing demand for dwellings in the urban system.

Keywords: system prototype, urban resilience, human/environment dimension, modular thinking, dwelling alternative

Procedia PDF Downloads 175
3501 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and to construct dams to supply water needs. In the optimal utilization of reservoirs, considering multiple important goals together at the same time is of very high importance. To study this method, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) were used. Initially, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. Subsequently, the operation policy based on rule curves was compared to the standard operation policy. The proposed method distributed the deficit over the whole year, and the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall of each year was smaller with the proposed algorithm than with the other two methods. The results show that median values of the coefficients F and Cr provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum; the optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of the F and Cr values, the algorithm was examined for independent populations. For this purpose, populations of 4, 25, 50, 100, 500, and 1000 members were studied over two generation settings (G = 50 and 100); the results indicate that 200 generations are suitable for the optimization. Runtime increases almost linearly with population size, which indicates the effect of the population on the runtime of the algorithm; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reliability percentage but inflicted severe vulnerability on the system. The results obtained in low-rainfall years were very good compared with the other methods.
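
For reference, a minimal sketch of the DE/rand/1/bin scheme whose F and Cr coefficients the study tunes is given below, using the reported optimal settings F = 0.6 and Cr = 0.5. The objective here is a placeholder, not the reservoir deficit function used by the author:

```python
import random

def differential_evolution(obj, bounds, pop_size=25, F=0.6, Cr=0.5,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer; F and Cr set to the values the study reports."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)        # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < Cr or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # clip to bounds
                else:
                    v = pop[i][j]              # inherit from parent
                trial.append(v)
            f_trial = obj(trial)
            if f_trial <= fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# toy objective standing in for the reservoir deficit function
sol, val = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(sol, val)
```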

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 78
3500 Synthesis and Characterization of Amino-Functionalized Polystyrene Nanoparticles as Reactive Filler

Authors: Yaseen Elhebshi, Abdulkareem Hamid, Nureddin Bin Issa, Xiaonong Chen

Abstract:

A convenient method of preparing ultrafine polystyrene latex nanoparticles with amino groups on the surface is developed. Polystyrene latexes in the size range 50–400 nm were prepared via emulsion polymerization, using sodium dodecyl sulfate (SDS) as surfactant. Polystyrene with amino groups on the surface is well suited for use as an organic filler to modify rubber. Transmission electron microscopy (TEM) was used to observe the morphology of the silicon dioxide and functionalized polystyrene nanoparticles. The nature of the bonding between the polymer and the reactive groups on the filler surfaces was analyzed using Fourier transform infrared spectroscopy (FTIR). Scanning electron microscopy (SEM) was employed to examine the filler surface.

Keywords: reactive filler, emulsion polymerization, particle size, polystyrene nanoparticles

Procedia PDF Downloads 350
3499 Biosynthesis of Silver-Phosphate Nanoparticles Using the Extracellular Polymeric Substance of Sporosarcina pasteurii

Authors: Mohammadhosein Rahimi, Mohammad Raouf Hosseini, Mehran Bakhshi, Alireza Baghbanan

Abstract:

Silver ions (Ag⁺) and their compounds are markedly toxic to microorganisms, showing biocidal effects on many species of bacteria. Silver phosphate (or silver orthophosphate) is one of these compounds, known for its antimicrobial effect and its applications in catalysis. In the present study, a green method is presented for synthesizing silver phosphate nanoparticles using Sporosarcina pasteurii. The composition of the biosynthesized nanoparticles was identified as Ag₃PO₄ by X-ray diffraction (XRD) and energy dispersive spectroscopy (EDS). Fourier transform infrared (FTIR) spectroscopy showed that the Ag₃PO₄ nanoparticles were synthesized in the presence of biosurfactants, enzymes, and proteins. In addition, the UV-Vis absorption of the produced colloidal suspension confirmed the results of the XRD and FTIR analyses. Finally, transmission electron microscope (TEM) images indicated that the size of the nanoparticles was about 20 nm.

Keywords: bacteria, biosynthesis, silver-phosphate, Sporosarcina pasteurii, nanoparticle

Procedia PDF Downloads 452
3498 Recent Advances in Data Warehouse

Authors: Fahad Hanash Alzahrani

Abstract:

This paper describes recent advances in the quickly developing area of data storage and processing based on data warehouses and data mining techniques. These advances span the software, hardware, data mining algorithms, and visualisation techniques that share common features across the specific problems and tasks of their implementation.

Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing

Procedia PDF Downloads 404
3497 Synthesis of Mg/B Containing Compound in a Modified Microwave Oven

Authors: Gülşah Çelik Gül, Figen Kurtuluş

Abstract:

Magnesium-containing boron compounds with hexagonal structure have drawn much attention due to their superconductive nature. The central element of this work is a microwave oven, modified in-house, with the ability to pass a gas through the oven chamber so that oxygen-free compounds such as c-BN can be attained. A magnesium-containing boride was synthesized by this modified-microwave method under a nitrogen atmosphere, using amorphous boron and a magnesium source in the appropriate molar ratio. Characterization was done by powder X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. A magnesium-containing boride, generally named magnesium boride, with amorphous character and free of oxygen was obtained via the designed microwave oven system.

Keywords: magnesium containing boron compounds, modified microwave synthesis, powder X-ray diffraction, FTIR

Procedia PDF Downloads 374
3496 A Study on the Contribution of Nitrogen Pollution Sources in a Stream Runoff Using Hydrograph Separation

Authors: Sunghyen Cho, Dongguen Lee, Woo-Jin Shin, Kwang-Sik Lee

Abstract:

The water quality policy of Korea is to keep the total pollutant concentration in basin water below a certain standard. However, interest in nitrogen, the main nutrient source of eutrophication, is increasing because of eutrophication and the deterioration of water quality caused by climate change. We classified the nitrogen sources of a stream into livestock manure, domestic wastewater, and soil, and examined whether they could be distinguished using stable isotopes of nitrogen and oxygen in nitrate together with stable isotopes of boron. Sampling was performed at regular intervals and during rainfall events. While nitrate concentrations were much higher during the wet season than during the dry season, nitrogen from manure and sewage appeared to contribute less to watershed discharge. Because a large amount of soil nitrogen flows into the stream during the wet season, the contribution ratio of nitrogen from wastewater is estimated to be lower, even if the total amount of nitrogen supplied to the stream is greater in the wet season than in the dry season. During a flood period, we collected samples from both rainfall and streamflow and separated the runoff hydrographs into old-water and rainfall components to assess the pathways by which nitrogen entered the stream. To assess the contribution of non-point-source nitrogen washed from the surface into the stream by rainfall, the fast-flowing surface or intermediate runoff components were additionally separated from the runoff hydrograph. The study found that soil water contributed significantly to the stream runoff caused by rainfall. These results also convinced us that a large amount of soil nitrogen flows into the stream discharge during the rainy season. We believe that further research is needed, as we cannot confirm this through a single field experiment. It is also necessary to sample stream water regularly and analyze its quality. As these results accumulate, we believe that the origin of nitrogen pollution will become clear and that the data can be utilized in our country's water resource management policy.
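
The hydrograph separation used here rests on the standard two-component tracer mass balance; the notation below is the conventional one (Q discharge, C tracer concentration), not symbols taken from the abstract:

```latex
Q_t \, C_t = Q_{old} \, C_{old} + Q_{new} \, C_{new},
\qquad
\frac{Q_{old}}{Q_t} = \frac{C_t - C_{new}}{C_{old} - C_{new}}
```

With an additional tracer-distinct component, the same balance extends to a system of three equations, which is what allows the fast-flowing surface or intermediate runoff to be separated as well.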

Keywords: nitrogen, stable isotope, old water, fast-flowing surface, intermediate runoff, rainfall, hydrograph

Procedia PDF Downloads 4
3495 Refining Scheme Using Amphibious Epistemologies

Authors: David Blaine, George Raschbaum

Abstract:

The evaluation of DHCP has synthesized SCSI disks, and current trends suggest that the exploration of e-business that would allow for further study into robots will soon emerge. Given the current status of embedded algorithms, hackers worldwide obviously desire the exploration of replication, which embodies the confusing principles of programming languages. In our research we concentrate our efforts on arguing that erasure coding can be made "fuzzy", encrypted, and game-theoretic.

Keywords: SCSI disks, robot, algorithm, hacking, programming language

Procedia PDF Downloads 430
3494 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures such as bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection for a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis); and feature extraction (an auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained on standard-condition points and tested on both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with accuracy and an F1 score greater than 96% with the proposed method.
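
The OCCNN2 details are specific to the paper, but the PCA baseline it is compared against can be sketched compactly: train on normal-condition feature vectors, then flag points whose reconstruction error exceeds a boundary learned from the training data. The code below is a minimal illustration on synthetic stand-in data, not the Z-24 pipeline:

```python
import numpy as np

def fit_pca_detector(X_train, n_components=2, quantile=0.99):
    """Train a PCA-reconstruction-error anomaly detector on normal-condition data."""
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                       # principal directions
    recon = (Xc @ P.T) @ P + mu
    err = np.linalg.norm(X_train - recon, axis=1)
    thresh = np.quantile(err, quantile)         # boundary of the normal class
    return mu, P, thresh

def detect(X, mu, P, thresh):
    """True where the reconstruction error exceeds the learned boundary."""
    recon = ((X - mu) @ P.T) @ P + mu
    return np.linalg.norm(X - recon, axis=1) > thresh

# toy data standing in for tracked fundamental frequencies (4 per sample)
rng = np.random.default_rng(0)
normal = rng.normal([4.0, 5.1, 9.8, 10.3], 0.05, size=(500, 4))
damaged = normal[:50] * 0.97                    # frequency drop mimicking damage
mu, P, thr = fit_pca_detector(normal)
print(detect(damaged, mu, P, thr).mean())       # fraction flagged anomalous
```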

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 124
3493 Flexible Communication Platform for Crisis Management

Authors: Jiří Barta, Tomáš Ludík, Jiří Urbánek

Abstract:

The topics of disaster and emergency management are highly debated among experts, and fast communication helps in dealing with emergencies; the problem lies in network connectivity and data exchange. This paper suggests a solution: a new flexible communication platform that opens possibilities and perspectives for the protection of communication systems for crisis management. The platform is used for everyday communication as well as for communication in crisis situations.

Keywords: crisis management, information systems, interoperability, crisis communication, security environment, communication platform

Procedia PDF Downloads 475
3492 Highly Transparent, Hydrophobic and Self-Cleaning ZnO-Durazane Based Hybrid Organic-Inorganic Coatings

Authors: Abderrahmane Hamdi, Julie Chalon, Benoit Dodin, Philippe Champagne

Abstract:

In this report, we present a simple route to realize robust, hydrophobic, and highly transparent coatings using organic polysilazane (durazane) and zinc oxide (ZnO) nanoparticles. The coatings were deposited by spraying the mixture solution onto glass slides. The properties of the films were characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FT-IR), UV-vis-NIR spectrophotometry, and the water contact angle method. The sprayable polymer mixed with ZnO nanoparticles shows high transparency for visible light (>90%), a hydrophobic character (contact angle >90°), and good mechanical and chemical stability. The coating also demonstrates excellent self-cleaning properties, which makes it a promising candidate for commercial use.

Keywords: coatings, durability, hydrophobicity, organic polysilazane, self-cleaning, transparency, zinc oxide nanoparticles

Procedia PDF Downloads 172
3491 Structural Changes Induced in Graphene Oxide Film by Low Energy Ion Beam Irradiation

Authors: Chetna Tyagi, Ambuj Tripathi, Devesh Avasthi

Abstract:

Graphene oxide exhibits sp³ hybridization along with sp² hybridization due to the presence of different oxygen-containing functional groups on its edges and basal planes. Its sp³/sp² hybridization ratio can be tuned by various methods to suit different applications, such as transistors, solar cells, and biosensors. Ion beam irradiation can be one of these methods for optimizing the sp²/sp³ hybridization ratio to obtain the desired properties. In this work, graphene oxide films were irradiated with 100 keV argon ions at fluences varying from 10¹³ to 10¹⁶ ions/cm². Synchrotron X-ray diffraction measurements showed an increase in crystallinity at the low fluence of 10¹³ ions/cm². Raman spectroscopy performed on the irradiated samples qualitatively determined the defects induced by the ion beam. The identification of the different functional groups and their removal at different fluences was also done using Fourier transform infrared spectroscopy.

Keywords: graphene oxide, ion beam irradiation, spectroscopy, X-ray diffraction

Procedia PDF Downloads 136
3490 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is to improve the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, helping to improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
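
EMA is the authors' own technique and its mechanics are not given in this abstract. Purely to illustrate the general prefetch-before-critical-section idea it builds on, here is a toy sketch in which a transaction's read set is staged into a buffer pool before the transaction executes, so the execution path sees only memory-speed reads; all names and latencies are made-up assumptions:

```python
import time

DISK_LATENCY = 0.001   # assumed per-page disk read cost (seconds)
cache = {}             # in-memory buffer pool

def read_page(page_id):
    """Return a page, simulating a disk read on a cache miss."""
    if page_id not in cache:
        time.sleep(DISK_LATENCY)        # simulated disk I/O
        cache[page_id] = f"data-{page_id}"
    return cache[page_id]

def prefetch(read_set):
    """Stage a transaction's pages before it enters concurrency control,
    so the critical section sees only memory-speed reads."""
    for page_id in read_set:
        read_page(page_id)

def run_transaction(read_set):
    """Execute the read set and time it; all reads should now be cache hits."""
    start = time.perf_counter()
    for page_id in read_set:
        read_page(page_id)
    return time.perf_counter() - start

tx_pages = list(range(100))
prefetch(tx_pages)                      # done ahead of time, outside any locks
print(f"in-critical-section time: {run_transaction(tx_pages):.6f}s")
```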

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 420
3489 Using Urban Conversion to Green Public Space as a Tool to Generate Urban Change: Case of Seoul

Authors: Rachida Benabbou, Sang Hun Park, Hee Chung Lee

Abstract:

The world's population is increasing at unprecedented speed, leading to a fast-growing pace of urbanization. Since the Industrial Revolution, cities have evolved to fit the growing demand for infrastructure, roads, transportation, and housing. Through this evolution, cities have grown into grey, polluted, vehicle-oriented urban areas with a significant lack of green spaces, and consequently a low quality of life for citizens. Therefore, many cities nowadays are revising the way we think about urbanism and are trying to become more livable and citizen-friendly by creating change from the inside out. Cities are trying to bring nature back into their crowded grey centers and to regenerate many urban areas as green public spaces, not only as a way to give new breath to the city but also as a way to create change on the environmental, social, and economic levels. Seoul is one of the fast-growing global cities. Its population is over 12 million, and it is expected to continue to grow to a point where the quality of life may seriously deteriorate. As most green areas in Seoul are located in the suburbs in the form of mountains, the city's urban areas suffer from a lack of green spaces accessible within walking distance. Understanding the gravity and consequences of this issue, the city of Seoul is undergoing major changes, and many of its projects are oriented toward green public spaces where citizens can enjoy public life in healthy outdoor settings. The aim of this paper is to explore the results of urban conversions into green public spaces. Starting from different locations, natures, sizes, and scales, these conversions can lead to significant change in the surrounding areas and can thus be used as an efficient tool of regeneration for urban areas. Through a comparative analysis of three different types of urban conversion projects in the city of Seoul, we show the positive urban influence of the outcomes, in order to encourage cities to use green spaces as a strategic tool for urban regeneration and redevelopment.

Keywords: urban conversion, green public space, change, urban regeneration

Procedia PDF Downloads 307
3488 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and diet factors, as compared to other cancer types. The aim of the study is to predict early gastric cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: logistic regression, Naive Bayes, support vector machine (SVM), multilayer perceptron, and random forest were used to analyze the dataset, using Python Jupyter Notebook version 3. The classification results obtained were evaluated using the metrics: minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the receiver operating characteristic (ROC) curve. The analysis showed Naive Bayes - 88, 0.11; random forest - 83, 0.16; SVM - 77, 0.22; logistic regression - 75, 0.25; and multilayer perceptron - 72, 0.27, with respect to accuracy (in percent) and Brier score. The Naive Bayes algorithm outperforms the others, with a very low false positive rate, a low Brier score, and good accuracy. The Naive Bayes classification results in predicting EGC are very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public, so that mortality from gastric cancer can be reduced or avoided with this knowledge-mining work.
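
As an illustration of the evaluation loop the abstract describes, the sketch below trains a Gaussian Naive Bayes classifier and reports accuracy and Brier score. The data here are a synthetic stand-in with the study's shape (240 subjects, 11 features), not the Mizoram dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, brier_score_loss

# synthetic stand-in for the 11 diet/lifestyle risk-factor features (240 subjects)
rng = np.random.default_rng(42)
X = rng.normal(size=(240, 11))
y = np.r_[np.zeros(160, dtype=int), np.ones(80, dtype=int)]  # 160 healthy, 80 cases
X[y == 1] += 0.8                        # separate the two classes a little

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]  # predicted probability of being a case
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("brier score:", brier_score_loss(y_te, proba))
```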

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 164
3487 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

To prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is then expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate the sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least squares processing to each sonar's data to obtain the observation value. MATLAB simulations were carried out for the underwater acoustic environment, and the results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration; the improved method can be used for underwater multi-target detection registration.
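
For a constant-bias error model, the offline estimation step reduces to fitting a single offset per measured quantity. The sketch below, on synthetic range/bearing data, shows the least-squares fit; averaging the residuals (the RTQC-style estimate) gives the same answer for this simple model. All values are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
r_true = rng.uniform(500, 3000, n)          # reference target ranges (m)
th_true = rng.uniform(-np.pi, np.pi, n)     # reference bearings (rad)
b_r, b_th = 25.0, np.deg2rad(1.5)           # unknown systematic sonar biases
r_meas = r_true + b_r + rng.normal(0, 5.0, n)
th_meas = th_true + b_th + rng.normal(0, np.deg2rad(0.2), n)

# least-squares fit of a constant-bias model: residual = bias + noise
A = np.ones((n, 1))
b_r_hat, *_ = np.linalg.lstsq(A, r_meas - r_true, rcond=None)
b_th_hat, *_ = np.linalg.lstsq(A, th_meas - th_true, rcond=None)
print("range bias estimate (m):", b_r_hat[0])
print("bearing bias estimate (deg):", np.rad2deg(b_th_hat[0]))
```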

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 169
3486 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans: processing large amounts of data, freedom from emotions of fear or greed, and predicting market prices using past data and artificial intelligence. Hence, a growing number of approaches have been brought forward to tackle this task. The general limitation of these approaches, however, comes down to the fact that limited historical data do not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as those of the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method which uses neuro-evolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data, in order to gain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
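
The paper evolves neural-network bots over combined technical and sentiment inputs; as a much-reduced sketch of the evolutionary loop alone, the code below evolves the two window lengths of a moving-average-crossover strategy on synthetic random-walk prices. Everything here (strategy, data, fitness) is an illustrative stand-in, not the Evotrader system:

```python
import random

def backtest(prices, fast, slow):
    """Net return of a simple MA-crossover strategy (long when fast MA > slow MA)."""
    cash, units = 1.0, 0.0
    for t in range(slow, len(prices)):
        ma_f = sum(prices[t - fast:t]) / fast
        ma_s = sum(prices[t - slow:t]) / slow
        if ma_f > ma_s and cash > 0:           # buy signal
            units, cash = cash / prices[t], 0.0
        elif ma_f < ma_s and units > 0:        # sell signal
            cash, units = units * prices[t], 0.0
    return cash + units * prices[-1] - 1.0

rng = random.Random(7)
prices = [100.0]
for _ in range(1000):                          # synthetic random-walk prices
    prices.append(prices[-1] * (1 + rng.gauss(0.0005, 0.02)))

pop = [(rng.randint(2, 20), rng.randint(21, 100)) for _ in range(30)]
for gen in range(40):                          # evolve (fast, slow) window pairs
    pop.sort(key=lambda g: -backtest(prices, *g))
    parents = pop[:10]                         # keep the fittest genomes
    pop = parents + [(min(20, max(2, p[0] + rng.randint(-2, 2))),
                      max(21, p[1] + rng.randint(-5, 5)))
                     for p in rng.choices(parents, k=20)]   # mutated offspring
best = max(pop, key=lambda g: backtest(prices, *g))
print("best genome:", best, "return:", backtest(prices, *best))
```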

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 124
3485 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions

Authors: Yakubu Adamu

Abstract:

The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in nuclear medicine and molecular imaging (SNMMI) procedures. The objective is to establish whether optimisation is possible and to provide supporting evidence regarding the current imaging protocols of the gastric emptying examination used in nuclear medicine. The research involved a set of selected patient studies, each with 30 dynamic series, processed using ImageJ; from these, the half-time and the retention fraction were calculated for the 60 × 1-minute, 5-minute, and 10-minute protocols and for other sampling intervals. The study IDs were classified by gastric emptying clearance half-time into normal, abnormally fast, and abnormally slow categories. In the normal category, which represents 50% of the total gastric emptying image IDs processed, the clearance half-time was within the range of 49.5 to 86.6 minutes of the mean counts. In the abnormally fast category, representing 30% of the processed image IDs, the clearance half-time fell between 21 and 43.3 minutes of the mean counts, and the abnormally slow category, representing 20%, had clearance half-times of 138.6 minutes of the mean counts. The results indicated that the retention fraction values calculated from the 1, 5, and 10-minute sampling curves, like the retention fractions measured from the sampling curves of the study IDs, showed a normal retention fraction of <60% and decreased exponentially with time, evident in the low retention fraction ratios of <10% after 4 hours. The categorisation thus does not change, suggesting that these values could feasibly be used instead of having to acquire the additional images. The findings suggest that the current gastric emptying protocol can be optimised by acquiring fewer images. The study recommends that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
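
A common way to obtain the clearance half-time from such dynamic series is a mono-exponential fit of the counts. The sketch below fits synthetic counts (an assumed 75-minute half-time with noise, not patient data) and reads off both the half-time and the modeled retention fraction at 60 minutes:

```python
import numpy as np

# hypothetical mean counts sampled at 10-minute intervals after the meal
t = np.arange(0, 120, 10.0)                    # minutes
counts = 1000 * np.exp(-np.log(2) * t / 75.0)  # assumed true T1/2 = 75 min
counts *= np.exp(np.random.default_rng(3).normal(0, 0.02, t.size))  # noise

# a linear fit of log-counts gives the emptying rate constant
slope, intercept = np.polyfit(t, np.log(counts), 1)
t_half = -np.log(2) / slope                    # clearance half-time (min)
retention_60 = np.exp(slope * 60)              # modeled retention fraction at 1 h
print(f"clearance half-time: {t_half:.1f} min, retention at 60 min: {retention_60:.2f}")
```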

Keywords: gastric emptying, retention fraction, clearance half-time, optimisation, protocol

Procedia PDF Downloads 10
3484 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems

Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick

Abstract:

This paper introduces an extension of the well-established resource-constrained project scheduling problem (RCPSP) that applies it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Several alternative activities for a task allow both the temporal shifting of activities and the reallocation of technicians and tools; as a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm, based on the present skill level of the technicians or on the available tools. An additional complication of the scheduling problem under consideration is that the construction sites are only temporarily accessible during the day: due to intensive rail traffic, the available time slots for maintenance and repair work are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. The Bayesian network is used to calculate the probability of a maintenance task being processed during a specific period of the shift. Focusing on the maintenance of railway infrastructure in metropolitan areas, one of the most unproductive processes at a construction site, the paper illustrates how the extended RCPSP can be applied to support maintenance planning. A multi-criteria evolutionary algorithm is introduced whose problem representation is capable of revising technician-task allocations, while task durations may be stochastic. The approach uses a novel activity list representation to ensure easily describable and modifiable elements that can be converted into detailed shift schedules. The main objective is to develop a shift plan which maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. Results from the already implemented core algorithm show fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks, and a high probability of subsequent implementation despite the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems of various sizes, resource complexities, tightness, and so forth.
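
Activity-list representations are usually decoded with a serial schedule generation scheme. The following is a minimal sketch for a single renewable resource and a precedence-feasible list; the paper's multi-skill technician and time-window constraints are omitted, and the task data are made up:

```python
def serial_sgs(activity_list, durations, preds, capacity, demand):
    """Decode an activity list into start times with a serial schedule
    generation scheme (single renewable resource, precedence-feasible list)."""
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)                 # resource profile over time
    start = {}
    for act in activity_list:
        # earliest start respecting all finished predecessors
        est = max((start[p] + durations[p] for p in preds[act]), default=0)
        t = est
        while any(usage[u] + demand[act] > capacity
                  for u in range(t, t + durations[act])):
            t += 1                              # shift right until resource-feasible
        start[act] = t
        for u in range(t, t + durations[act]):
            usage[u] += demand[act]             # book the resource
    return start

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
demand = {"A": 2, "B": 2, "C": 2, "D": 1}
print(serial_sgs(["A", "B", "C", "D"], durations, preds, 3, demand))
```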

Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms

Procedia PDF Downloads 232
3483 Piezoelectric and Dielectric Properties of Poly(Vinylidenefluoride-Hexafluoropropylene)/ZnO Nanocomposites

Authors: P. Hemalatha, Deepalekshmi Ponnamma, Mariam Al Ali Al-Maadeed

Abstract:

Poly(vinylidenefluoride-hexafluoropropylene) (PVDF-HFP)/zinc oxide (ZnO) nanocomposite films were successfully prepared by mixing fine ZnO particles into a PVDF-HFP solution, followed by film casting and sandwich techniques. The zinc oxide nanoparticles were synthesized by a hydrothermal method. Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM) were used to characterize the structure and properties of the obtained nanocomposites. The dielectric properties of the PVDF-HFP/ZnO nanocomposites were analyzed in detail. In comparison with pure PVDF-HFP, the dielectric constant of the nanocomposite (1 wt% ZnO) was significantly improved. The piezoelectric coefficients of the nanocomposite films were measured. The experimental results revealed the influence of the filler on the properties of PVDF-HFP; the enhancement in output performance and dielectric properties reflects their potential for energy storage applications.

Keywords: dielectric constant, hydrothermal, nanoflowers, organic compounds

Procedia PDF Downloads 286
3482 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precision Medicine

Authors: Adriana Haulica

Abstract:

Powered by machine learning, precision medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precision medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precision medicine. Current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules, and the loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precision medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, in a direct or indirect manner, but also technical databases, natural language processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. First, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually taken when machine learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. Instead, it deciphers the biological meaning of the input data, down to the metabolic and physiologic mechanisms, based on a compiler whose grammars issue from bio-algebra-inspired mathematics. It iteratively translates input data into bio-semantic units with the help of contextual information, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed in the high accuracy of the diagnosis. Delivered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precision medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool could be used by pharmaceutical laboratories for the discovery of new cures, and it would also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
3481 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The verification and validation process helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing the simulated component characteristics with the original requirements, to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, the reference documents and standards used, etc. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, the final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 266
3480 Multifunctional Coating of Nylon Using Nano-Si, Nano-Ti and SiO2-TiO2 Nanocomposite: Colorimetric and Flammability Properties

Authors: E. Fereydouni, Laleh Maleknia, M. E. Olya

Abstract:

In the present research, nylon fabric was dyed by a pressure method with nano-Si and nano-Ti particles and an SiO2-TiO2 nanocomposite. The influence of the amounts of Si, Ti, and SiO2-TiO2 on the performance of the nylon fabric was investigated using a Fourier transform infrared spectrophotometer (FTIR), a horizontal flammability apparatus (HFA), a scanning electron microscope (SEM), an energy dispersive X-ray spectroscope (EDX), a water contact angle tester (WCA), and the CIE LAB colorimetric system. Possible interactions between the particles and the nylon fiber were elucidated by FTIR spectroscopy. The results indicated that the stabilized nanoparticles and nanocomposite enhance the flame retardancy of the nylon fabrics. Further prominent features of the nanoparticle and nanocomposite treatment include increased adsorption and fixation of the dye.

Keywords: nano-Si, nano-Ti, SiO2-TiO2 nanocomposite, nylon fabric, flame retardant nylon

Procedia PDF Downloads 362