Search results for: computational neural networks
3079 LncRNA-miRNA-mRNA Networks Associated with BCR-ABL T315I Mutation in Chronic Myeloid Leukemia
Authors: Adenike Adesanya, Nonthaphat Wong, Xiang-Yun Lan, Shea Ping Yip, Chien-Ling Huang
Abstract:
Background: The most challenging mutation of the oncogenic kinase BCR-ABL is T315I, which is commonly known as the “gatekeeper” mutation and is notorious for its strong resistance to almost all tyrosine kinase inhibitors (TKIs), especially imatinib. Therefore, this study aims to identify T315I-dependent downstream microRNA (miRNA) pathways associated with drug resistance in chronic myeloid leukemia (CML) for prognostic and therapeutic purposes. Methods: T315I-carrying K562 cell clones (K562-T315I) were generated with the CRISPR-Cas9 system. Imatinib-treated K562-T315I cells were subjected to small RNA library preparation and next-generation sequencing. Putative lncRNA-miRNA-mRNA networks were analyzed with (i) DESeq2 to extract differentially expressed miRNAs, using an adjusted p-value (Padj) of 0.05 as the cut-off, (ii) STarMir to obtain potential miRNA response element (MRE) binding sites of selected miRNAs on lncRNA H19, (iii) miRDB, miRTarBase, and TargetScan to predict mRNA targets of selected miRNAs, (iv) IntaRNA to obtain putative interactions between H19 and the predicted mRNAs, (v) Cytoscape to visualize putative networks, and (vi) several platforms (Enrichr, PANTHER, and ShinyGO) for pathway enrichment analysis. Moreover, mitochondria isolation and transcript quantification were adopted to determine the new mechanism involved in T315I-mediated resistance to CML treatment. Results: Verification of the CRISPR-mediated mutagenesis with digital droplet PCR detected a mutation abundance of ≥80%. Further validation showed viability of ≥90% by cell viability assay, and an intense phosphorylated CRKL protein band was detected by Western blot, with no observable change in BCR-ABL or c-ABL protein expression. Consistent with several investigations into hematological malignancies, we determined a 7-fold increase of H19 expression in K562-T315I cells. After imatinib treatment, a 9-fold increment was observed.
DESeq2 revealed that 171 miRNAs were differentially expressed in K562-T315I cells; 112 of these miRNAs were identified to have MRE binding regions on H19, and 26 of the 112 miRNAs were significantly downregulated. Adopting seed-sequence analysis of these identified miRNAs, we obtained 167 mRNAs. Six hub miRNAs (hsa-let-7b-5p, hsa-let-7e-5p, hsa-miR-125a-5p, hsa-miR-129-5p, and hsa-miR-372-3p) and 25 predicted genes were identified after constructing the hub miRNA-target gene network. These targets demonstrated putative interactions with H19 lncRNA and were mostly enriched in pathways related to cell proliferation, senescence, gene silencing, and pluripotency of stem cells. Further experimental findings have also shown the up-regulation of mitochondrial transcripts and lncRNA MALAT1 contributing to the lncRNA-miRNA-mRNA networks induced by the BCR-ABL T315I mutation. Conclusions: Our results indicate that lncRNA-miRNA regulators play a crucial role not only in leukemogenesis but also in drug resistance, considering the significant dysregulation and interactions in the K562-T315I cell model generated by CRISPR-Cas9. In silico analysis has further shown that lncRNAs H19 and MALAT1 bear several complementary miRNA sites. This implies that they could serve as sponges, sequestering the activity of the target miRNAs.
Keywords: chronic myeloid leukemia, imatinib resistance, lncRNA-miRNA-mRNA, T315I mutation
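The screening cascade described above (Padj cut-off, then direction of regulation) can be sketched in a few lines; the miRNA names and numbers below are invented placeholders, not the study's data:

```python
# Hypothetical sketch of the DESeq2-style filtering step: keep miRNAs with
# adjusted p-value <= 0.05, then split out the significantly downregulated
# ones (negative log2 fold change). Example records are invented.

def filter_de_mirnas(results, padj_cutoff=0.05):
    """results: list of (mirna, log2_fold_change, padj) tuples."""
    significant = [r for r in results if r[2] <= padj_cutoff]
    downregulated = [r for r in significant if r[1] < 0]
    return significant, downregulated

example = [
    ("hsa-let-7b-5p",  -1.8, 0.001),
    ("hsa-miR-129-5p", -2.4, 0.010),
    ("hsa-miR-372-3p",  0.9, 0.030),
    ("hsa-miR-0000",    1.1, 0.400),  # not significant, dropped
]
sig, down = filter_de_mirnas(example)
```

In the study this step preceded the MRE mapping on H19; here it only illustrates how the 171/26 counts arise from successive cuts.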
Procedia PDF Downloads 158
3078 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that characterize the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCI systems that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as the training vector of a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the computational cost reduction denote the potential of the FFT for EEG signal filtering in the context of MI-based BCI systems implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
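As a rough illustration of slicing the 0-40 Hz band into 33 sub-bands of FFT coefficients, here is a minimal sketch; the sampling rate, epoch length, and 4 Hz sub-band width are our assumptions, not the paper's parameters:

```python
# Map 33 overlapping sub-bands to index ranges of an FFT coefficient array.
# fs, n_samples, and width are illustrative assumptions; in a real pipeline
# each (lo, hi) slice of FFT coefficients would feed one CSP+LDA branch.

def subband_bins(fs=250.0, n_samples=500, f_max=40.0, n_bands=33, width=4.0):
    """Return (low_bin, high_bin) index pairs into an FFT coefficient array."""
    df = fs / n_samples                      # frequency resolution per bin (Hz)
    step = (f_max - width) / (n_bands - 1)   # spacing between band starts (Hz)
    bands = []
    for k in range(n_bands):
        lo_hz = k * step
        hi_hz = lo_hz + width
        bands.append((int(lo_hz / df), int(hi_hz / df)))
    return bands

bands = subband_bins()
```

Because the FFT is computed once per epoch and each branch only slices the shared coefficient matrix, the per-sub-band filtering cost vanishes, which is the source of the speed-up reported above.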
Procedia PDF Downloads 128
3077 Two-Phase Flow Study of Airborne Transmission Control in Dental Practices
Authors: Mojtaba Zabihi, Stephen Munro, Jonathan Little, Ri Li, Joshua Brinkerhoff, Sina Kheirkhah
Abstract:
The Occupational Safety and Health Administration (OSHA) has identified dental workers as being at the highest risk of contracting COVID-19. This is because aerosol-generating procedures (AGP) during dental practice generate aerosols (<5 µm) and droplets. These particles travel at varying speeds, in varying directions, and for varying durations. If these particles bear infectious viruses, their spreading causes airborne transmission of the virus in the dental room, exposing dentists, hygienists, dental assistants, and even other dental clinic clients to the risk of infection. Computational fluid dynamics (CFD) simulation of two-phase flows based on a discrete phase model (DPM) is carried out to study the spreading of aerosols and droplets in a dental room. The simulation includes momentum, heat, and mass transfer between the particles and the airflow. Two simulations are conducted and compared. One focuses on the effects of room ventilation in winter and summer on the particles' travel. The other focuses on controlling the spreading of aerosols and droplets: a suction collector is added near the source of the aerosols and droplets, creating a flow sink in order to remove the particles. The effects of the suction flow on aerosol and droplet travel are studied. The suction flow can remove aerosols and also reduce the spreading of droplets.
Keywords: aerosols, computational fluid dynamics, COVID-19, dental, discrete phase model, droplets, two-phase flow
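Why aerosols linger while larger droplets settle can be illustrated with the Stokes terminal settling velocity v = ρ_p g d² / (18 μ); this back-of-the-envelope sketch uses standard air/water values and is not part of the paper's CFD model:

```python
# Stokes-law terminal settling velocity for a small sphere in air.
# Valid for very small particles (low Reynolds number); for the larger
# droplet it is only a rough approximation. Values are generic, not the
# paper's simulation inputs.

def stokes_settling_velocity(d_m, rho_p=1000.0, mu_air=1.8e-5, g=9.81):
    """d_m: particle diameter in meters; returns settling speed in m/s."""
    return rho_p * g * d_m ** 2 / (18.0 * mu_air)

v_aerosol = stokes_settling_velocity(5e-6)    # 5 micron aerosol
v_droplet = stokes_settling_velocity(100e-6)  # 100 micron droplet
```

The d² scaling means the 100 µm droplet settles 400 times faster than the 5 µm aerosol, which is why aerosols follow the room airflow (and the suction sink) while droplets fall out ballistically.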
Procedia PDF Downloads 262
3076 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of exponential, as in the case of sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
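The quoted asymptotic costs can be compared numerically; this toy sketch sets all constants to one and takes polylog(N) as log²N, so it only illustrates relative scaling, not real runtimes:

```python
# Back-of-the-envelope comparison of the asymptotic costs quoted above:
# dense-matrix QLSA ~ sqrt(N) * log^2(N), classical dense solve ~ N^2.373,
# MLFMM ~ N * log(N). Constants and the polylog power are assumptions.
import math

def qlsa_dense(n):
    return math.sqrt(n) * math.log(n) ** 2

def classical_dense(n):
    return n ** 2.373

def mlfmm(n):
    return n * math.log(n)

n = 10 ** 6
speedup_vs_classical = classical_dense(n) / qlsa_dense(n)
speedup_vs_mlfmm = mlfmm(n) / qlsa_dense(n)
```

For N = 10⁶ the QLSA wins by many orders of magnitude against a generic dense solver, but only by a factor comparable to √N against the MLFMM, which is the abstract's point about compressible BIE matrices.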
Procedia PDF Downloads 152
3075 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring
Authors: Flavio Cannavo
Abstract:
Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. Monitoring volcanic activity, especially energetic paroxysms that usually come with tephra emissions, is crucial not only for the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g., seismic, deformation, infrasonic, and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer automatically the ongoing volcano status from all the available measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique weights the single classifier outputs to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring
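The weighted combination of per-classifier outputs can be sketched as below; the status labels, probabilities, and weights are invented for illustration, and the paper's ensemble chooses its weights to maximize cross-validated performance:

```python
# Weighted soft-voting over heterogeneous classifiers. Each classifier
# reports a probability per status; weights (e.g., from validation accuracy)
# scale its vote. All numbers here are invented examples.

def ensemble_status(classifier_probs, weights):
    """classifier_probs: list of {status: probability} dicts, one per classifier.
    Returns the status with the highest weighted score."""
    combined = {}
    for probs, w in zip(classifier_probs, weights):
        for status, p in probs.items():
            combined[status] = combined.get(status, 0.0) + w * p
    return max(combined, key=combined.get)

outputs = [
    {"quiet": 0.7, "paroxysm": 0.3},   # e.g., seismic-based classifier
    {"quiet": 0.4, "paroxysm": 0.6},   # e.g., infrasonic-based classifier
    {"quiet": 0.2, "paroxysm": 0.8},   # e.g., geochemical-based classifier
]
status = ensemble_status(outputs, weights=[0.5, 0.3, 0.2])
```

Shifting weight toward the classifiers that favor "paroxysm" flips the decision, which is exactly the degree of freedom the ensemble tunes during cross-validation.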
Procedia PDF Downloads 246
3074 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel
Authors: Farhad Aalizadeh, Ali Moosavi
Abstract:
This paper presents a two-dimensional computational fluid dynamics (CFD) simulation for steady particle tracking. The purpose is to study the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that the deposition of the particles is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove the fouling on the walls, and return the system to its original form. We consider a 2D channel geometry and solve for the resulting spatial distribution of particles. According to the obtained results, when magnetic fields are applied perpendicular to the flow only, the local particle velocity is decreased due to the direct effect of the magnetic field, returning the system to its original form. In this method, first, in order to avoid mixing with blood, the ferromagnetic particles are covered with a gel-like chemical composition and are injected into the blood vessels. Then, a magnetic field source at a specified distance from the vessel is used, and the particles are guided to the affected area. The paper also presents a two-dimensional CFD simulation for the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field. The purpose of this part of the study is to develop a numerical tool able to simulate MR fluid flow in valve mode and determine the effect of the applied magnetic field B0 on flow velocities and pressure distributions.
Keywords: MHD, channel clots, magnetic nanoparticles, simulations
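An order-of-magnitude feel for the underlying magnetophoresis: the magnetic force on a particle is balanced against Stokes drag to give a drift velocity. All values below are illustrative assumptions, not the paper's simulation parameters:

```python
# Magnetophoretic drift: F = (chi * V / mu0) * B * dB/dx balanced against
# Stokes drag 3*pi*mu*d*v gives the terminal drift velocity. Particle size,
# susceptibility, field, gradient, and viscosity are assumed example values.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def magnetic_drift_velocity(d, chi, b, dbdx, mu_fluid):
    """Terminal drift speed (m/s) of a spherical particle of diameter d (m)."""
    volume = math.pi / 6.0 * d ** 3
    force = chi * volume / MU0 * b * dbdx
    return force / (3.0 * math.pi * mu_fluid * d)

# 200 nm particle, chi = 1, B = 1 T, gradient 100 T/m, blood-like viscosity
v = magnetic_drift_velocity(200e-9, 1.0, 1.0, 100.0, 3.5e-3)
```

Because force scales with d³ and drag with d, the drift speed grows as d², so guiding nanoparticles to a target region requires strong gradients or larger aggregates.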
Procedia PDF Downloads 367
3073 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying corrective arrangements and adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to achieve specific harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
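For orientation, a textbook sizing of a single tuned filter fixes C from the reactive power compensation requirement and tunes L to the target harmonic; the paper instead sizes the filter by constrained optimization, and the numbers below are illustrative:

```python
# Classic single tuned filter sizing: the capacitor supplies q_var of
# reactive power at the fundamental, and the series inductor is chosen so
# that the LC branch resonates at h * f1. Ratings below are assumed examples.
import math

def single_tuned_filter(q_var, v_line, f1, h):
    """Return (C in farads, L in henries) for a filter tuned to harmonic h."""
    w1 = 2 * math.pi * f1
    xc = v_line ** 2 / q_var          # capacitive reactance at fundamental
    c = 1.0 / (w1 * xc)
    l = 1.0 / ((h * w1) ** 2 * c)     # series resonance at h * f1
    return c, l

# 300 kvar of compensation on an 11 kV, 50 Hz bus, tuned to the 5th harmonic
c, l = single_tuned_filter(q_var=300e3, v_line=11e3, f1=50.0, h=5)
f_tuned = 1.0 / (2 * math.pi * math.sqrt(l * c))
```

At the tuned frequency the branch impedance collapses to its small resistance, sinking the 5th-harmonic current; the optimization in the paper effectively searches over q_var and h subject to the IEEE 519 THD constraints.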
Procedia PDF Downloads 305
3072 Liesegang Phenomena: Experimental and Simulation Studies
Authors: Vemula Amalakrishna, S. Pushpavanam
Abstract:
Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial and temporal patterns. Such pattern formation in nature has been intellectually appealing to many scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LP), are one of the most stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, with the aim of building a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively describe the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind this phenomenon, and the challenges pertaining to these models were understood and explained. These models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome the computational challenges, smoothed Heaviside functions have been introduced, which also reduce the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the formed LPs. The model is extended to real parameter values to compare the simulated results with experimental data for both 1-D (Cartesian test tubes) and 2-D (cylindrical and Petri dish) geometries.
Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation
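The smoothing idea can be sketched with a tanh approximation of the Heaviside step, a common choice; the smoothing width eps and the rate form below are our assumptions, not the paper's exact model:

```python
# Smoothed Heaviside: H_eps(x) = 0.5 * (1 + tanh(x / eps)), which tends to
# the sharp step as eps -> 0 but keeps the RD right-hand side differentiable.
# The threshold-switched precipitation rate is an illustrative form.
import math

def smooth_heaviside(x, eps=1e-3):
    return 0.5 * (1.0 + math.tanh(x / eps))

def precipitation_rate(supersaturation, threshold, k=1.0, eps=1e-3):
    """Rate switches on smoothly once supersaturation exceeds the threshold."""
    excess = supersaturation - threshold
    return k * excess * smooth_heaviside(excess, eps)
```

Replacing the discontinuous switch with this smooth version is what lets standard stiff ODE/PDE integrators take larger steps, which is the computational saving mentioned above.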
Procedia PDF Downloads 151
3071 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents
Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat
Abstract:
This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DES) and ionic liquids (IL), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application's conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15-100 °C), pH levels (1-13), and furfural concentrations (0.1-2.0 wt%), with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated as stable over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%).
Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations: 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents represents a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents
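The reported accuracy metric, the absolute average relative deviation (AARD), can be computed as below; the data points are invented for illustration, not the study's measurements:

```python
# AARD% = (100 / N) * sum(|y_pred - y_meas| / |y_meas|), the metric used to
# compare the MNLR and ANN models above. Example data are invented.

def aard_percent(measured, predicted):
    """Absolute average relative deviation, in percent."""
    n = len(measured)
    return 100.0 / n * sum(
        abs(p - m) / abs(m) for m, p in zip(measured, predicted)
    )

measured  = [94.1, 90.0, 85.0, 97.1]   # e.g., extraction efficiencies (%)
predicted = [93.2, 91.1, 84.3, 96.8]   # model outputs
err = aard_percent(measured, predicted)
```

Being a relative measure, AARD weights errors at low efficiencies more heavily than an RMSE would, which suits data spanning wide concentration and temperature ranges.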
Procedia PDF Downloads 68
3070 An Artificial Neural Network Model Based Study of Seismic Wave
Authors: Hemant Kumar, Nilendu Das
Abstract:
A study based on an ANN structure gives us information to predict the magnitude of a future event by analyzing past events. ANN models, IMD (India Meteorological Department) data, and remote sensing were used to derive a number of parameters for calculating the magnitude that may occur in the future. A threshold was selected specifically above the high-frequency content recorded in the area during the selected seismic activity. In the field of human and local biodiversity, it remains to obtain the right parameters compared to the frequency of impact. During the study, however, the assumption is that predicting seismic activity is a difficult process, not because of the parameters involved here, which can be analyzed and refined in research activity.
Keywords: ANN, Bayesian class, earthquakes, IMD
Procedia PDF Downloads 123
3069 Artificial Intelligence Techniques for Enhancing Supply Chain Resilience: A Systematic Literature Review, Holistic Framework, and Future Research
Authors: Adane Kassa Shikur
Abstract:
Today’s supply chains (SC) have become vulnerable to unexpected and ever-intensifying disruptions from myriad sources. Consequently, the concept of supply chain resilience (SCRes) has become crucial to complement the conventional risk management paradigm, which has failed to cope with unexpected SC disruptions, resulting in severe consequences that affect SC performance and make business continuity questionable. Advancements in cutting-edge technologies like artificial intelligence (AI) and their potential to enhance SCRes by improving critical antecedents in its different phases have attracted the attention of scholars and practitioners. Research from academia and the practical interest of industry have yielded significant publications at the nexus of AI and SCRes during the last two decades. However, the applications and examinations have been conducted mostly independently, and the extant literature is dispersed across research streams despite the complex nature of SCRes. To close this research gap, this study conducts a systematic literature review of 106 peer-reviewed articles by curating, synthesizing, and consolidating up-to-date literature, and presents the state-of-the-art development from 2010 to 2022. Bayesian networks are the most topical among the 13 AI techniques evaluated. Concerning the critical antecedents, visibility is the one most often realized by the techniques. The study revealed that AI techniques support only the first three phases of SCRes (readiness, response, and recovery), with readiness the most popular, while no evidence was found for the growth phase. The study proposes an AI-SCRes framework to inform research and practice to approach SCRes holistically. It also provides implications for practice, policy, and theory, as well as gaps for impactful future research.
Keywords: ANNs, risk, Bayesian networks, vulnerability, resilience
Procedia PDF Downloads 93
3068 A Comparative Semantic Network Study between Chinese and Western Festivals
Authors: Jianwei Qian, Rob Law
Abstract:
With the expansion of globalization and the intensification of market competition, festivals, especially traditional ones, have demonstrated their vitality in the new context. As a new tourist attraction, festivals play a critically important role in promoting the tourism economy, because the organization of a festival can engage more tourists, generate more revenue, and win wider media attention. However, at the current stage in China, traditional festivals as a way to disseminate national culture are facing the challenge of foreign festivals and the related culture. Different from special events created solely for developing the economy, traditional festivals have their own culture and connotations. Therefore, it is necessary to conduct a study on not only protecting the tradition but promoting its development as well. This paper conducts a comparative study of the development of China's Valentine's Day and Western Valentine's Day under the Chinese context and centers on newspaper reports in China from 2000 to 2016. Based on the literature, two main research focuses can be established: one concerns the festival's impact, and the other concerns tourists' motivation to engage in a festival. Newspaper reports serve as the research discourse and can help cover the two focal points. With the assistance of content mining techniques, semantic networks for both Days are constructed separately to help depict the status quo of these two festivals in China. Based on the networks, two models are established to show the key component system of traditional festivals, in the hope of perfecting the positive role festival tourism plays in the promotion of the economy and culture. According to the semantic networks, newspaper reports on the two festivals have both similarities and differences. The differences are mainly reflected in cultural connotation, because Westerners and Chinese may show their love in different ways.
Nevertheless, they share more common points in terms of economy, tourism, and society. They also have a similar living environment and stakeholders. Thus, they can be promoted together to revitalize some traditions in China. Three strategies are proposed to realize the aforementioned aim. Firstly, localize international festivals to suit the Chinese context and make them function better. Secondly, facilitate the internationalization of traditional Chinese festivals so that they receive more recognition worldwide. Finally, allow traditional festivals to compete with foreign ones, helping them learn from each other and stimulating the development of other festivals. It is believed that if all these can be realized, not only can traditional Chinese festivals obtain a more promising future, but foreign ones can as well. Accordingly, the paper can contribute to the theoretical construction of festival images through the presentation of the semantic networks. Meanwhile, the identified features and issues of festivals from two different cultures can enlighten the organization and marketing of festivals as a vital tourism activity. In the long run, the study can strengthen the festival as a key attraction to sustain the development of both the economy and society.
Keywords: Chinese context, comparative study, festival tourism, semantic network analysis, Valentine's Day
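A minimal sketch of how a co-occurrence semantic network can be built from report texts; the study used dedicated content-mining tools, and the snippets below are invented examples, not the newspaper corpus:

```python
# Build a word co-occurrence network: nodes are words, edge weight is the
# number of documents in which a pair of words appears together. The three
# "reports" are invented stand-ins for newspaper articles.
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents):
    """Return Counter mapping sorted word pairs to co-occurrence counts."""
    edges = Counter()
    for doc in documents:
        words = sorted(set(doc.lower().split()))
        for pair in combinations(words, 2):
            edges[pair] += 1
    return edges

reports = [
    "festival tourism boosts economy",
    "festival culture attracts tourism",
    "tourism economy grows",
]
net = cooccurrence_network(reports)
```

The heaviest edges (here around "tourism") play the role of the central nodes in the semantic networks described above; a real pipeline would also strip stop words and lemmatize before counting.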
Procedia PDF Downloads 230
3067 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device
Authors: Revanth Rallapalli
Abstract:
Manually dispensing solids and powders can be difficult, as it requires gradually pouring and checking the amount to be dispensed on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been an all-time challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism has been devised to overcome the above problems. Such inventions are numerically evaluated at the concept development level by employing computational fluid dynamics (CFD) of gas-solid multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar's end, in which the powder is treated as a secondary phase in air, using the technique called the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). With a powder volume fraction of 50%, the transportation of powder from the inlet side to the trocar's end is driven by the rotation of the screw conveyor. The performance is calculated for a 1 s time frame in an unsteady computation. This methodology will help designers develop design concepts to improve the dispensation and the effective area within a quick turnaround time frame.
Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady
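A simple volumetric estimate of a screw conveyor's dosing rate is a useful sanity check on such a CFD result; the geometry, speed, and fill fraction below are assumed example values, not the device's specification:

```python
# Volumetric screw-conveyor dosing estimate:
# Q = (pi/4) * (D_outer^2 - D_shaft^2) * pitch * (rpm / 60) * fill_fraction.
# One flight of powder advances one pitch per revolution; dimensions in mm
# are invented for illustration.
import math

def dosing_rate_mm3_per_s(d_outer, d_shaft, pitch, rpm, fill=0.5):
    """Return the ideal powder throughput in mm^3/s."""
    area = math.pi / 4.0 * (d_outer ** 2 - d_shaft ** 2)   # annulus area, mm^2
    return area * pitch * (rpm / 60.0) * fill              # mm^3 per second

q = dosing_rate_mm3_per_s(d_outer=6.0, d_shaft=2.0, pitch=3.0, rpm=60, fill=0.5)
```

The CFD (DDPM-KTGF) result should fall below this ideal figure, since slip between powder and flight, back-leakage, and incomplete filling all reduce the delivered volume.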
Procedia PDF Downloads 179
3066 Using ANN in Emergency Reconstruction Projects Post Disaster
Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir
Abstract:
Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. This study aimed to identify the factors behind delay in construction projects in Iraq that affect time, cost, and quality, and to find the best solutions to address delays and solve the problem by setting parameters to restore balance. Thirty projects in different areas of construction were selected as a sample for this study. Design/methodology/approach: This study discusses the reconstruction strategies and the delay in time and cost caused by different delay factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Participants from the case projects provided data through a data collection instrument distributed as a survey. A mixed-methods approach was applied in this study. Mathematical data analysis was used to construct models to predict the delay in time and cost of projects before they start. Artificial neural network analysis was selected as the mathematical approach. These models are mainly intended to help decision-makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to tackle the obstacles thoroughly so as to develop this industry in Iraq. This approach was practiced using the data collected through the survey questionnaire.
Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are in line with findings from similar studies in other countries and regions, but some, such as security issues and low-price bid selection, are unique to the Iraqi project sample. Originality/value: ANN analysis was selected because it has rarely been used in project management and has never been used in Iraq to find solutions to problems in the construction industry. This methodology is also suited to complicated problems for which no interpretation or solution is available: in some cases statistical analysis was conducted, and in others the problem did not follow a linear equation or showed only weak correlation, so ANN analysis, which handles nonlinear problems, was used to find the relationship between input and output data, and this proved very supportive. Keywords: construction projects, delay factors, emergency reconstruction, innovation, ANN, post disasters, project management
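The modelling step described above can be sketched in code. The following is a minimal illustrative feedforward ANN (one hidden layer, full-batch gradient descent) mapping hypothetical delay-factor scores to a time-overrun fraction; all data, factor meanings, and network sizes here are synthetic assumptions, not the authors' survey data or model.

```python
# Illustrative sketch (not the authors' model): a tiny feedforward ANN
# predicting a project time-overrun fraction from synthetic factor scores.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (30, 5))            # 30 projects x 5 factor scores (synthetic)
w_true = np.array([0.4, 0.3, 0.1, 0.15, 0.05])
y = X @ w_true + 0.02 * rng.standard_normal(30)   # synthetic overrun fraction

# One hidden layer, tanh activation, linear output.
W1 = rng.standard_normal((5, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5
b2 = 0.0
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

for _ in range(2000):
    H, pred = forward(X)
    err = pred - y                         # gradient of 0.5 * MSE w.r.t. pred
    gW2 = H.T @ err / len(y)
    gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(y)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))      # training fit after 2000 epochs
```

In practice such a model would be trained on the survey responses and validated on held-out projects before being used for pre-start delay prediction.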
Procedia PDF Downloads 165
3065 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure, and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled via LES and the deforming vessel. Information about the flow pressure and the wall motion was exchanged continually during the cycle by an Arbitrary Lagrangian-Eulerian method, with the boundary condition at the current time step depending on the previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study; the axial velocity at the normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase. Keywords: large eddy simulation, fluid-structure interaction, constricted artery, computational fluid dynamics
Procedia PDF Downloads 292
3064 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material – that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it was achieved despite significant variation in image acquisition quality across the test set. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work. Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
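The measurement stage described above can be illustrated with a short sketch. Assuming a binary segmentation mask as input (here a synthetic stand-in for a U-net output), connected-component labelling delimits individual crystals, and simple pixel counts yield each crystal's area and a boundary-pixel perimeter estimate; the authors' actual delimitation algorithm and metrics may differ.

```python
# Sketch of the post-segmentation measurement step (not the authors' exact
# pipeline): label each crystal in a binary mask, then extract per-crystal
# area, a boundary-pixel perimeter estimate, and the centroid position.
import numpy as np
from scipy import ndimage

mask = np.zeros((12, 12), dtype=bool)     # toy stand-in for a U-net output
mask[1:5, 1:5] = True                     # "crystal" 1: 4x4 square
mask[7:10, 6:11] = True                   # "crystal" 2: 3x5 rectangle

labels, n = ndimage.label(mask)           # delimit individual crystals

records = []
for k in range(1, n + 1):
    region = labels == k
    area = int(region.sum())
    eroded = ndimage.binary_erosion(region)
    perimeter_px = int((region & ~eroded).sum())   # boundary pixel count
    cy, cx = ndimage.center_of_mass(region)
    records.append({"label": k, "area": area,
                    "perimeter_px": perimeter_px,
                    "centroid": (round(cy, 1), round(cx, 1))})
```

From `records`, building the frequency distributions of area and perimeter described in the abstract is a simple histogram step.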
Procedia PDF Downloads 158
3063 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs. Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
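As a rough illustration of the reconstruction step common to ECT systems, the sketch below implements linear back-projection (LBP): normalized inter-electrode capacitances are projected back through sensitivity maps to a pixel-wise permittivity estimate. The sensitivity maps and measurements are synthetic placeholders, not data from the sensors described here.

```python
# Hedged sketch of ECT image reconstruction by linear back-projection (LBP).
# All inputs are synthetic; a real system calibrates c_low/c_high with the
# pipe empty and full, and computes S from an electrostatic field model.
import numpy as np

n_pairs, n_pix = 28, 64          # 8 electrodes -> 8*7/2 = 28 independent pairs
rng = np.random.default_rng(1)
S = np.abs(rng.standard_normal((n_pairs, n_pix)))   # synthetic sensitivity maps

c_low = np.zeros(n_pairs)        # calibration frame: low-permittivity fill
c_high = np.ones(n_pairs)        # calibration frame: high-permittivity fill
c_meas = 0.3 * np.ones(n_pairs)  # measured frame (synthetic)

lam = (c_meas - c_low) / (c_high - c_low)   # normalized capacitances
g = (S.T @ lam) / S.sum(axis=0)             # LBP normalized permittivity image
```

With a uniform measurement, the back-projected image is uniform as well; increasing the electrode count increases `n_pairs` and, as the abstract notes, the achievable resolution, at the cost of more data per frame.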
Procedia PDF Downloads 0
3062 Computational Fluid Dynamics Modeling of Flow Properties Fluctuations in Slug-Churn Flow through Pipe Elbow
Authors: Nkemjika Chinenye-Kanu, Mamdud Hossain, Ghazi Droubi
Abstract:
Prediction of multiphase flow-induced forces, void fraction, and pressure is crucial at both the design and operating stages of practical energy and process pipe systems. In this study, transient numerical simulations of upward slug-churn flow through a vertical 90-degree elbow have been conducted. The volume of fluid (VOF) method was used to model the two-phase flows, while the k-epsilon Reynolds-averaged Navier-Stokes (RANS) equations were used to model turbulence in the flows. The simulation results were validated using experimental results: the void fraction signal, peak frequency, and maximum magnitude of void fraction fluctuation of the slug-churn flow validation case studies compared well with experiment. The force fluctuation signals in the x and y directions at the elbow control volume were obtained by carrying out force balance calculations using the time-domain signals of the flow properties extracted directly from the numerical simulation of the control volume. The computed force signals also compared well with experiment for the slug and churn flow validation case studies. Hence, the present numerical simulation technique was able to predict the behaviour of the flow-induced forces and void fraction fluctuations. Keywords: computational fluid dynamics, flow induced vibration, slug-churn flow, void fraction and force fluctuation
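The force-balance idea can be illustrated with a steady, single-phase control-volume calculation for a 90-degree elbow. The paper itself evaluates this balance with transient two-phase fields, so the numbers below (water at 2 m/s in an assumed 50 mm pipe, assumed inlet/outlet pressures) are purely illustrative.

```python
# Illustrative steady control-volume force balance on a 90-degree elbow.
# Inlet flow along +x, outlet along +y; F is the force exerted by the
# fluid on the elbow. All numbers are assumptions for demonstration.
import math

rho = 998.0                    # water density, kg/m^3
D = 0.05                       # assumed pipe diameter, m
A = math.pi * D**2 / 4         # cross-sectional area, m^2
v = 2.0                        # assumed mean velocity, m/s
p_in, p_out = 2.0e5, 1.95e5    # assumed static pressures at inlet/outlet, Pa
mdot = rho * A * v             # mass flow rate, kg/s

# Momentum balance on the control volume (gauge-free, weight neglected):
Fx = p_in * A + mdot * v           # +x: inlet pressure force + momentum influx
Fy = -(p_out * A + mdot * v)       # -y: outlet pressure force + momentum efflux
F_mag = math.hypot(Fx, Fy)         # resultant load on the elbow anchor
```

In the transient two-phase case, density, velocity, and pressure at the control-volume faces fluctuate with the passing slugs, so evaluating this balance at every time step yields the fluctuating force signals the abstract describes.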
Procedia PDF Downloads 155
3061 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest must be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers. Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
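The recovery-threshold idea can be made concrete with a toy MDS-style polynomial code over the reals: X is split into two blocks, each worker computes on one coded combination of the blocks, and the master recovers W = XY from any two of the four workers by polynomial interpolation. This illustrates the general principle only; it is not the paper's PSGPD construction and ignores privacy entirely.

```python
# Toy straggler-tolerant coded matrix multiplication (polynomial code):
# 4 workers, recovery threshold 2, so any 2 stragglers can be tolerated.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 2))
X0, X1 = X[:2], X[2:]                    # split X into m = 2 row blocks

alphas = [1.0, 2.0, 3.0, 4.0]            # one evaluation point per worker
# Worker i stores the coded block X0 + alpha_i * X1 and computes its product:
tasks = [(X0 + a * X1) @ Y for a in alphas]

# Suppose only workers 1 and 3 respond (the rest are stragglers).
done = [1, 3]
V = np.array([[1.0, alphas[i]] for i in done])          # Vandermonde system
B = np.stack([tasks[i].ravel() for i in done])
coeffs = np.linalg.solve(V, B)           # rows: (X0 @ Y).ravel(), (X1 @ Y).ravel()
W = np.vstack([coeffs[0].reshape(2, 2), coeffs[1].reshape(2, 2)])
```

Because the coded blocks are evaluations of a degree-1 matrix polynomial, any two distinct evaluation points determine its coefficients, which is exactly the recovery-threshold behaviour discussed above.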
Procedia PDF Downloads 121
3060 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the convolutional neural network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recent architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, per-label Receiver Operating Characteristic (ROC) values ranging from 95% to 100%, and an F1 score of 97%. These metrics demonstrate a significant improvement over a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease image identification.
For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction. Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
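The augmentation step described above (rotation, scaling, flipping) can be sketched with NumPy alone; a real pipeline would use torchvision or albumentations transforms, and the 4x4 array here is a toy stand-in for a leaf image.

```python
# Minimal sketch of the rotation/scaling/flipping augmentations described
# in the abstract; the "image" is a synthetic 4x4 array, not real data.
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy "leaf image"

rot90 = np.rot90(img)                 # 90-degree (counterclockwise) rotation
flipped = np.fliplr(img)              # horizontal flip
scaled = np.kron(img, np.ones((2, 2), dtype=np.uint8))  # 2x nearest-neighbor upscale

augmented = [rot90, flipped, scaled]  # each variant joins the training set
```

Applying several such transforms per image multiplies the effective dataset size, which is what allows the model to generalize from the relatively small labelled collection.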
Procedia PDF Downloads 19
3059 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production
Authors: Divya Vats, Sanjay Mahajani
Abstract:
The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (in NTU) determines the color of the end product; moreover, this parameter is considered alongside other variables as a performance indicator for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of the cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle image velocimetry (PIV) was employed to understand the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in ANSYS Fluent; impressively, these simulations accurately replicated comparable velocity profiles. The mechanism identified in this study informs a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process, with the ultimate aim of attaining heightened productivity and unwavering consistency in product quality. Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity
Procedia PDF Downloads 70
3058 A Transient Coupled Numerical Analysis of the Flow of Magnetorheological Fluids in Closed Domains
Authors: Wael Elsaady, S. Olutunde Oyadiji, Adel Nasser
Abstract:
The non-linear flow characteristics of magnetorheological (MR) fluids in MR dampers are studied via a coupled numerical approach that incorporates a two-phase flow model. The approach couples the finite element (FE) modelling of the damper magnetic circuit with the computational fluid dynamics (CFD) analysis of the flow field in the damper. The two-phase flow CFD model accounts for the effect of fluid compressibility due to the presence of liquid and gas in the closed domain of the damper. The dynamic mesh model included in the ANSYS/Fluent CFD solver is used to simulate the movement of the MR damper piston in order to perform the fluid excitation. The two-phase flow analysis is carried out with both the volume-of-fluid (VOF) model and the mixture model included in ANSYS/Fluent. The CFD models show that the hysteretic behaviour of MR dampers is due to the effect of fluid compressibility. The flow field shows the distributions of pressure, velocity, and viscosity contours; in particular, it shows the high non-Newtonian viscosity in the fluid regions affected by the magnetic field and the low Newtonian viscosity elsewhere. Moreover, the dependence of the gas volume fraction on the liquid pressure inside the damper is predicted by the mixture model. The presented approach targets a better understanding of the complicated flow characteristics of viscoplastic fluids employed in a range of applications. Keywords: viscoplastic fluid, magnetic FE analysis, computational fluid dynamics, two-phase flow, dynamic mesh, user-defined functions
Procedia PDF Downloads 173
3057 Electrochemical Behavior of Cocaine on Carbon Paste Electrode Chemically Modified with Cu(II) Trans 3-MeO Salcn Complex
Authors: Alex Soares Castro, Matheus Manoel Teles de Menezes, Larissa Silva de Azevedo, Ana Carolina Caleffi Patelli, Osmair Vital de Oliveira, Aline Thais Bruni, Marcelo Firmino de Oliveira
Abstract:
Considering the problem of the seizure of illicit drugs, as well as the development of electrochemical sensors using chemically modified electrodes, this work presents a study of the electrochemical activity of cocaine on a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO salcn complex. Cyclic voltammetry was performed in 0.1 mol L⁻¹ KCl supporting electrolyte at a scan rate of 100 mV s⁻¹, using an electrochemical cell composed of three electrodes: an Ag/AgCl electrode (filled with 3 mol L⁻¹ KCl) from Metrohm® as the reference electrode; a platinum spiral electrode as the auxiliary electrode; and a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO complex as the working electrode. Two forms of cocaine were analyzed: cocaine hydrochloride (pH 3) and the cocaine free base (pH 8). The PM7 computational method predicted that the hydrochloride form is more stable than the free base form, and accordingly, cyclic voltammetry found an electrochemical signal only for cocaine hydrochloride, with an anodic peak at 1.10 V and a linear range between 2 and 20 μmol L⁻¹; the limits of detection and quantification were 2.39 × 10⁻⁵ and 7.26 × 10⁻⁵ mol L⁻¹, respectively. The study also proved that cocaine is adsorbed on the surface of the working electrode in an irreversible process in which only anodic peaks are observed: the oxidation of cocaine, which occurs in the hydrophilic region through the loss of two electrons. The mechanism of this reaction was confirmed by an ab initio quantum method. Keywords: ab initio computational method, analytical method, cocaine, Schiff base complex, voltammetry
Procedia PDF Downloads 192
3056 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead
Authors: Behrooz Daneshmand
Abstract:
Software Defined Networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that separates the control plane from the data plane, which are traditionally coupled together. Network functions previously performed on dedicated hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach, built around a central controller, facilitates the development of modern applications and services. These design principles pave the way for a more flexible, faster, and more dynamic network under software control compared with a conventional network. We believe SDN opens new research opportunities in security and can significantly affect network security research in many different ways. The SDN architecture enables networks to actively monitor traffic and analyze threats, facilitating security policy modification and security service insertion. The separation of the data and control planes, however, opens security challenges, such as man-in-the-middle (MITM) attacks, denial-of-service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have several vulnerabilities, which may be exploited by attackers to perform malicious activities and thus affect the network and its services. Software-defined network attacks are, unfortunately, a reality these days. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, leading to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies. Keywords: software-defined networking, security, SDN, 5G/IMT-2020
Procedia PDF Downloads 98
3055 A Single Stage Rocket Using Solid Fuels in Conventional Propulsion Systems
Authors: John R Evans, Sook-Ying Ho, Rey Chin
Abstract:
This paper describes research investigations into the starting and propelling of a solid fuel rocket engine which operates as a combined cycle propulsion system using three thrust pulses. The vehicle has been designed to minimise the cost of launching small numbers of nano/cube satellites into low Earth orbit (LEO). One technology described in this paper is a ground-based launch propulsion system that starts the rocket's vertical motion immediately, causing air flow to enter the ramjet intake. Current technology predicts that ramjet operation can start at a high subsonic speed of 280 m/s using a liquid fuel ramjet (LFRJ). The combined cycle engine configuration is in many ways fundamentally different from the LFRJ. A much lower subsonic start speed is highly desirable, since using a mortar to bring the rocket to the latter speed means a shorter launcher length can be utilized. This paper examines the means of achieving this and presents performance calculations, including computational fluid dynamics analysis of the air intake at suitable operational conditions and 3-DOF point-mass trajectory analysis of the multi-pulse propulsion system (where pulse ignition time and thrust magnitude can be controlled), for a combined cycle rocket engine used in a single stage vehicle. Keywords: combined cycle propulsion system, low earth orbit launch vehicle, computational fluid dynamics analysis, 3-DOF trajectory analysis
Procedia PDF Downloads 189
3054 Evaluation of Initial Graft Tension during ACL Reconstruction Using a Three-Dimensional Computational Finite Element Simulation: Effect of the Combination of a Band of Gracilis with the Former Graft
Authors: S. Alireza Mirghasemi, Javad Parvizi, Narges R. Gabaran, Shervin Rashidinia, Mahdi M. Bijanabadi, Dariush G. Savadkoohi
Abstract:
Background: The anterior cruciate ligament (ACL) is one of the most frequently disrupted ligaments. Surgical reconstruction of the ACL is a common practice to treat the disability or chronic instability of the knee. Several factors are associated with the success or failure of ACL reconstruction, including preoperative laxity of the knee, selection of the graft material, surgical technique, graft tension, and postoperative rehabilitation. We aimed to examine the biomechanical properties of each graft type and initial graft tension during ACL reconstruction using a three-dimensional computational finite element simulation. Methods: A three-dimensional model of the knee was constructed to investigate the effect of graft tensioning on knee joint biomechanics. Four different grafts were compared: 1) bone-patellar tendon-bone (BPTB) graft; 2) hamstring tendon; 3) BPTB and a band of gracilis; 4) hamstring and a band of gracilis. The initial graft tension was set to 0, 20, 40, or 60 N. The anterior loading was set to 134 N. Findings: The resulting stress pattern and deflection in each of these models were compared to those of the intact knee. The results showed that combining a band of gracilis with the former graft (BPTB or hamstring) increases the structural stiffness of the knee. Conclusion: The pretension required during surgery decreases significantly when a band of gracilis is added to the primary graft. Keywords: ACL reconstruction, deflection, finite element simulation, stress pattern
Procedia PDF Downloads 297
3053 Influences of Separation of the Boundary Layer in the Reservoir Pressure in the Shock Tube
Authors: Bruno Coelho Lima, Joao F.A. Martos, Paulo G. P. Toro, Israel S. Rego
Abstract:
The shock tube is a ground facility widely used in aerospace and aeronautics science and technology for studies of gas dynamics and chemical-physical processes in gases at high temperature, explosions, and the dynamic calibration of pressure sensors. A shock tube in its simplest form comprises two tubes of equal cross-section separated by a diaphragm. The function of the diaphragm is to separate the two reservoirs, which are at different pressures. The reservoir containing high pressure is called the driver; the low-pressure reservoir is called the driven section. When the diaphragm is broken by the pressure difference, a normal, non-stationary shock wave (the incident shock wave) forms at the diaphragm location and propagates toward the closed end of the driven section. When this shock wave reaches the closed end of the driven section, it is completely reflected. The reflected shock wave then interacts with the boundary layer created by the flow induced by the passage of the incident shock wave. The interaction between the boundary layer and the shock wave forces the separation of the boundary layer. The aim of this paper is to analyse the influence of this boundary layer separation on the reservoir pressure in the shock tube. A comparison among CFD (computational fluid dynamics), experimental tests, and analytical analysis was performed. For the analytical analysis, routines were written in Python; for the numerical simulations (CFD), ANSYS Fluent was used; and the experimental tests were performed in the T1 shock tube located at IEAv (Institute of Advanced Studies). Keywords: boundary layer separation, moving shock wave, shock tube, transient simulation
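A hedged sketch of the kind of analytical Python routine mentioned above: ideal shock tube theory gives an implicit relation between the diaphragm pressure ratio p4/p1 and the incident-shock strength p2/p1, which can be inverted by bisection. A calorically perfect gas with the same gamma = 1.4 and speed of sound on both sides is assumed, and the bracket is restricted to moderate shock strengths where the relation stays well defined.

```python
# Ideal (inviscid, calorically perfect) shock tube relation and its
# numerical inversion; same gas on both sides (gamma = 1.4, a1 = a4).
import math

def p4_over_p1(p21, gamma=1.4, a1_a4=1.0):
    """Diaphragm pressure ratio p4/p1 for a given shock strength p2/p1."""
    g = gamma
    term = (g - 1) * a1_a4 * (p21 - 1) / math.sqrt(
        2 * g * (2 * g + (g + 1) * (p21 - 1)))
    return p21 * (1 - term) ** (-2 * g / (g - 1))

def shock_strength(p41, lo=1.0 + 1e-9, hi=40.0):
    """Invert the relation for p2/p1 by bisection on a moderate bracket."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p4_over_p1(mid) < p41:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p21 = shock_strength(10.0)    # shock strength for a 10:1 diaphragm ratio
```

Routines like this provide the inviscid baseline against which the boundary-layer-affected reservoir pressures from CFD and experiment can be compared.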
Procedia PDF Downloads 314
3052 Networking Approach for Historic Urban Landscape: Case Study of the Porcelain Capital of China
Abstract:
This article presents a “networking approach” as an alternative to the “layering model” for the historic urban landscape [HUL], based on research conducted in the historic city of Jingdezhen, the center of the porcelain industry in China. This study points out that the existing HUL concept, which can be traced back to the fundamental conceptual divisions set forth by western science, tends to analyze the various elements of urban heritage (composed of hybrid natural-cultural elements) layer by layer and ignores the nuanced connections and interweaving structure of those elements. Instead, the networking analysis approach can respond to the challenges of complex heritage networks and to the difficulties that arise when modern schemes of looking at and thinking about landscape in the Eurocentric heritage model encounter local knowledge of Chinese settlement. The fieldwork in this paper examines the local language of place names and the everyday uses of urban spaces, thereby highlighting heritage systems grounded in local life and indigenous knowledge. In the context of Chinese “Fengshui”, this paper demonstrates the local knowledge of nature and the local intelligence of settlement location and design. This paper suggests that industrial elements (kilns, molding rooms, piers, etc.) and spiritual elements (temples for ceramic saints or water gods) are located within intimate natural networks; furthermore, the functional, spiritual, and natural elements are perceived as a whole and evolve as an interactive system. This paper proposes a local and cognitive approach to heritage, initially developed in the European Landscape Convention and historic landscape characterization projects, yet seeking a more tentative and nuanced model based on urban ethnography in a Chinese city. Keywords: Chinese city, historic urban landscape, heritage conservation, network
Procedia PDF Downloads 140
3051 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to determine the viscosity of a liquid. In our work, computational fluid dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at four different ratios to attain viscosities of 1 cP, 5 cP, 10 cP, and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for the different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips enabling real-time observation and control of viscosity changes in biological or chemical reactions. Keywords: computational fluid dynamics (CFD), high aspect ratio, micropillar array, viscometer
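A back-of-the-envelope model of why the tip displacement tracks viscosity: treating a pillar as a cantilever loaded by viscous drag, the deflection scales linearly with viscosity at a fixed flow velocity. The PDMS modulus and the crude low-Reynolds drag estimate below are assumptions for illustration, not values from the study.

```python
# Crude cantilever model of a micropillar in flow. Both the PDMS modulus
# and the per-unit-length drag law are assumed for illustration only.
import math

E = 1.5e6            # assumed PDMS Young's modulus, Pa
d = 300e-6           # pillar diameter, m (from the abstract)
L = 1400e-6          # pillar height, m (from the abstract)
I = math.pi * d**4 / 64    # second moment of area of a circular section

def tip_deflection(mu, v):
    """Tip deflection under an assumed uniform low-Re drag load."""
    w = 4 * math.pi * mu * v        # drag per unit length, N/m (assumed law)
    return w * L**4 / (8 * E * I)   # cantilever with distributed load

d1 = tip_deflection(0.010, 0.05)    # 10 cP at an assumed 5 cm/s
d2 = tip_deflection(0.015, 0.05)    # 15 cP at the same velocity
```

Whatever the exact drag law, the load (and hence the deflection) is proportional to viscosity in the low-Reynolds regime, which is consistent with the larger displacement reported for the 15 cP solution than for the 10 cP one.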
Procedia PDF Downloads 244
3050 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, in proportion to the increase in the risk measure, when compared to risk-free investments. In the classical Markowitz model, the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several linear programming (LP) computable risk measures have been introduced and applied in portfolio optimization; in particular, the mean absolute deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hard to solve with general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, so the efficiency of the simplex method is not seriously affected by the number of scenarios, thereby guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second-order stochastic dominance rules. Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
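The inverse-ratio idea can be sketched as a small LP. Using the standard homogenization trick, the portfolio is scaled so that its mean excess return equals 1, the (scaled) MAD is minimized subject to linear deviation constraints, and de-homogenizing recovers the weights. The scenario data below are synthetic, and the paper's dual-based formulation differs in detail; this only illustrates the primal inverse-ratio model.

```python
# Illustrative LP for minimizing MAD / (mean excess return) via
# homogenization: scale the portfolio so mean excess return = 1,
# then minimize the scaled MAD. Synthetic scenario data throughout.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.10, 0.02, 0.07],        # 4 scenarios x 3 assets (synthetic)
              [0.04, 0.03, 0.01],
              [-0.02, 0.04, 0.05],
              [0.08, 0.01, 0.03]])
p = np.full(4, 0.25)                     # scenario probabilities
r0 = 0.01                                # risk-free rate
rbar = p @ R                             # mean return of each asset
T, n = R.shape

# Variables z = [x_tilde (n), s (1), d (T)]; minimize expected deviation p @ d.
c = np.concatenate([np.zeros(n + 1), p])
D = R - rbar                             # per-scenario deviations from the mean
A_ub = np.block([[ D, np.zeros((T, 1)), -np.eye(T)],   #  D x~ - d <= 0
                 [-D, np.zeros((T, 1)), -np.eye(T)]])  # -D x~ - d <= 0
b_ub = np.zeros(2 * T)
A_eq = np.vstack([np.concatenate([np.ones(n), [-1.0], np.zeros(T)]),  # sum x~ = s
                  np.concatenate([rbar, [-r0], np.zeros(T)])])        # excess = 1
b_eq = np.array([0.0, 1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1 + T))
x = res.x[:n] / res.x[n]                 # de-homogenize: weights sum to 1
```

Note that the two equality constraints involve only the instrument variables, which mirrors the structural-constraint count argument made in the abstract; the scenario-indexed rows are simple deviation bounds.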
Procedia PDF Downloads 405