Search results for: singular value decomposition (SVD)
504 Gas Network Noncooperative Game
Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes dos Santos
Abstract:
The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) where the optimisation objectives are different, and a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through network connectivity constraints (the pipeline model). That is, in a scheme similar to tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The returned degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes over the whole optimisation period in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the optimisation period to rectify that solution. The viability of the proposed scheme is demonstrated on an abstract prototype and three example networks.
Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition
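As an illustration of the two-phase iterative scheme sketched in this abstract, the following minimal Python example lets each player choose its best setting against a coordination signal and uses the degree of infeasibility of a toy connectivity constraint as feedback; the quadratic objectives, the linear constraint, and all parameter values are illustrative assumptions, not the authors' gas-network model.

```python
import numpy as np

# Minimal sketch of the two-phase tatonnement-like scheme: each player (compressor
# station or source) proposes its best setting, then a connectivity check returns a
# degree of infeasibility that players use to revise their settings.
# The quadratic objectives and the linear constraint A x = b are illustrative
# placeholders, not the gas-network model of the paper.

def player_best_response(cost_weight, target, price):
    # Each player minimises cost_weight*(x - target)^2 + price*x  ->  closed form.
    return target - price / (2.0 * cost_weight)

def infeasibility(A, x, b):
    # Degree of violation of the network connectivity constraints.
    return A @ x - b

def coordinate(A, b, weights, targets, steps=200, lr=0.05):
    prices = np.zeros(A.shape[0])          # multipliers broadcast to the players
    for _ in range(steps):
        x = np.array([player_best_response(w, t, A[:, j] @ prices)
                      for j, (w, t) in enumerate(zip(weights, targets))])
        g = infeasibility(A, x, b)         # feedback on the 'quality' of settings
        if np.linalg.norm(g) < 1e-6:       # network feasible -> stop
            break
        prices += lr * g                   # revise the coordination signal
    return x, g

A = np.array([[1.0, 1.0, -1.0]])           # toy connectivity (flow balance) matrix
x, g = coordinate(A, b=np.array([5.0]),
                  weights=[1.0, 2.0, 1.5], targets=[2.0, 2.0, 4.0])
print(x, g)
```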
Procedia PDF Downloads 152
503 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has become a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper is an attempt to propose a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, ..., In at different periods (t = 1, 2, ..., n). Firstly, a preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric, and geometric corrections). The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil, and water), the objective being to extract the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos-Madrid in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.
Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model
Procedia PDF Downloads 428
502 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen. It is easily liquefied at ≈ 9-10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety concerns. Furthermore, industry already has an existing transport infrastructure consisting of pipelines, tank trucks, and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation technological solutions are at hand, a missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermo-catalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma and electrolysis. The decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar), as the thermocatalytic ammonia cracking process faces thermodynamic limitations. At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%. Gaining additional conversion up to, e.g., 99.9% necessitates heating to ca. 530°C. However, reaching thermodynamic equilibrium is infeasible, as a sufficient driving force is needed, requiring even higher temperatures. Limiting the conversion below the equilibrium composition is a more economical option. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, ...), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. For establishing > 99% conversion, temperatures close to 600°C are required. Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitation. Experiments in our packed bed tubular reactor set-up showed that extragranular diffusion limitations occur at low concentrations of NH₃ when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking, avoiding the use of noble metals. To this aim, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration by process simulations. A trade-off between conversion and favourable operational conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum in cracking pressure in terms of energy cost.
Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
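To make the pressure and temperature dependence mentioned above concrete, the ideal-gas equilibrium for a pure-NH₃ feed can be written as below; this is the standard textbook form with fractional conversion x at total pressure P, given here for orientation rather than taken from the authors' simulations.

```latex
% Ammonia cracking stoichiometry and ideal-gas equilibrium, written for a feed of
% pure NH3 with fractional conversion x at total pressure P (illustrative form).
\begin{align*}
  \mathrm{NH_3} &\rightleftharpoons \tfrac{1}{2}\,\mathrm{N_2} + \tfrac{3}{2}\,\mathrm{H_2}, \\
  K_p(T) &= \frac{p_{\mathrm{N_2}}^{1/2}\, p_{\mathrm{H_2}}^{3/2}}{p_{\mathrm{NH_3}}}
          = \frac{\left(\tfrac{x}{2(1+x)}\right)^{1/2}\left(\tfrac{3x}{2(1+x)}\right)^{3/2}}{\tfrac{1-x}{1+x}}\; P .
\end{align*}
% Because P enters with a positive exponent, higher pressure pushes the equilibrium
% back towards NH3, which is why low pressure and high temperature favour cracking.
```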
Procedia PDF Downloads 66
501 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame
Authors: Ardalan Sabamehr, Ashutosh Bagchi
Abstract:
Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical, and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI), and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively, so FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in a noisy environment and in finding closely spaced modes. The CWT technique, which is currently being developed, works on the time-frequency plane and shows reasonable performance in such conditions. Another advantage of the wavelet transform over other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (such as natural frequency, mode shape, and damping ratio) of a three-story steel frame, built in the Concordia University lab, by use of ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide, and 133 cm high, with no bracing along the long and short spans. Three uniaxial wired accelerometers (MicroStrain, with 100 mV/g accuracy) have been attached to the middle of each floor, and a gateway receives the data and sends them to the PC using the Node Commander software. The real-time monitoring has been performed for 20 seconds with a 512 Hz sampling rate. The test is repeated 5 times in each direction, excited by hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by use of a ridge detection method. In this paper, the partial derivative ridge detection technique has been applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The extracted results from all three methods have been compared, and it is demonstrated that CWT has the better performance in terms of accuracy in a noisy environment. The modal parameters such as natural frequency, damping ratio, and mode shapes are identified by all three methods.
Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform
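Since FDD is directly relevant to the singular value decomposition theme of this results page, a minimal sketch of the method is given below: the cross-power spectral density matrix of the output channels is assembled and an SVD is taken at each frequency line, with peaks of the first singular value marking candidate natural frequencies. The sampling rate echoes the abstract, but all other parameters and the random test data are illustrative assumptions, not the authors' measurements.

```python
import numpy as np
from scipy.signal import csd

# Minimal sketch of Frequency Domain Decomposition (FDD): build the cross-power
# spectral density (CPSD) matrix of the output channels and take its SVD at each
# frequency line; peaks of the first singular value indicate natural frequencies
# and the corresponding singular vectors approximate mode shapes.

def fdd(acc, fs=512, nperseg=1024):
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.zeros(len(f))
    shapes = np.zeros((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]                 # first singular value spectrum
        shapes[k] = U[:, 0]          # candidate mode shape at this line
    return f, s1, shapes

# toy usage: 3 channels of simulated ambient response (random data, 20 s at 512 Hz)
rng = np.random.default_rng(0)
acc = rng.standard_normal((3, 512 * 20))
f, s1, shapes = fdd(acc)
print(f[np.argmax(s1)])              # frequency of the dominant peak
```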
Procedia PDF Downloads 296
500 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterise the imagined motion. The CSP effectiveness depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The structure of the SBCSP contemplates dividing the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as a training vector for a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, because it has a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost in relation to the application of filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
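The following minimal sketch illustrates the sub-band idea described here: each trial is decomposed with an FFT, the coefficients inside one sub-band are kept, and a two-class CSP filter is computed from the band-limited covariances. The band edges, trial dimensions, and the simple CSP routine are illustrative assumptions and not the paper's exact 33-sub-band pipeline with LDA, Bayesian meta-classification, and SVM.

```python
import numpy as np

# Minimal sketch of the FFT-based sub-band idea: each EEG trial is decomposed with
# an FFT, the coefficients falling inside a given sub-band are kept, and a CSP-like
# spatial filter is computed per sub-band.

def subband_coeffs(trial, fs, f_lo, f_hi):
    # trial: (channels, samples) -> complex FFT coefficients restricted to the band
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    X = np.fft.rfft(trial, axis=1)
    keep = (freqs >= f_lo) & (freqs < f_hi)
    return X[:, keep]

def csp_filters(cov_a, cov_b, n_pairs=2):
    # Classic two-class CSP via eigen-decomposition of the class covariances.
    w, V = np.linalg.eig(np.linalg.solve(cov_a + cov_b, cov_a))
    order = np.argsort(w.real)
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return V[:, idx].real

def band_covariance(trials, fs, f_lo, f_hi):
    covs = []
    for tr in trials:
        C = subband_coeffs(tr, fs, f_lo, f_hi)
        S = (C @ C.conj().T).real
        covs.append(S / np.trace(S))
    return np.mean(covs, axis=0)

# toy usage: 10 trials per class, 22 channels, 2 s at 250 Hz, sub-band 8-12 Hz
rng = np.random.default_rng(1)
class_a = rng.standard_normal((10, 22, 500))
class_b = rng.standard_normal((10, 22, 500))
W = csp_filters(band_covariance(class_a, 250, 8, 12),
                band_covariance(class_b, 250, 8, 12))
print(W.shape)   # (22, 4) spatial filters for this sub-band
```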
Procedia PDF Downloads 128
499 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire where they were required to a) identify the meaning of 15 emoji when placed in isolation, and b) interpret the meaning of the same 15 emoji when placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation from their responses that identified the emoji in isolation, as well as from the originally intended meaning ascribed to the emoji. Based on an analysis of these results, it was discovered that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the least (~31%). However, an analysis of each of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer inspire their most literal meanings to them. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms. That is to say that they do not just indicate a one-to-one relation between a singular form and a singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of communication. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it does not have a spoken component or an ostensible grammar, and lacks standardization of use and meaning, as some might suggest, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declarative remains a function of time, and time alone.
Keywords: communication, emoji, language, Twitter
Procedia PDF Downloads 95
498 Mapping Methods to Solve a Modified Korteweg de Vries Type Equation
Authors: E. V. Krishnan
Abstract:
In this paper, we employ mapping methods to construct exact travelling wave solutions for a modified Korteweg-de Vries equation. We have derived periodic wave solutions in terms of Jacobi elliptic functions, as well as kink solutions and singular wave solutions in terms of hyperbolic functions.
Keywords: travelling wave solutions, Jacobi elliptic functions, solitary wave solutions, Korteweg-de Vries equation
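As a generic illustration of what mapping methods produce for equations of this type, the block below shows a Jacobi elliptic cn-type periodic solution and its degeneration to a hyperbolic sech profile as the modulus m approaches 1; the equation coefficients and the solution form are assumed for illustration and are not the specific solutions derived in the paper.

```latex
% Generic illustration of mapping-method output for a mKdV-type equation
% u_t + \alpha u^2 u_x + \beta u_{xxx} = 0 (coefficients assumed, not the paper's):
% a periodic Jacobi-elliptic solution and its soliton limit as the modulus m -> 1.
\begin{align*}
  u(x,t) &= A\,\operatorname{cn}\!\big(k(x - vt),\, m\big), \\
  \lim_{m \to 1} \operatorname{cn}(\xi, m) &= \operatorname{sech}\xi
  \quad\Longrightarrow\quad u(x,t) \to A\,\operatorname{sech}\!\big(k(x - vt)\big),
\end{align*}
% while tanh-type limits of sn give the kink solutions mentioned in the abstract.
```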
Procedia PDF Downloads 331
497 On Paranorm Zweier I-Convergent Sequence Spaces
Authors: Nazneen Khan, Vakeel A. Khan
Abstract:
In this article we introduce the paranorm Zweier I-convergent sequence spaces for a sequence of positive real numbers. We study some topological properties, prove the decomposition theorem, and study some inclusion relations on these spaces.
Keywords: ideal, filter, I-convergence, I-nullity, paranorm
Procedia PDF Downloads 481
496 Adaptive Target Detection of High-Range-Resolution Radar in Non-Gaussian Clutter
Authors: Lina Pan
Abstract:
In non-Gaussian clutter modelled as a spherically invariant random vector, and in cases where a certain estimated covariance matrix could become singular, the adaptive target detection of high-range-resolution radar is addressed. Firstly, the restricted maximum likelihood (RML) estimates of the unknown covariance matrix and scatterer amplitudes are derived for non-Gaussian clutter. The RML estimate of the texture is then obtained. Finally, a novel detector is devised. It is shown that, without secondary data, the proposed detector outperforms the existing Kelly binary integrator.
Keywords: non-Gaussian clutter, covariance matrix estimation, target detection, maximum likelihood
Procedia PDF Downloads 464
495 Determinants of Child Nutritional Inequalities in Pakistan: Regression-Based Decomposition Analysis
Authors: Nilam Bano, Uzma Iram
Abstract:
Globally, undernutrition has been a notable concern for researchers, academics, and policymakers for many centuries because of its severe consequences. Nutritional deficiencies create hurdles for people trying to achieve a better lifestyle. The consequences of undernutrition affect the economic progress of a country not only at the micro level but also at the macro level. The first five years of a child's life are considered critical for physical growth and brain development. In this regard, children require special care and good quality food (nutrient intake) to fulfil the nutritional demands of the growing body. Having sensitive stature and health, children under the age of 5 years are especially vulnerable to poor economic, housing, environmental, and other social conditions. Besides confronting economic challenges and political upheavals, Pakistan is also going through a rough patch in the context of social development. A majority of children face serious health problems in the absence of the required nutrition. The complexity of this issue is growing more severe day by day, and children in particular are left behind with different types of immune problems and vitamin and mineral deficiencies. It is noted that children from well-off backgrounds are less likely to be affected by undernutrition. In order to underline this issue, the present study aims to highlight the existing nutritional inequalities among children under five years of age in Pakistan. Moreover, this study strives to decompose the factors that most strongly affect the existing nutritional inequality and that deserve the consideration of the concerned authorities. The Pakistan Demographic and Health Survey 2012-13 was employed to assess the relevant indicators of undernutrition, namely stunting, wasting, and underweight, and the associated socioeconomic factors. The objectives were pursued through the relevant empirical techniques. Concentration indices were constructed to measure the nutritional inequalities using the three measures of undernutrition: stunting, wasting, and underweight. In addition, a decomposition analysis following logistic regression was carried out to unfold the determinants that most strongly affect the nutritional inequalities. The negative values of the concentration indices illustrate that children from marginalized backgrounds are affected by undernutrition more than their counterparts from rich households. Furthermore, the results of the decomposition analysis indicate that child age, size of the child at birth, wealth index, household size, parents' education, mother's health, and place of residence are the factors contributing most to the prevalence of the existing nutritional inequalities. Considering these results, it is suggested that policymakers design policies so that the health sector of Pakistan can be stimulated in a productive manner. Increasing the number of effective health awareness programs for mothers would make a notable difference. Moreover, parents' education must be a concern for policymakers, as it shows a significant association with nutritional inequalities among children in the present study.
Keywords: concentration index, decomposition analysis, inequalities, undernutrition, Pakistan
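For reference, the concentration index used in this abstract is commonly computed in its covariance form, shown below; this is the standard formula from the health-inequality literature and is not reproduced from the paper itself.

```latex
% Standard (covariance) form of the concentration index used in this literature;
% h_i is the health indicator (e.g., stunting) of child i, \mu its mean, and r_i
% the child's fractional rank in the wealth distribution.
\[
  C \;=\; \frac{2}{\mu}\,\operatorname{cov}(h_i,\, r_i),
  \qquad -1 \le C \le 1,
\]
% with C < 0 indicating that the burden is concentrated among poorer households.
```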
Procedia PDF Downloads 132
494 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative to fossil fuels is an urgent need as various environmental challenges arise in the world. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in the prime seat as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in a poorly designed setup using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of grafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2 + CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and continuous gas measurement, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of the catalyst preparation and facilitated new venues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiments based model. Experimental results showed that operating in the lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
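For context, the two competing decomposition routes of formic acid that make "CO-free" hydrogen the relevant target are the standard reactions below; they are common chemistry background rather than results of the paper.

```latex
% The two competing decomposition pathways of formic acid (standard chemistry,
% stated here for context): the desired dehydrogenation and the unwanted
% dehydration route that generates CO.
\begin{align*}
  \mathrm{HCOOH} &\longrightarrow \mathrm{H_2} + \mathrm{CO_2} \quad \text{(dehydrogenation, desired)} \\
  \mathrm{HCOOH} &\longrightarrow \mathrm{H_2O} + \mathrm{CO} \quad \text{(dehydration, to be suppressed)}
\end{align*}
```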
Procedia PDF Downloads 52
493 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations
Authors: Somveer Singh, Vineet Kumar Singh
Abstract:
We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and to reduce the computational cost. Convergence analysis, error estimation, and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.
Keywords: Legendre wavelets, operational matrices, partial integro-differential equation, viscoelasticity
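As orientation for readers, a weakly singular partial integro-differential equation of the kind such operational-matrix methods target can be written as below; this generic form, including the kernel (t-s)^{-1/2}, is an assumed illustration and not the specific equation treated in the paper.

```latex
% An illustrative weakly singular partial integro-differential equation of the kind
% treated by operational-matrix methods (generic form assumed for orientation only):
\[
  \frac{\partial u}{\partial t}(x,t)
  = \frac{\partial^2 u}{\partial x^2}(x,t)
  + \int_0^t (t-s)^{-1/2}\, u(x,s)\, \mathrm{d}s + f(x,t),
  \qquad 0 < x < 1,\; 0 < t \le T,
\]
% with prescribed initial data u(x,0) and boundary values u(0,t), u(1,t); the kernel
% (t-s)^{-1/2} is what makes the problem weakly singular.
```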
Procedia PDF Downloads 336
492 Treatment of Isopropyl Alcohol in Aqueous Solutions by VUV-Based AOPs within a Laminar-Falling-Film-Slurry Type Photoreactor
Authors: Y. S. Shen, B. H. Liao
Abstract:
This study aimed to develop the design equation of a laminar-falling-film-slurry (LFFS) type photoreactor for the treatment of organic wastewaters containing isopropyl alcohol (IPA) by VUV-based advanced oxidation processes (AOPs). The photoreactor design equations were established by combining the chemical kinetics of the photocatalytic system with a light absorption model within the photoreactor, and were used to predict the decomposition of IPA in aqueous solutions in photoreactors of different geometries at various operating conditions (volumetric flow rate, oxidants, catalysts, solution pH values, UV light intensities, and initial concentration of pollutants) to verify their rationality and feasibility. For treatment by the LFFS-VUV-only process, it was found that the decomposition rates of IPA in aqueous solutions increased with the increase of volumetric flow rate, VUV light intensity, and dosages of TiO2 and H2O2. The removal efficiencies of IPA by the photooxidation processes were in the order: VUV/H2O2 > VUV/TiO2/H2O2 > VUV/TiO2 > VUV only. In the VUV, VUV/H2O2, and VUV/TiO2/H2O2 processes, by integrating the reaction kinetic equations of IPA, the mass conservation equation, and the linear light source model, the photoreactor design equation can reasonably predict the reaction behavior of IPA at various operating conditions and describe the concentration distribution profiles of IPA within the photoreactors. The results of this research can be a useful basis for the future application of homogeneous and heterogeneous VUV-based advanced oxidation processes.
Keywords: isopropyl alcohol, photoreactor design, VUV, AOPs
Procedia PDF Downloads 377
491 Insight into the Binding Theme of CA-074Me to Cathepsin B: Molecular Dynamics Simulations and Scaffold Hopping to Identify Potential Analogues as Anti-Neurodegenerative Diseases
Authors: Tivani Phosa Mashamba-Thompson, Mahmoud E. S. Soliman
Abstract:
To date, the cause of neurodegeneration is not well understood, and diseases that stem from neurodegeneration currently have no known cures. The cathepsin B (CB) enzyme is known to be involved in the production of peptide neurotransmitters and toxic peptides in neurodegenerative diseases (NDs). CA-074Me is a membrane-permeable, irreversible, selective cathepsin B (CB) inhibitor, as confirmed by in vivo studies. Due to the lack of a crystal structure, the binding mode of CA-074Me with human CB at the molecular level has not been previously reported. The main aim of this study is to gain insight into the binding mode of CA-074Me to human CB using various computational tools. Herein, molecular dynamics simulations, binding free energy calculations, and per-residue energy decomposition analysis were employed to accomplish the aim of the study. Another objective was to identify novel CB inhibitors based on the structure of CA-074Me using fragment-based drug design with a scaffold hopping approach. Results showed that two of the designed ligands (hit 1 and hit 2) were found to have better binding affinities than the prototype inhibitor, CA-074Me, by ~2-3 kcal/mol. Per-residue energy decomposition showed that amino acid residues Cys29, Gly196, His197, and Val174 contributed the most towards the binding. The van der Waals forces were found to be the major component of the binding interactions. The findings of this study should assist medicinal chemists in the design of potential irreversible CB inhibitors.
Keywords: cathepsin B, scaffold hopping, docking, molecular dynamics, binding free energy, neurodegenerative diseases
Procedia PDF Downloads 377
490 Model Predictive Control Applied to Thermal Regulation of Thermoforming Process Based on the ARMAX Linear Model and a Quadratic Criterion Formulation
Authors: Moaine Jebara, Lionel Boillereaux, Sofiane Belhabib, Michel Havet, Alain Sarda, Pierre Mousseau, Rémi Deterre
Abstract:
Energy consumption efficiency is a major concern for the material processing industry, such as the thermoforming and molding processes. Indeed, these systems should deliver the right amount of energy at the right time to the processed material. Recent technical developments, as well as the particularities of heating system dynamics, have made Model Predictive Control (MPC) one of the best candidates for thermal control of several production processes such as molding and composite thermoforming, to name a few. The main principle of this technique is to use a dynamic model of the process inside the controller in real time in order to anticipate the future behavior of the process, which allows the current timeslot to be optimized while taking future timeslots into account. This study presents a procedure based on predictive control that strikes a balance between optimality, simplicity, and flexibility of implementation. The development of this approach is progressive, starting from the case of a single zone before its extension to the multizone and/or multisource case, thus taking into account the thermal couplings between adjacent zones. After a quadratic formulation of the MPC criterion to ensure the thermal control, the linear expression is retained in order to reduce calculation time thanks to the use of the ARMAX linear decomposition methods. The effectiveness of this approach is illustrated by experiment and simulation.
Keywords: energy efficiency, linear decomposition methods, model predictive control, mold heating systems
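A generic single-zone version of the ingredients named in this abstract, an ARMAX prediction model and a quadratic MPC criterion, is written below in standard notation; the symbols and horizons are the usual textbook ones and are assumptions rather than the paper's exact formulation.

```latex
% Generic single-zone forms behind the approach described above (illustrative
% notation, not the paper's exact model): an ARMAX prediction model and a
% quadratic MPC criterion penalising tracking error and control effort.
\begin{align*}
  A(q^{-1})\, y(t) &= B(q^{-1})\, u(t - d) + C(q^{-1})\, e(t), \\
  J &= \sum_{j=N_1}^{N_2} \big(\hat{y}(t+j\mid t) - w(t+j)\big)^2
      \;+\; \lambda \sum_{j=1}^{N_u} \Delta u(t+j-1)^2 ,
\end{align*}
% where y is the zone temperature, u the heating power, w the temperature setpoint,
% and N_1, N_2, N_u, \lambda are the usual prediction/control horizons and weighting.
```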
Procedia PDF Downloads 272
489 Microwave Heating and Catalytic Activity of Iron/Carbon Materials for H₂ Production from the Decomposition of Plastic Wastes
Authors: Peng Zhang, Cai Liang
Abstract:
Non-biodegradable plastic wastes have posed severe environmental and ecological contamination. Numerous technologies, such as pyrolysis, incineration, and landfilling, have already been employed for the treatment of plastic waste. Compared with conventional methods, microwaves have displayed unique advantages in the rapid production of hydrogen from plastic wastes. Understanding the interaction between microwave radiation and materials would promote the optimization of several parameters of the microwave reaction system. In this work, various carbon materials have been investigated to reveal their microwave heating performance and the ensuing catalytic activity. Results showed that the diversity in the heating characteristics was mainly due to the dielectric properties and the individual microstructures. Furthermore, the gaps and steps on the surface of the carbon materials lead to distortion of the electromagnetic field, which correspondingly induces plasma discharging. The intensity and location of the local plasma were also studied. For high-yield H₂ production, iron nanoparticles were selected as the active sites, and a series of iron/carbon bifunctional catalysts were synthesized. Apart from providing high catalytic activity, the iron particles, with nano-sizes close to the microwave skin depth, transfer the microwave irradiation into heat, intensifying the decomposition of the plastics. Under microwave radiation, iron supported on activated carbon with 10 wt.% loading exhibited the best catalytic activity for H₂ production. Specifically, the plastics were rapidly heated up and subsequently converted into H₂ with a hydrogen efficiency of 85%. This work demonstrates a deep understanding of microwave reaction systems and provides a basis for the optimization of plastic treatment.
Keywords: plastic waste, recycling, hydrogen, microwave
Procedia PDF Downloads 71
488 Solution of Some Boundary Value Problems of the Generalized Theory of Thermo-Piezoelectricity
Authors: Manana Chumburidze
Abstract:
We have considered a non-classical model of dynamical problems for a conjugated system of differential equations arising in thermo-piezoelectricity, which was formulated by Toupin-Mindlin. The basic concepts and the general theory of solvability for isotropic homogeneous elastic media are considered. They are treated by using the Laplace integral transform, the potential method, and singular integral equations. Approximate solutions of mixed boundary value problems for a finite domain bounded by some closed surface are constructed. They are solved explicitly by using the generalized Fourier series method.
Keywords: thermo-piezoelectricity, boundary value problems, Fourier's series, isotropic homogeneous elastic media
Procedia PDF Downloads 465
487 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration; here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction of PSDs. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
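Because truncated SVD is the core regularization device described here, a minimal generic sketch is given below: a smoothing toy kernel, synthetic noisy data, and a TSVD solution whose truncation level k acts as the regularization parameter. It illustrates plain TSVD only, not the authors' hybrid triple-parameter scheme or the Padé iteration.

```python
import numpy as np

# Minimal sketch of truncated SVD (TSVD) regularization for a discretized ill-posed
# problem g = K f + noise, of the kind that appears in microphysical retrievals.
# The kernel below is a smoothing toy example; the truncation level k plays the role
# of the regularization parameter.

def tsvd_solve(K, g, k):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    # Keep only the k largest singular values; the small ones amplify noise.
    coeffs = (U[:, :k].T @ g) / s[:k]
    return Vt[:k].T @ coeffs

# toy forward problem: smoothing kernel applied to a smooth "size distribution"
n = 100
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) / n
f_true = np.exp(-((x - 0.4) ** 2) / 0.02)
rng = np.random.default_rng(0)
g = K @ f_true + 0.001 * rng.standard_normal(n)

for k in (5, 10, 40):
    f_k = tsvd_solve(K, g, k)
    print(k, np.linalg.norm(f_k - f_true))   # too large a k lets the noise blow up
```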
Procedia PDF Downloads 343
486 Effect of the Applied Bias on Miniband Structures in Dimer Fibonacci InAs/Ga1-xInxAs Superlattices
Authors: Z. Aziz, S. Terkhi, Y. Sefir, R. Djelti, S. Bentata
Abstract:
The effect of a uniform electric field across multibarrier systems (InAs/InxGa1-xAs) is exhaustively explored by a computational model using the exact Airy function formalism and the transfer-matrix technique. In the case of a biased DFHBSL structure, a strong reduction in the transmission properties was observed, and the width of the miniband structure decreases linearly with the increase of the applied bias. This is due to the confinement of the states in the miniband structure, which becomes increasingly important (Wannier-Stark effect).
Keywords: dimer Fibonacci height barrier superlattices, singular extended state, exact Airy function, transfer matrix formalism
Procedia PDF Downloads 305
485 Anonymity and Irreplaceability: Gross Anatomical Practices in Japanese Medical Education
Authors: Ayami Umemura
Abstract:
Without exception, all the bodies dissected in gross anatomical practices are bodies that have lived irreplaceable lives, laughing and talking with family and friends. While medical education aims to cultivate medical knowledge that is universally applicable to all human bodies, it relies on a unique, irreplaceable, and singular entity. In this presentation, we will explore the 'irreplaceable relationship' that is cultivated between medical students and anonymous cadavers during gross anatomical practices, drawing on Emmanuel Levinas's 'ethics of the face' and Martin Buber's discussion of 'I-Thou'. Through this, we aim to present 'a different ethic' that emerges only in the context of face-to-face relationships, which differs from the generalized, institutionalized, mass-produced ethics seen in so-called 'ethics codes'. Since the 1990s, there has been a movement around the world to use gross anatomical practices as an 'educational tool' for medical professionalism and medical ethics, and some educational institutions have started disclosing the actual names, occupations, and places of birth of the corpses to medical students. These efforts have also been criticized because they lack medical calmness. In any case, the issue here is that this information is all about a past that medical students never know directly. The critical fact that medical students are building relationships from scratch and spending precious time together without any information about the corpses before death is overlooked. Amid gross anatomical practices, a medical student is exposed to anonymous cadavers with faces, touching and feeling them. In this presentation, we will examine a collection of essays on gross anatomical practices written by medical students across the country, collected by the Japanese Association for Volunteer Body Donation since 1978. There, we see the students calling out to the corpse, being called out to, being encouraged, superimposing the cadavers on their own immediate family, regretting parting, and shedding tears. Then, medical students can be seen addressing the dead body in the second person singular, 'you'. These behaviors reveal an irreplaceable relationship between the anonymous cadavers and the medical students. The moment they become involved in an irreplaceable relationship between 'I and you', an accidental and anonymous encounter becomes inevitable. When medical students notice that they are the inevitable receivers of voluntary and addressless gifts, they pledge to become 'Good Doctors' indebted to these anonymous persons. This presentation aims to present 'a different ethic' based on the uniqueness and irreplaceability that come from the faces of others embedded in each context, which is different from 'routine' and 'institutionalized' ethics. That can only be realized 'because of anonymity'.
Keywords: anonymity, irreplaceability, uniqueness, singularity, Emmanuel Levinas, Martin Buber, Alain Badiou, medical education
Procedia PDF Downloads 62
484 The Lopsided Burden of Non-Communicable Diseases in India: Evidences from the Decade 2004-2014
Authors: Kajori Banerjee, Laxmi Kant Dwivedi
Abstract:
India is part of the ongoing globalization, contemporary convergence, industrialization, and technical advancement taking place worldwide. Some of the manifestations of this evolution are rapid demographic, socio-economic, epidemiological, and health transitions. There has been a considerable increase in non-communicable diseases due to changes in lifestyle. This study aims to assess the direction of the burden of disease and compare the pressure of infectious diseases against cardiovascular, endocrine, metabolic, and nutritional diseases. The change in prevalence over the ten-year period (2004-2014) is further decomposed to determine the net contribution of various socio-economic and demographic covariates. The present study uses the recent 71st (2014) and 60th (2004) rounds of the National Sample Survey. The pressure of infectious diseases against cardiovascular (CVD) and endocrine, metabolic, and nutritional (EMN) diseases during 2004-2014 is assessed through prevalence rates (PR), hospitalization rates (HR), and case fatality rates (CFR). The prevalence of non-communicable diseases is further used as a dependent variable in a logit regression to find the effect of various social, economic, and demographic factors on the chances of suffering from a particular disease. A multivariate decomposition technique further assists in determining the net contribution of the socio-economic and demographic covariates. This paper provides evidence of a stagnation of the burden of communicable diseases (CD) and a rapid increase in the burden of non-communicable diseases (NCD), uniformly for all population sub-groups in India. The CFR for CVD increased drastically in 2004-2014. The logit regression indicates that the chances of suffering from CVD and EMN diseases are significantly higher among urban residents, older ages, females, and widowed, divorced, and separated individuals. The decomposition displays ample proof that improvements in quality-of-life markers such as education, urbanization, and longevity have positively contributed to the increase in NCD prevalence rates. In India's current epidemiological phase, the compression theory of morbidity is in action, as a significant rise in the probability of contracting NCDs over the period among older ages is observed. Age is found to be a vital contributor to the increase in the probability of having CVD and EMN diseases over the study decade 2004-2014 in the nationally representative sample of the National Sample Survey.
Keywords: cardio-vascular disease, case-fatality rate, communicable diseases, hospitalization rate, multivariate decomposition, non-communicable diseases, prevalence rate
Procedia PDF Downloads 312
483 Engaging Teacher Inquiry via New Media in Traditional and E-Learning Environments
Authors: Daniel A. Walzer
Abstract:
As the options for course delivery and development expand, plenty of misconceptions still exist concerning e-learning and online course delivery. Classroom instructors often discuss pedagogy, methodologies, and best practices of teaching from a singular, traditional in-class perspective. As more professors integrate online, blended, and hybrid courses into their dossiers, a clearly defined rubric for gauging online course delivery is essential. The transition from a traditional learning structure towards an updated distance-based format requires careful planning, evaluation, and revision. This paper examines how new media stimulate reflective practice and guided inquiry to improve pedagogy, engage interdisciplinary collaboration, and supply rich qualitative data for future research projects in media arts disciplines.
Keywords: action research, inquiry, new media, reflection
Procedia PDF Downloads 307
482 Efficient Study of Substrate Integrated Waveguide Devices
Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand
Abstract:
This paper presents a study of SIW (Substrate Integrated Waveguide) circuits with a rigorous and fast original approach based on an iterative process (WCIP). The suggested theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with the HFSS software.
Keywords: convergence study, HFSS, modal decomposition, SIW circuits, WCIP method
Procedia PDF Downloads 498
481 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid
Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm which combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that seeks both the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. The data of 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulated results indicate that the number of sensors could be reduced using the optimal placement method proposed by the study and that an acceptable error in conductor temperature could be achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid
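The following minimal sketch shows a plain particle swarm optimizer minimizing a toy placement-style cost that trades off the number of selected sensors against a stand-in temperature error; the inertia and acceleration coefficients, the continuous relaxation of the placement variables, and the objective itself are all illustrative assumptions, not the hybrid POD/PSO method of the paper.

```python
import numpy as np

# Minimal sketch of particle swarm optimization (PSO) minimizing a generic objective;
# in a placement setting the position vector could encode candidate sensor locations
# (here relaxed to continuous weights for brevity).

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# toy objective: trade off the number of "selected" sensors against a residual error
def placement_cost(weights, alpha=0.1):
    n_selected = np.sum(weights > 0.5)
    residual = np.sum((1.0 - weights) ** 2) / weights.size   # stand-in for temp. error
    return residual + alpha * n_selected

best, cost = pso(placement_cost, dim=20)
print(best > 0.5, cost)
```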
Procedia PDF Downloads 432
480 A Study on Soil Micro-Arthropods Assemblage in Selected Plantations in The Nilgiris, Tamilnadu
Authors: J. Dharmaraj, C. Gunasekaran
Abstract:
Invertebrates are reliable ecological indicators of disturbance of forest ecosystems, and they respond to environmental changes more quickly than other fauna. Among them, terrestrial invertebrates are vital to functioning ecosystems, contributing to processes such as decomposition, nutrient cycling, and soil fertility. The natural ecosystems of forests have been subject to various types of disturbances, which lead to a decline of flora and fauna. The comparative diversity of micro-arthropods in natural forest, wattle plantations, and eucalyptus plantations was studied in the Nilgiris. The study area was divided into five major sites: Emerald (Site I), Thalaikundha (Site II), Kodapmund (Site III), Aravankad (Site IV), and Kattabettu (Site V). The research was conducted during the period from March 2014 to August 2014. Leaf and soil samples were collected, and specimens were isolated using the Berlese funnel extraction method. Specimens were identified according to their morphology (Balogh 1972). The results of the present study clearly showed variation in soil pH, NPK (major nutrients), and organic carbon among the study sites. The chemical components of the leaf litter of the plantations decreased the diversity of micro-arthropods and the decomposition rate, leading to a low amount of carbon and other nutrients in the soil. Moreover, eucalyptus and wattle plantations decrease the availability of ground water to other plantations and to micro-arthropods, and hence affect soil fertility. Hence, the present study suggests minimizing the growth of wattle and eucalyptus plantations in natural areas, which may help to reduce the decline of the forests.
Keywords: micro-arthropods, assemblage, berlese funnel, morphology, NPK, nilgiris
Procedia PDF Downloads 308
479 The Effect of Metal-Organic Framework Pore Size to Hydrogen Generation of Ammonia Borane via Nanoconfinement
Authors: Jing-Yang Chung, Chi-Wei Liao, Jing Li, Bor Kae Chang, Cheng-Yu Wang
Abstract:
The chemical hydride ammonia borane (AB, NH3BH3) has drawn attention in hydrogen energy research for its high theoretical gravimetric capacity (19.6 wt%). Nevertheless, the elevated AB decomposition temperature (Td) and unwanted byproducts are the main hurdles to practical application. It was reported that the byproducts and Td can be reduced with the nanoconfinement technique, in which AB molecules are confined in porous materials such as porous carbon, zeolites, metal-organic frameworks (MOFs), etc. Although nanoconfinement empirically shows effectiveness in reducing the hydrogen generation temperature of AB, the theoretical mechanism is debatable. A low Td was reported in AB@IRMOF-1 (Zn4O(BDC)3, BDC = benzenedicarboxylate), where Zn atoms form closed metal-cluster secondary building units (SBUs) with no exposed active sites. Other than nanosized hydride, it was also observed that catalyst addition facilitates AB decomposition in composites such as Li-catalyzed carbon CMK-3 and the MOF JUC-32-Y with exposed Y3+, etc. It is believed that nanosized AB is critical for lowering Td, while active sites eliminate byproducts. Nonetheless, some researchers claimed that it is the catalytic sites, not the hydride size, that are the critical factor in reducing Td. That group physically ground AB with ZIF-8 (zeolitic imidazolate framework, Zn(2-methylimidazolate)2) and found a similar reduced-Td phenomenon, even though the AB molecules were not 'confined' or formed into nanoparticles by physical hand grinding. This suggests that the catalytic reaction, not nanoconfinement, leads to the promotion of AB dehydrogenation. In this research, we explored the possible criteria governing the hydrogen production temperature of nanoconfined AB in MOFs with different pore sizes and active sites. MOFs with metal SBUs such as Zn (IRMOF), Zr (UiO), and Al (MIL-53), accompanied by various organic ligands (BDC and BPDC; BPDC = biphenyldicarboxylate), were modified with AB. Excess MOF was used so that the AB size was constrained in the micropores, estimated by revisiting the Horvath-Kawazoe model. AB dissolved in methanol was added to the MOF crystallites with a MOF pore volume to AB ratio of 4:1, and the slurry was dried under vacuum to collect AB@MOF powders. With TPD-MS (temperature-programmed desorption with mass spectroscopy), we observed that Td was reduced with smaller MOF pores. For example, it was reduced from 100°C to 64°C with MOF micropores of ~1 nm, while it remained ~90°C with pore sizes up to 5 nm. The behavior of Td as a function of AB crystallite radius obeys thermodynamics when the Gibbs free energy of AB decomposition is zero, and no obvious correlation with metal type was observed. In conclusion, we discovered that the Td of AB is proportional to the reciprocal of the MOF pore size, possibly a stronger effect than that of the active sites.
Keywords: ammonia borane, chemical hydride, metal-organic framework, nanoconfinement
Procedia PDF Downloads 186
478 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to the decarbonisation of their electricity, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty of power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need to develop efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than the integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique, and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
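A minimal sketch of the classifier idea described in this abstract is given below: features derived from a relaxed subproblem (here just the relaxed value of a binary variable plus one extra synthetic feature) are used to train a logistic-regression classifier that predicts the binary value. The data, the features, and the repair step mentioned in the comments are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch: solve (offline) many LP-relaxed subproblems, record the relaxed
# values and simple features of each binary variable, and train a classifier that
# predicts the 0/1 value the integer subproblem would choose. The data below are
# synthetic placeholders, not the paper's column-generation subproblems.

rng = np.random.default_rng(0)
n_samples = 5000
relaxed_value = rng.uniform(0.0, 1.0, n_samples)        # LP value of the binary var
reduced_cost = rng.normal(0.0, 1.0, n_samples)          # illustrative extra feature
# synthetic "ground truth": the integer solution mostly follows the relaxation
y = (relaxed_value + 0.1 * rng.standard_normal(n_samples) > 0.5).astype(int)

X = np.column_stack([relaxed_value, reduced_cost])
clf = LogisticRegression().fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))

# At solve time, the predicted binaries would be fixed in the subproblem; a column
# whose prediction turns out wrong can still be repaired by an exact solve, which is
# one way to retain an optimality guarantee (a design choice assumed here, not quoted).
```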
Procedia PDF Downloads 85
477 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We present a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves on state-of-the-art segmentation fidelity for high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
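One common way to realize the "tensor decomposition to modulate the number of hypercolumn features" mentioned here is a truncated SVD along the channel dimension of the feature tensor, sketched below; the unfolding, the chosen rank, and the toy dimensions are assumptions for illustration rather than the exact procedure used in HyperSeg.

```python
import numpy as np

# Minimal sketch of compressing hypercolumn features with a truncated SVD along the
# channel dimension: the (H, W, C) feature tensor is unfolded to a (H*W, C) matrix,
# projected onto the top-k right singular vectors, and kept in the reduced basis.

def compress_hypercolumns(features, k):
    h, w, c = features.shape
    flat = features.reshape(-1, c)                      # unfold: pixels x channels
    flat = flat - flat.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    basis = Vt[:k].T                                    # (C, k) channel basis
    return flat @ basis, basis                          # compressed (H*W, k) features

# toy usage: a 64x64 hypercolumn stack with 512 channels compressed to 160 channels
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 64, 512)).astype(np.float32)
compressed, basis = compress_hypercolumns(feats, k=160)
print(compressed.shape, basis.shape)
```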
Procedia PDF Downloads 120
476 A Proof of the N. Davydov Theorem for Douglis Algebra Valued Functions
Authors: Jean-Marie Vilaire, Ricardo Abreu-Blaya, Juan Bory-Reyes
Abstract:
The classical Beltrami system of elliptic equations generalizes the Cauchy-Riemann equation in the complex plane and offers the possibility of considering homogeneous systems with no terms of zero order. The theory of Douglis-valued functions, called hyperanalytic functions, is a special case of the above situation. In this note, we prove an analogue of the N. Davydov theorem in the framework of the theory of hyperanalytic functions. The methodology used contemplates characteristic methods of hypercomplex analysis as well as the theories of singular integral operators and elliptic systems of partial differential equations.
Keywords: Beltrami equation, Douglis algebra-valued function, hypercomplex Cauchy type integral, Sokhotski-Plemelj formulae
Procedia PDF Downloads 250
475 Exact Solutions of a Nonlinear Schrodinger Equation with Kerr Law Nonlinearity
Authors: Muna Alghabshi, Edmana Krishnan
Abstract:
A nonlinear Schrodinger equation has been considered and solved by mapping methods in terms of Jacobi elliptic functions (JEFs). The equation under consideration has a linear evolution term, linear and nonlinear dispersion terms, the Kerr law nonlinearity term, and three terms representing the contribution of metamaterials. This equation, which has applications in optical fibers, is found to have soliton solutions, shock wave solutions, and singular wave solutions when the modulus of the JEFs approaches 1, which is the infinite-period limit. The equation with special values of the parameters has also been solved using the tanh method.
Keywords: Jacobi elliptic function, mapping methods, nonlinear Schrodinger Equation, tanh method
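For orientation, the block below writes a generic Kerr-law nonlinear Schrodinger equation together with its standard bright (sech) soliton, which arises as the infinite-period limit of the Jacobi elliptic-function solutions mentioned in the abstract; the coefficients and solution parameters are generic placeholders, and the paper's additional metamaterial terms are not reproduced.

```latex
% Generic Kerr-law nonlinear Schrodinger equation and its standard bright soliton
% (the additional metamaterial terms of the paper are not reproduced here):
\begin{align*}
  i\,q_t + a\,q_{xx} + b\,|q|^2 q &= 0, \\
  q(x,t) &= A\,\operatorname{sech}\!\big(B(x - vt)\big)\,
            e^{\,i(-\kappa x + \omega t + \theta)},
\end{align*}
% which is the m -> 1 (infinite-period) limit of the corresponding Jacobi
% elliptic-function solutions, in line with the limit mentioned in the abstract.
```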
Procedia PDF Downloads 314