Search results for: filtered back projection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2006

776 Landfill Failure Mobility Analysis: A Probabilistic Approach

Authors: Ali Jahanfar, Brajesh Dubey, Bahram Gharabaghi, Saber Bayat Movahed

Abstract:

Ever-increasing population growth in major urban centers and environmental challenges in siting new landfills have resulted in a growing trend toward the design of mega-landfills, some with extraordinary heights and dangerously steep slopes. Landfill failure mobility risk analysis is one of the most uncertain types of dynamic rheology models due to the very large inherent variability in the shear strength properties of heterogeneous solid waste material. The waste flows of three historic dumpsite failures and two landfill failures were back-analyzed using run-out modeling with the DAN-W model. The travel distances of the waste flow during landfill failures were calculated by taking into account variability in material shear strength properties. The probability distribution functions for the shear strength properties of the waste material were grouped into four major classes based on waste material compaction (landfills versus dumpsites) and composition (high versus low quantity of high-shear-strength waste materials such as wood, metal, plastic, paper and cardboard in the waste). This paper presents a probabilistic method for estimating the spatial extent of waste avalanches after a potential landfill failure, used to create maps of vulnerability scores that inform property owners and residents of the level of risk.
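The probabilistic idea can be sketched as a small Monte Carlo exercise. This is a hedged illustration only: the friction-angle distribution parameters and the simple friction-sled relation L = H / tan(phi) below are assumptions for demonstration, not the study's DAN-W rheology or fitted waste-class distributions.

```python
import math
import random

def runout_distances(n, mu_phi=25.0, sd_phi=5.0, height=50.0):
    """Sample the waste friction angle phi (degrees) from an assumed
    normal distribution and map each draw to a travel distance with a
    simple sled relation L = H / tan(phi). Illustrative only."""
    out = []
    for _ in range(n):
        phi = random.gauss(mu_phi, sd_phi)
        phi = max(phi, 5.0)                       # keep tan() well-behaved
        out.append(height / math.tan(math.radians(phi)))
    return out

random.seed(1)
d = sorted(runout_distances(2000))
# Median and 95th-percentile travel distance: the tail quantiles are
# what a vulnerability-score map would be drawn from.
print(round(d[len(d) // 2], 1), round(d[int(0.95 * len(d))], 1))
```

Repeating this per waste class (compaction and composition) would give one run-out distribution, and hence one vulnerability contour, per class.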

Keywords: landfill failure, waste flow, Voellmy rheology, friction coefficient, waste compaction and type

Procedia PDF Downloads 288
775 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses that vector to train a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than that of the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the reduction in computational cost demonstrate the potential of the FFT in EEG signal filtering in the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
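The frequency-decomposition step can be sketched as follows. This is an illustrative reconstruction only: the sampling rate, the contiguous band layout, and stacking real/imaginary parts as features are assumptions, not the paper's exact SBCSP configuration.

```python
import numpy as np

def fft_subbands(epoch, fs=250.0, fmax=40.0, n_bands=33):
    """Decompose one EEG epoch (channels x samples) into 33 sub-band
    coefficient matrices via a single FFT, standing in for a bank of
    IIR band-pass filters. Band edges are contiguous (assumption)."""
    n = epoch.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    coeffs = np.fft.rfft(epoch, axis=-1)          # channels x frequency bins
    edges = np.linspace(0.0, fmax, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        sub = coeffs[:, mask]
        # Stack real and imaginary parts so CSP/LDA sees real features.
        bands.append(np.hstack([sub.real, sub.imag]))
    return bands

rng = np.random.default_rng(0)
bands = fft_subbands(rng.standard_normal((22, 500)))  # 22 channels, 2 s at 250 Hz
print(len(bands), bands[0].shape)
```

Each of the 33 matrices would then feed its own CSP filter and LDA classifier, with the LDA scores collected into the vector for the Bayesian meta-classifier and global SVM.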

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 127
774 Correlation between Clinical Measurements of Static Foot Posture in Young Adults

Authors: Phornchanok Motantasut, Torkamol Hunsawong, Lugkana Mato, Wanida Donpunha

Abstract:

Identifying abnormal foot posture is important for prescribing appropriate management in patients with lower limb disorders and chronic non-specific low back pain. The normalized navicular height truncated (NNHt) and the foot posture index-6 (FPI-6) have been recommended as common, simple, valid, and reliable static measures for clinical application. The NNHt is a single-plane measure, while the FPI-6 is a triple-plane measure. At present, there is inadequate information about the correlation between the NNHt and the FPI-6 for categorizing foot posture, which makes it difficult to choose the appropriate assessment. Therefore, the present study aimed to determine the correlation between the NNHt and FPI-6 measures in adult participants with asymptomatic feet. Methods: A cross-sectional descriptive study was conducted in 47 asymptomatic individuals (23 males and 24 females) aged 28.89 ± 7.67 years with body mass index 21.73 ± 1.76 kg/m². The right foot was measured twice by an experienced rater using the NNHt and the FPI-6. The sequence of the measures was randomly arranged for each participant, with a 10-minute rest between tests. Pearson's correlation coefficient (r) was used to determine the relationship between the measures. Results: The mean NNHt score was 0.23 ± 0.04 (range 0.15 to 0.36) and the mean FPI-6 score was 4.42 ± 4.36 (range -6 to +11). Pearson's correlation coefficient between the NNHt score and the FPI-6 score was -0.872 (p < 0.01). Conclusion: The present finding demonstrates a strong negative correlation between the NNHt and FPI-6 in adult feet and implies that either measure could substitute for the other in identifying foot posture.
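The statistic used above is the ordinary Pearson correlation; a minimal sketch follows, with invented data (not the study's measurements) chosen so that a pronated foot has a lower navicular height and a higher FPI-6 score, mirroring the negative correlation reported:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Toy paired scores: low NNHt (flatter arch) pairing with high FPI-6.
nnht = [0.15, 0.19, 0.23, 0.27, 0.31, 0.36]
fpi6 = [11, 7, 5, 1, -2, -6]
print(round(pearson_r(nnht, fpi6), 3))
```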

Keywords: foot posture index, foot type, measurement of foot posture, navicular height

Procedia PDF Downloads 136
773 Preparation of Biodegradable Methacrylic Nanoparticles by Semicontinuous Heterophase Polymerization for Drugs Loading: The Case of Acetylsalicylic Acid

Authors: J. Roberto Lopez, Hened Saade, Graciela Morales, Javier Enriquez, Raul G. Lopez

Abstract:

The implementation of systems based on nanostructures for drug delivery applications has gained relevance in recent studies focused on biomedical applications. Although several nanostructures can act as drug carriers, the use of polymeric nanoparticles (PNP) has been widely studied for this purpose. The main issue for these nanostructures is controlling the size below 50 nm with a narrow size distribution, because they must pass through different physiological barriers and avoid being filtered by the kidneys (< 10 nm) or the spleen (> 100 nm). Considering these and other factors, drug-loaded nanostructures with sizes between 10 and 50 nm are preferred in the development and study of PNP/drug systems. In this sense, Semicontinuous Heterophase Polymerization (SHP) offers the possibility of obtaining PNP in the desired size range. Accordingly, methacrylic copolymer nanoparticles were obtained by SHP. The reactions were carried out in a jacketed glass reactor with the required quantities of water, ammonium persulfate as initiator, sodium dodecyl sulfate/sodium dioctyl sulfosuccinate as surfactants, and methyl methacrylate and methacrylic acid as monomers at a molar ratio of 2/1, respectively. The monomer solution was dosed dropwise during the reaction at 70 °C under mechanical stirring at 650 rpm. Nanoparticles of poly(methyl methacrylate-co-methacrylic acid) were loaded with acetylsalicylic acid (ASA, aspirin) by a chemical adsorption technique. The purified latex was put in contact with a solution of ASA in dichloromethane (DCM) at 0.1, 0.2, 0.4 or 0.6 wt-%, at 35 °C for 12 hours. Given the boiling point of DCM, as well as the densities of DCM and water, the loading process is complete when all the DCM has evaporated. The hydrodynamic diameter was measured after polymerization by quasi-elastic light scattering and transmission electron microscopy, before and after the loading procedures with ASA.
The quantitative and qualitative analyses of PNP loaded with ASA were performed by infrared spectroscopy, differential scanning calorimetry and thermogravimetric analysis. The molar mass distributions of the polymers were also determined in a gel permeation chromatography apparatus. The loading capacity and efficiency were determined by gravimetric analysis. The hydrodynamic diameter results for methacrylic PNP without ASA showed a narrow distribution with an average particle size around 10 nm and a methyl methacrylate/methacrylic acid molar ratio of 2/1, the same composition as Eudragit S100, a commercial compound widely used as an excipient. Moreover, the latex was stabilized at a relatively high solids content (around 11%), with a monomer conversion of almost 95% and a number-average molecular weight around 400 kg/mol. The average particle size in the PNP/aspirin systems fluctuated between 18 and 24 nm depending on the initial percentage of aspirin in the loading process, with a drug content as high as 24% and a loading efficiency of 36%. Such average sizes have not been reported in the literature; thus, the methacrylic nanoparticles reported here can be loaded with a considerable amount of ASA and used as a drug carrier.

Keywords: aspirin, biocompatibility, biodegradable, Eudragit S100, methacrylic nanoparticles

Procedia PDF Downloads 137
772 Numerical Analysis of Supersonic Impinging Jets onto Resonance Tube

Authors: Shinji Sato, M. M. A. Alam, Manabu Takao

Abstract:

In recent years, investigation of the unsteady flow inside resonance tubes has become a strongly motivated research field owing to their potential application as high-frequency actuators. By generating a shock wave inside the resonance tube, high temperature and pressure can be achieved inside the tube, and this high temperature can also be used to ignite a jet engine. In the present research, a computational fluid dynamics (CFD) analysis was carried out to investigate the flow inside the resonance tube. The density-based solver rhoCentralFoam in OpenFOAM was used to numerically simulate the flow. The supersonic jet, driven by a cylindrical nozzle with a nominal exit diameter of d = 20.3 mm, impinged onto the resonance tube. The jet pressure ratio was varied between 2.6 and 7.8. The gap s between the nozzle exit and the tube entrance was varied between 1.5d and 3.0d. The diameter and length of the tube were taken as D = 1.25d and L = 3.0D, respectively. As a result, when the supersonic jet impinged onto the resonance tube, a compression wave was found to be generated inside the tube and to propagate towards the tube end wall. This wave train resulted in a rise in the end-wall gas temperature and pressure. In the outflow phase, the gas near the tube end wall was found to cool back isentropically to its initial temperature. Thus, the compression waves repeated a reciprocating motion in the tube like a piston, and fluctuations in the end-wall pressures and temperatures were observed. A significant change was found in the end-wall pressures and temperatures with a change in jet flow conditions. In this study, the highest temperature was confirmed at a jet pressure ratio of 4.2 and a gap of s = 2.0d.

Keywords: compressible flow, OpenFOAM, oscillations, resonance tube, shock wave

Procedia PDF Downloads 144
771 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimizing the rate of penetration (ROP). ROP measures the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to non-productive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, ranked by high-instance ROP, are identified. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not previously explored through offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using the data points from the 20 top-performing historical wells. The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
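The IDW step of phase one can be sketched as follows. This is a hedged illustration: the field names, well coordinates, and the power exponent are invented for demonstration, not values from the study.

```python
import math

def idw_parameters(offset_wells, target, power=2.0):
    """Estimate the controllable parameters (WOB, RPM, GPM) at a
    target location as an inverse-distance-weighted mean over offset
    wells, i.e., a conditioned mean based on physical distance."""
    num = {"wob": 0.0, "rpm": 0.0, "gpm": 0.0}
    den = 0.0
    for w in offset_wells:
        d = math.dist((w["x"], w["y"]), target)
        if d == 0.0:                      # target coincides with an offset well
            return {k: w[k] for k in num}
        weight = 1.0 / d ** power
        for k in num:
            num[k] += weight * w[k]
        den += weight
    return {k: v / den for k, v in num.items()}

wells = [
    {"x": 0.0, "y": 0.0, "wob": 25.0, "rpm": 120.0, "gpm": 800.0},
    {"x": 4.0, "y": 0.0, "wob": 35.0, "rpm": 140.0, "gpm": 900.0},
]
print(idw_parameters(wells, (1.0, 0.0)))
```

The nearer well dominates the weighted mean, so the estimate at (1, 0) sits much closer to the first well's parameters than to the second's.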

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 129
770 Mobile Traffic Management in Congested Cells using Fuzzy Logic

Authors: A. A. Balkhi, G. M. Mir, Javid A. Sheikh

Abstract:

To cater to the demands of increasing traffic with new applications, cellular mobile networks face changes in infrastructure deployment that make them heterogeneous. To reduce overhead processing, densely deployed cells require smart behavior, with self-organizing capabilities and high adaptation to the neighborhood. We propose the self-organized sharing of unused resources, typically the excess unused channels of neighbouring cells, with densely populated cells to reduce handover failure rates. Neighboring cells share unused channels after fulfilling a conditional candidature criterion based on threshold values, so that they do not themselves suffer channel starvation in case of an abrupt change in traffic pattern. Cells are classified as 'red', 'yellow', or 'green' according to the channels available in the cell, which are governed by the traffic pattern and the thresholds. To combat the deficiency of channels in a red cell, the migration of unused channels from under-loaded cells is explored hierarchically, starting from the qualified candidate neighboring cells. The resources are returned when the congested cell is again capable of self-contained traffic management. In either case, conditional sharing of resources is executed for enhanced traffic management, so that User Equipment (UE) is provided uninterrupted service with high Quality of Service (QoS). The fuzzy logic-based simulation results show that the proposed algorithm is efficient and improves the rate of successful handoffs.
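The classification and borrowing logic can be sketched with crisp thresholds as below. This is an illustrative simplification: the threshold values and the reserve-based candidature criterion are assumptions, and the paper's fuzzy membership functions are replaced here by hard cutoffs.

```python
def classify_cell(free_channels, low=5, high=15):
    """Classify a cell by its free channels (thresholds illustrative)."""
    if free_channels < low:
        return "red"
    if free_channels < high:
        return "yellow"
    return "green"

def lend_channels(neighbors, needed, reserve=10):
    """Borrow unused channels for a congested 'red' cell from 'green'
    neighbors only, never pushing a lender below its own reserve, so
    the lender is not itself starved by an abrupt traffic change."""
    borrowed = {}
    for name, free in neighbors.items():
        if needed <= 0:
            break
        if classify_cell(free) == "green":
            give = min(free - reserve, needed)
            if give > 0:
                borrowed[name] = give
                needed -= give
    return borrowed

print(lend_channels({"A": 20, "B": 8, "C": 30}, needed=12))
```

In a fuzzy implementation, `classify_cell` would return graded memberships in red/yellow/green rather than a single label, and the lending decision would defuzzify those grades.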

Keywords: candidate cell, channel sharing, fuzzy logic, handover, small cells

Procedia PDF Downloads 115
769 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown strong performance when recognizing repetitive activities such as walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition. However, water-based activity recognition has largely focused on supporting elite and competitive swimmers, who already have excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support their individual motions to help them improve. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time-window input can cause performance issues in the machine learning model: the window's size can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on changes in the sensor value. Our system uses a multi-phase segmentation method that extracts all peaks and valleys from each axis of an accelerometer placed on the swimmer's lower back. This results in high recognition performance using leave-one-subject-out validation in our study with 20 beginner swimmers, with our model optimized on the final dataset achieving an F-score of 0.95.
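The second segmentation step can be sketched as a simple peak/valley pass over one accelerometer axis. This is an illustration under stated assumptions: a strict-neighbor test on toy data stands in for the full method's smoothing and resampling, which are not reproduced here.

```python
def peaks_and_valleys(signal):
    """Return indices of local peaks and valleys in one sensor axis;
    these indices then serve as segment boundaries for feature
    extraction within the initial time window."""
    peaks, valleys = [], []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
        elif signal[i] < signal[i - 1] and signal[i] < signal[i + 1]:
            valleys.append(i)
    return peaks, valleys

# One axis of lower-back acceleration over a toy stroke cycle:
axis = [0.0, 0.8, 1.5, 0.9, 0.1, -0.7, -1.2, -0.5, 0.3, 1.1, 0.4]
p, v = peaks_and_valleys(axis)
print(p, v)
```

Segments cut at these boundaries adapt to each swimmer's stroke rate, avoiding the fixed-window-size tuning problem described above.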

Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition

Procedia PDF Downloads 114
768 Great Food, No Atmosphere: A Review of Performance Nutrition for Application to Extravehicular Activities in Spaceflight

Authors: Lauren E. Church

Abstract:

Background: Extravehicular activities (EVAs) are a critical aspect of missions aboard the International Space Station (ISS). It has long been noted that the spaceflight environment and the physical demands of EVA cause physiological and metabolic changes in humans; this review aims to combine these findings with nutritional studies in analogues of the spaceflight and EVA environments to make nutritional recommendations for astronauts scheduled for, and immediately returning from, EVAs. Results: Energy demands increase during orbital spaceflight and increase further during EVA. Another critical element of EVA nutrition is adequate hydration. Orbital EVA appears to provide adequate hydration under current protocols, but during lunar surface EVA (LEVA) and in a 10 km lunar walk-back test, astronauts have stated that up to 20% more water was needed. Previous attempts at in-suit edible sustenance have not seen sufficient uptake by astronauts to be economically viable. In elite endurance athletes, a mixture of glucose and fructose is used in gels, improving performance. Discussion: A combination of a non-caffeinated energy drink and plain water should be available to astronauts during EVA, allowing more autonomy. There should also be provision of gels, or a similar product, containing appropriate sodium levels to maintain hydration, but not so much as to hyperhydrate through renal water reabsorption. It is also suggested that short breaks be built into the schedule of EVAs for these gels to be consumed, as it is speculated that the reason for the low uptake of in-suit sustenance is the lack of time available in which to consume it.

Keywords: astronaut, nutrition, space, sport

Procedia PDF Downloads 124
767 Evaluation of Patients’ Quality of Life After Lumbar Disc Surgery and Movement Limitations

Authors: Shirin Jalili, Ramin Ghasemi

Abstract:

Lumbar microdiscectomy is the most commonly performed spinal surgery; it is regularly performed to alleviate the signs and symptoms of sciatica in the lower back and leg caused by a lumbar disc herniation. This surgery aims to relieve leg pain, restore function, and enable a return to ordinary daily activities. Rates of lumbar disc surgery show significant geographic variation, suggesting differing treatment criteria among operating surgeons. Few population-based studies have investigated the risk of reoperation after disc surgery, and regional or inter-specialty variations in reoperations are unknown. The conventional approach to recovering from lumbar microdiscectomy has been to restrict bending, lifting, or twisting for at least 6 weeks in order to prevent the disc from herniating again. Traditionally, patients were advised to limit post-operative activity, which was believed to decrease the risk of recurrent disc herniation and progressive instability. In modern practice, many surgeons do not limit patients' postoperative activity, owing to the perception that this practice is unnecessary. There is a lack of studies reporting outcomes, by means of different scores or parameters, after surgery for recurrent disc herniations of the lumbar spine at the initial herniation site. This study will evaluate quality of life after surgical treatment of recurrent herniations using different standardized, validated outcome instruments.

Keywords: post-operative activity, disc, quality of life, treatment, movements

Procedia PDF Downloads 78
766 Crystallization in the TeO2 - Ta2O5 - Bi2O3 System: From Glass to Anti-Glass to Transparent Ceramic

Authors: Hasnaa Benchorfi

Abstract:

Tellurite glasses exhibit interesting properties, notably a low melting point (700-900 °C), a high refractive index (≈2), high transparency in the infrared region (up to 5-6 μm), interesting linear and non-linear optical properties, and high rare-earth-ion solubility. These properties make tellurite glasses of great interest for various optical applications. Transparent ceramics present advantages over glasses, such as improved mechanical, thermal and optical properties, but the elaboration of these ceramics requires complex sintering conditions. The full crystallization of glass into a transparent ceramic is an alternative that circumvents the technical challenges of ceramics obtained by conventional processing. In this work, a crystallization study of a specific glass composition in the TeO2-Ta2O5-Bi2O3 system shows structural transitions upon heating, from the glass, to the stabilization of a previously unreported anti-glass phase, to a transparent ceramic. An anti-glass is a material with cationic long-range order and a disordered anion sublattice; thus, its X-ray diffraction patterns show sharp peaks, while its Raman bands are broad and similar to those of the parent glass. The structure and microstructure of the anti-glass and the corresponding ceramic were characterized by powder X-ray diffraction, electron backscatter diffraction, transmission electron microscopy and Raman spectroscopy. The optical properties of the Er3+-doped samples are also discussed.

Keywords: glass, congruent crystallization, anti-glass, glass-ceramic, optics

Procedia PDF Downloads 77
765 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often heavy emitters of Greenhouse Gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions in an intra-national framework, with the objective of distributional equity, to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some incipient considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as they behaved in the past. Furthermore, we assume that once a micro-region has become developed, it is able to maintain gains in human development without the need for continued growth in GHG emission rates. The human development index and the carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they become developed and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce emissions. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010.
On the other hand, less developed micro-regions will be responsible for less impactful reductions; Vale do Ipanema, for instance, will emit in 2050 only 10% less than the value observed in 2010. This methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would fall by 130 Gt CO2e relative to the initial projection. Linking the magnitude of the reductions to the micro-regions' level of human development encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlights the importance of considering heterogeneity in determining individual mitigation targets and also ratifies the theoretical and methodological feasibility of allocating a larger share of the contribution to those who historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
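The extrapolation step can be sketched as below. This is a hedged toy model: the growth rates, the "developed" threshold of 0.8, and the compound-growth form are illustrative assumptions, not the study's fitted micro-regional trajectories.

```python
def year_developed(hdi0, growth, threshold=0.8, start=2010, end=2050):
    """Grow a micro-region's HDI at a constant annual rate from its
    2010 value and report the first year it crosses the 'developed'
    threshold, or None if it does not cross by 2050."""
    hdi = hdi0
    for year in range(start, end + 1):
        if hdi >= threshold:
            return year
        hdi *= 1.0 + growth
    return None

# A near-developed region crosses quickly; a slow-growing one never does.
print(year_developed(0.75, 0.005), year_developed(0.60, 0.002))
```

Emissions accumulated before each region's crossing year versus after it would then be summed nationally, mirroring the 50 Gt / 250 Gt split reported above.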

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 315
764 Art, Nature, and City in the Construction of Contemporary Public Space

Authors: Rodrigo Coelho

Abstract:

We believe that in the majority of the recent production of public space, the overvaluation of the "image", of the "ephemeral" and of the "objectual" has come to determine the configuration of banal and (more or less) arbitrary "public spaces", mostly linked to a problem of "outdoor decoration" and reflecting a clear sign of uncertainty and arbitrariness about the meaning, role and shape of public space and public art. This "inconsistency", which is essentially linked to the loss of the urban, but also social, cultural and political, vocation of the disciplines that shape urban space (and also to the lack of urban and technical culture of technicians and policy makers), has converted a significant set of the recently built "public space" and "urban art" into diffuse and multi-referenced pieces, which generally share the inability to confer civic, aesthetic, social and symbolic meanings on urban space. In this sense, we consider it essential to undertake a theoretical reflection on the values, the meaning(s) and the shape(s) that open space and urban art may (or must) take in the current urban and cultural context, in order to restore to public space its status as a significant physical reference, able to embody a spatial and urban identity and simultaneously enable the collective adoption and appropriation of public space. Taking as reference public space interventions built in the last decade in the European context, we will seek to explore and defend the need to consider public space as a true place of exception, an exceptional support where the emphasis is placed on the quality of the experience, especially through the relations that public space and urban art can establish with the city, with nature and with geography in a broad sense, referring us back to a close, inseparable and timeless relationship between nature and culture.

Keywords: art, city, nature, public space

Procedia PDF Downloads 447
763 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today, distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages. At that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin word destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectifactorium. The term rectification is derived from the Latin words rectefacere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing separation between the vapor of the most volatile component and the liquid of the other component(s). We developed a simple non-linear model of a binary distillation column using the Skogestad equations in Simulink, and computed the steady-state operating point around which to base our analysis and controller design. However, the model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where we use D to control M_D and B to control M_B; such a model is given in cola_lv.m, where we have used two P-controllers with gains equal to 10.

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 273
762 Cost Reduction Techniques for Provision of Shelter to Homeless

Authors: Mukul Anand

Abstract:

Quality-oriented affordable shelter for all has always been the key issue in the housing sector of our country. Homelessness is the most acute form of housing need. It is a paradox that, in spite of innumerable government-initiated programmes for affordable housing, a certain section of society is still devoid of shelter. About nineteen million (18.78 million) households grappled with housing shortage in urban India in 2012. In the Indian scenario, there is a major mismatch between the people for whom the houses are being built and those who need them. The prime obstacle faced by public authorities in facilitating quality housing for all is the high cost of construction. The present paper will examine executable techniques for diluting the cost factor in housing the homeless. The key factors responsible for delivery of cheap housing stock, such as capacity building, resource optimization, innovative low-cost building materials and indigenous skeleton housing systems, will also be incorporated in developing these techniques. Time performance, an important aspect of the above factors, will also be explored so as to increase the effectiveness of low-cost housing. Along with this, best practices will be taken up as case studies in which both conventional housing techniques and innovative low-cost housing techniques are cited. Transportation constitutes approximately 30% of the total construction budget; thus, the use of alternative local solutions depending upon the region will be covered so as to highlight major components of low-cost housing. The government lacks baseline information on the use of innovative low-cost methods and techniques of resource optimization. Therefore, the paper is an attempt to bring to light simpler solutions for achieving low-cost housing.

Keywords: construction, cost, housing, optimization, shelter

Procedia PDF Downloads 443
761 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

Biogeochemical studies of macrophytes shed light on element bioavailability, transfer through food webs and possible effects on the biota, and provide a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, submerged Ceratophyllum demersum (hornwort) and free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the element stock accumulated in each plant, and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas of Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from squares of 1x1 meters each where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on 1 square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the contents of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn were determined using atomic absorption spectrometry (AAS). Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation Factors (BCF = content in plant/concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit. Based on BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. A fraction of the Ca and K was actively transported to the stems, but only Mg to the leaves. Although the biomass of C. demersum was two times greater than that of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may have an influence on the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, also on the incorporation of toxic elements into food chains. There was a significant positive correlation between the contents of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). High bioaccumulation rates and the correlation between plant and water element concentrations point to their possible use as passive biomonitors of aquatic pollution.
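The bioaccumulation factor used above is a simple ratio of tissue content to water concentration. A minimal sketch, with made-up numbers purely for illustration (not the study's data):

```python
def bioaccumulation_factor(plant_content, water_concentration):
    """BCF = element content in plant / element concentration in water.
    The two quantities must be expressed in compatible units."""
    return plant_content / water_concentration

# Hypothetical example: Cu content in hornwort tissue vs. Cu in lake water
bcf_cu = bioaccumulation_factor(plant_content=12.0, water_concentration=0.004)
```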

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 182
760 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively.
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.
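The scaled ordinal encoding described above can be sketched as follows. The site order matches the text's best-performing scheme; the helper names and the stand-in feature vector are assumptions for illustration, not the authors' code:

```python
import numpy as np

# The six AVF recording sites in the order used for ordinal encoding.
SITES = ["artery", "arch", "proximal", "middle", "distal", "anastomosis"]

def encode_location(site, scale=100):
    """Ordinal (integer) encoding scaled by a constant: 0, 100, 200, ..."""
    return SITES.index(site) * scale

def append_location(features, site, scale=100):
    """Concatenate the scaled location code to a flattened feature vector."""
    return np.concatenate([features.ravel(), [encode_location(site, scale)]])

feats = np.zeros(8)                        # stand-in for a flattened ViT feature vector
x = append_location(feats, "anastomosis")  # last element carries the location code
```

Changing `scale` from 1 to 10 to 100 reproduces the three encoding schemes compared in the abstract without touching the rest of the pipeline.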

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 84
759 Concrete Mix Design Using Neural Network

Authors: Rama Shanker, Anil Kumar Sachan

Abstract:

The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce a concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors governing the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. The concrete mix design method is based on an experimentally evolved empirical relationship between these factors. The basic drawbacks of this method are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, the attainment of the desired strength is uncertain, and the mix may fall below the target strength and even fail. To solve this problem, a large number of cubes of standard grades were prepared, and their 28-day strengths were determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated; it was seen that it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
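The mapping such an ANN learns can be sketched as a one-hidden-layer feed-forward pass from coded mix-design factors to ingredient proportions. The weights below are random stand-ins (a real model would be trained by back-propagation on the cube-test data), and the factor codes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    """One feed-forward pass: coded design factors -> ingredient proportions."""
    h = np.tanh(x @ w1 + b1)   # hidden layer with tanh activation
    return h @ w2 + b2          # linear output layer

# 5 coded input factors -> 8 hidden units -> 4 ingredient proportions
n_in, n_hidden, n_out = 5, 8, 4
w1 = rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, n_out)); b2 = np.zeros(n_out)

# Hypothetical codes: grade, cement type, aggregate size/shape/grading
x = np.array([30.0, 1.0, 20.0, 2.0, 1.0])
proportions = forward(x, w1, b1, w2, b2)   # cement, fine agg., coarse agg., water
```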

Keywords: aggregate proportions, artificial neural network, concrete grade, concrete mix design

Procedia PDF Downloads 387
758 Older Adult Grandparents' Voices as a Principal Caregiver in a Skipped-Generation Family

Authors: Kerdsiri Hongthai, Darunee Jongudomkarn, Rutja Phuphaibul

Abstract:

In Thailand, many adults in rural areas migrate to seek employment, resulting in skipped-generation families in which grandparents care for grandchildren with no other adults present. This is a preliminary study using qualitative case study methods, aimed at exploring older adult grandparents' experiences in skipped-generation families in North-East Thailand. Data were collected through in-depth interviews with 6 grandparents living in skipped-generation families (5 female and 1 male, aged 62-75, some of whom have diabetes mellitus or hypertension) during November and December 2017. The emergent themes are: ‘Caught up in the middle’: the older adults were pleased to have grandchildren but, at the same time, acknowledged the burden that this placed on them, especially when the migrant children failed to send enough money back to support the family. ‘Getting bad health’: they reported being fatigued and stressed due to the burden of caring for their grandchildren without support. This situation can aggravate problems of poor health status and worsen the economic status of the grandparents. In some cases of deprivation, the grandparents feel that having to be the sole care providers of their grandchildren can adversely affect their mental status. It is important to examine other settings similar to Thailand and to pursue more in-depth research to answer questions about policy and social support for skipped-generation families in the future.

Keywords: older adult grandparents, experiences, principal caregiver, skipped-generation family

Procedia PDF Downloads 142
757 Enabling Cloud Adoption Based Secured Mobile Banking through Backend as a Service

Authors: P. S. Jagadeesh Kumar, S. Meenakshi Sundaram

Abstract:

With increasing non-traditional competition, mobile banking faces an ever-changing commercial backdrop. Customer demands have become more intricate, as customers request more convenience and control over their banking services. To drive advancement and modernization in mobile banking applications, it is increasingly necessary to leapfrog the competition through business model transformation. The dramatic changes taking place in mobile banking demand new ways of providing security. By reforming and transforming older back offices into integrated mobile banking applications, banks can create a flexible and agile banking environment that can rapidly respond to new business requirements via cloud computing. Cloud computing is transforming ecosystems in numerous industries, and mobile banking is no exception, offering service innovation, greater flexibility, improved security, and enhanced business intelligence at lower cost. Cloud technology offers secure deployment possibilities that can help banks develop new customer experiences, strengthen operational relationships, and improve the speed and efficiency of banking transactions. Cloud adoption is escalating quickly, since commercial mobile banking transactions can be secured through backend as a service by scrutinizing the security strategies of the cloud service provider, along with the history of transaction details and the provider's security-related practices.

Keywords: cloud adoption, backend as a service, business intelligence, secured mobile banking

Procedia PDF Downloads 252
756 Evaluation of Nematicidal Action of Some Botanicals on Plant-Parasitic Nematode

Authors: Lakshmi, Yakshita Awasthi, Deepika, Lovleen Jha, Archna Kumar

Abstract:

For centuries, plant-parasitic nematodes (PPN) have been recognized as a major threat to agriculturalists globally, causing 21.3% of global food loss annually. The use of harmful chemical pesticides to minimize nematode populations may cause acute and delayed health hazards to humans. In recent years, a variety of plants have been evaluated for their nematicidal properties and efficacy in the management of plant-parasitic nematodes. Several phyto-nematicides are available, but most of them are incapable of sustainable management of PPN, especially Meloidogyne spp. Thus, there is a great need for new eco-friendly, highly efficient, sustainable control measures for this nematode species. Keeping these facts in view and after reviewing the literature, aqueous extracts of Cymbopogon citratus, Tagetes erecta, and Azadirachta indica were prepared with distilled water (1 g of sample mixed with 10 ml of water). In vitro studies were conducted to evaluate the efficacy of the targeted botanicals against the PPN Meloidogyne spp. The mortality status of the PPN was recorded by counting the live and dead individuals after applying 100 μl of the selected extract. The impact was observed at different time durations, i.e., 24 h and 48 h. The results showed that the highest mortality, 100%, was reached at 48 h with all three extracts. Thus, these extracts, with the addition of a suitable shelf-life enhancer, may be exploited in different nematode control programs as an economical, sustainable measure.

Keywords: Meloidogyne, Cymbopogon citratus, Tagetes erecta, Azadirachta indica, nematicidal

Procedia PDF Downloads 160
755 Mitigation Measures for the Acid Mine Drainage Emanating from the Sabie Goldfield: Case Study of the Nestor Mine

Authors: Rudzani Lusunzi, Frans Waanders, Elvis Fosso-Kankeu, Robert Khashane Netshitungulwana

Abstract:

The Sabie Goldfield has a history of gold mining dating back more than a century. Acid mine drainage (AMD) from the Nestor mine tailings storage facility (MTSF) poses a serious threat to the nearby ecosystem, specifically the Sabie River system. This study aims at developing mitigation measures for the AMD emanating from the Nestor MTSF using materials from the Glynns Lydenburg MTSF. The Nestor MTSF (NM) and the Glynns Lydenburg MTSF (GM) each provided about 20 kg of bulk composite samples, from which two mixtures were created: MIX-A contains 25 wt% GM and 75 wt% NM, while MIX-B contains 50 wt% NM and 50 wt% GM. The same static tests, i.e., acid-base accounting (ABA), net acid generation (NAG), and the acid buffering characteristics curve (ABCC), used to estimate the acid-generating probabilities of samples NM and GM were applied to MIX-A and MIX-B. Furthermore, the mineralogy of the Nestor MTSF samples consists of the primary acid-producing mineral pyrite as well as the secondary minerals ferricopiapite and jarosite, which are common under acidic conditions. The Glynns Lydenburg MTSF samples, on the other hand, contain the primary acid-neutralizing minerals calcite and dolomite. Based on the assessment conducted, materials from the Glynns Lydenburg MTSF are capable of neutralizing AMD from the Nestor MTSF. Therefore, the alkaline tailings materials from the Glynns Lydenburg MTSF can be used to rehabilitate the acidic Nestor MTSF.

Keywords: Nestor Mine, acid mine drainage, mitigation, Sabie River system

Procedia PDF Downloads 81
754 Feasibility Study of Friction Stir Welding Application for Kevlar Material

Authors: Ahmet Taşan, Süha Tirkeş, Yavuz Öztürk, Zafer Bingül

Abstract:

Friction stir welding (FSW) is a solid-state joining process that eliminates problems associated with material melting and solidification, such as the cracks, residual stresses and distortions generated during conventional welding. Among the most important advantages of FSW are easy automation, less distortion, lower residual stress and good mechanical properties in the joining region. FSW is a recent approach to metal joining and, although originally intended for aluminum alloys, it is being investigated for a variety of metallic materials. The basic concept of FSW is a rotating tool, made of non-consumable material, specially designed with a geometry consisting of a pin and a recess (shoulder). This tool is inserted, spinning on its axis, at the adjoining edges of the two sheets or plates to be joined, and then travels along the joining path line. The tool rotation axis defines an angle of inclination with respect to the components to be welded. This angle serves to receive the material being processed at the tool base and to promote the gradual forging effect imposed by the shoulder during the passage of the tool. It prevents plastic flow of material at the sides of the tool, ensuring weld closure behind the pin. In this study, two 4 mm Kevlar® plates, produced from Kevlar® fabrics, are analyzed with COMSOL Multiphysics in order to investigate their weldability via FSW. Thereafter, experimental investigations are carried out on an appropriate workbench in order to compare the results with the analysis.

Keywords: analytical modeling, composite materials welding, friction stir welding, heat generation

Procedia PDF Downloads 156
753 Providing Reliability, Availability and Scalability Support for Quick Assist Technology Cryptography on the Cloud

Authors: Songwu Shen, Garrett Drysdale, Veerendranath Mannepalli, Qihua Dai, Yuan Wang, Yuli Chen, David Qian, Utkarsh Kakaiya

Abstract:

Hardware accelerators have been a promising solution to reduce the cost of cloud data centers. This paper investigates QoS enhancement for the acceleration of an important datacenter workload: the web server (or proxy), which faces high computational consumption originating from secure sockets layer (SSL) or transport layer security (TLS) processing in the cloud environment. Our study reveals that for accelerator maintenance cases (the need to upgrade a driver or firmware, or a hardware reset due to a hardware hang), we can still provide cryptography services by switching to software during the maintenance phase and then switching back to the accelerator after maintenance. The switching is seamless to a server application such as Nginx running inside a VM on top of the server. To achieve this high-availability goal, we propose a comprehensive fallback solution based on Intel® QuickAssist Technology (QAT). The approach introduces an architecture that involves collaboration between the physical function (PF) and virtual function (VF), and among the VF, OpenSSL, and the web application Nginx. The evaluation shows that our solution can provide high reliability, availability, and scalability (RAS) for hardware cryptography services in a 7x24x365 manner in the cloud environment.
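The fallback idea above, routing crypto requests to a software path during accelerator maintenance and back afterward, transparently to the caller, can be sketched roughly as follows. The class and method names are invented for illustration; the real solution operates inside the QAT/OpenSSL stack, not in application code:

```python
class CryptoService:
    """Hypothetical sketch: one API, two backends, seamless switchover."""

    def __init__(self):
        self.accelerator_up = True

    def _hw_encrypt(self, data):
        return ("hw", data)            # stand-in for a hardware-offload call

    def _sw_encrypt(self, data):
        return ("sw", data)            # stand-in for the software fallback path

    def encrypt(self, data):
        # The caller sees the same API regardless of which path serves it.
        if self.accelerator_up:
            return self._hw_encrypt(data)
        return self._sw_encrypt(data)

    def begin_maintenance(self):
        self.accelerator_up = False    # e.g. firmware upgrade or device reset

    def end_maintenance(self):
        self.accelerator_up = True

svc = CryptoService()
path_before, _ = svc.encrypt(b"x")
svc.begin_maintenance()
path_during, _ = svc.encrypt(b"x")     # served in software, no request fails
svc.end_maintenance()
path_after, _ = svc.encrypt(b"x")      # back on the accelerator
```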

Keywords: accelerator, cryptography service, RAS, secure sockets layer/transport layer security, SSL/TLS, virtualization fallback architecture

Procedia PDF Downloads 151
752 A Research on the Benefits of Drone Usage in Industry by Determining Companies Using Drone in the World

Authors: Ahmet Akdemir, Güzide Karakuş, Leyla Polat

Abstract:

Aviation, which arose from humanity's innate desire to fly, has not only made life easier and contributed greatly to humanity; it has also accelerated globalization by reducing the distances between countries. Looking back, the growth rate of the aviation industry has reached previously undreamed-of levels. Today, the latest point in aviation is unmanned aerial vehicles, which are self-navigating and move to desired coordinates without any onboard pilot. For these vehicles, two different control systems have been developed. In the first type of control, an unmanned aerial vehicle (UAV) moves according to the instructions of a remote control. A UAV that moves with a remote control is called a drone; it can be used personally. In the second, a flight plan is programmed and placed inside the UAV before flight. Recently, drones have started to be used in previously unimagined areas and provide specific, important benefits for many industries. Within this framework, this study answers the question of whether drone usage would be beneficial for businesses. To answer this question, the basic methodology applied is to identify businesses using drones around the world and their purposes for doing so, and then to compare their economics before and after drone adoption. At the end of this study, it is seen that many companies in different business areas use drones for logistics support, and that this makes their work easier than before. This paper contributes to the academic literature on this subject and introduces the benefits of drone usage for businesses. In addition, it encourages businesses to keep pace with this technological age by following developments in drones.

Keywords: aviation, drone, drone in business, unmanned aerial vehicle

Procedia PDF Downloads 251
751 Motor Vehicle Accidents During Pregnancy: Analysis of Maternal and Fetal Outcome at a University Hospital

Authors: Manjunath Attibele, Alsawafi Manal, Al Dughaishi Tamima

Abstract:

Introduction: The purpose of this study was to describe the clinical characteristics and the types and mechanisms of injuries caused by motor vehicle accidents (MVA) during pregnancy, and to analyze the patterns of accidents during pregnancy and their adverse consequences on both maternal and fetal outcomes. Methods: This was a retrospective cohort study of pregnant patients who were involved in MVAs. The study period was from January 1, 2010, to December 31, 2019. All relevant data were retrieved from electronic patient records in the hospital information system and from the antenatal ward admission register. Results: Of the 168 women who had motor vehicle accidents during the study period, 39 (23.2%) were pregnant. Twenty-one (53.8%) of these women were over 30 years old. Thirty-five (89.7%) were Omanis, and 27 (69.2%) were in their third trimester. Twenty-three (59%) of the accidents happened at night, and 31 (79.5%) of them happened on a weekday. Twenty-two (56.4%) of the women were driving themselves, and 24 (61.5%) of them were not using a seatbelt. Accident-related abdominal and back pain was seen in 23 (59%) women. Regarding the outcome of pregnancy, 23 (74.2%) had a normal vaginal delivery. The mean accident-to-delivery interval was 7 weeks. Thirty (96.7%) of the involved newborns were relatively healthy. One woman (3.2%) had a ruptured uterus leading to fetal death (3.2%). Conclusion: This study showed that the incidence of motor vehicle accidents during pregnancy is around 23.2%. The majority had trauma-associated pain. There was one serious injury to a woman, causing a ruptured uterus which led to fetal death. The majority of the involved newborns were relatively healthy. There were no reported maternal deaths.

Keywords: motor vehicle accidents, pregnancy, maternal outcome, fetal outcome

Procedia PDF Downloads 88
750 Voice Liveness Detection Using Kolmogorov Arnold Networks

Authors: Arth J. Shah, Madhu R. Kamble

Abstract:

Voice biometric liveness detection certifies that the voice data presented in an authentication process is genuine and not a recording or synthetic voice. With the rise of deepfakes and other equally sophisticated spoofing generation techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a group of security measures that detect and prevent voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have been used for such classification tasks. We aim to capture not only the compositional structure of the model but also to optimize the values of the univariate functions. This study presents both a mathematical and an experimental analysis of KAN for VLD tasks, thereby opening a new perspective for scientists working on speech and signal processing tasks. This study combines traditional signal processing with new deep learning models, which proves to be a better combination for VLD tasks. The experiments are performed on the POCO and ASVSpoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, and used CNN, BiLSTM, and KAN as back-end classifiers. The best accuracy is 91.26% on the POCO database, using STFT features with the KAN classifier. On the ASVSpoof 2017 V2 database, the lowest EER we obtained was 26.42%, using CQT features with KAN as the classifier.
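One of the front-ends named above, a log-magnitude STFT spectrogram "image", can be sketched as follows. The synthetic tone and frame parameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)   # 1 s synthetic tone as stand-in audio

# STFT front-end: complex spectrogram, then dB-scaled magnitude
f, seg_times, Z = stft(x, fs=fs, nperseg=512)
log_spec = 20 * np.log10(np.abs(Z) + 1e-10)

# log_spec is the 2-D image a CNN/BiLSTM/KAN back-end would consume
```

For the 440 Hz tone, the time-averaged magnitude peaks in the frequency bin nearest 440 Hz, which is a quick sanity check on the front-end.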

Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection

Procedia PDF Downloads 36
749 Effects of Using Alternative Energy Sources and Technologies to Reduce Energy Consumption and Expenditure of a Single Detached House

Authors: Gul Nihal Gugul, Merih Aydinalp-Koksal

Abstract:

In this study, an hourly energy consumption model of a single detached house in Ankara, Turkey is developed using the ESP-r building energy simulation software. Natural gas is used for space heating, cooking, and domestic water heating in this two-story, 4500-square-foot, four-bedroom home. The hourly electricity consumption of the home is monitored by an automated meter reading system, and daily natural gas consumption was recorded by the owners during 2013. Climate data for the region and building envelope data are used to develop the model. The heating energy consumption of the house estimated by the ESP-r model is then compared with the actual heating demand to determine the performance of the model. Scenarios are applied to the model to determine the reduction in the total energy consumption of the house. The scenarios are: using photovoltaic panels to generate electricity, ground source heat pumps for space heating, and solar panels for domestic hot water generation. Alternative scenarios, such as improving wall and roof insulation and window glazing, are also applied. These scenarios are evaluated based on annual energy, associated CO2 emission, and fuel expenditure savings. The payback periods for each scenario are also calculated to determine the best alternative energy source or technology option for this home to reduce annual energy use and CO2 emissions.
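A simple payback period of the kind computed above is just the upfront cost of a scenario divided by its annual expenditure saving. A minimal sketch with invented figures (not the study's results):

```python
def simple_payback_years(capital_cost, annual_saving):
    """Simple payback period = upfront cost / annual fuel-expenditure saving.
    Ignores discounting and fuel-price escalation; a first-pass screen only."""
    return capital_cost / annual_saving

# Hypothetical photovoltaic scenario: cost and saving are made-up numbers
years = simple_payback_years(capital_cost=12000.0, annual_saving=1500.0)
```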

Keywords: ESP-r, building energy simulation, residential energy saving, CO2 reduction

Procedia PDF Downloads 194
748 The Concepts of Ibn Taymiyyah in Halal and Haram and Their Relevance to Contemporary Issues

Authors: Muhammad Fakhrul Arrazi

Abstract:

Ibn Taymiyyah is a great figure in Islam, whose works have become a reference for many Muslims in implementing the fiqh of Ibadah and Muamalat. This article reviews the concepts that Ibn Taymiyyah initiated regarding Halal and Haram, long before books on Halal and Haram were written by contemporary scholars. There are at least four concepts of Halal and Haram put forward by Ibn Taymiyyah. First, the belief of a jurist (Faqih) that a matter is Haram does not necessarily make the matter Haram; Haram arises from the Quran, Sunnah, Ijma’ and Qiyas as the tarjih. Due to the differing opinions among the ulama, we should revisit this concept. Second, if a Muslim engages in a transaction (Muamalat), believes it permissible, and obtains money from that transaction, then it is legal for other Muslims to transact with the property of this Muslim brother, even if they do not believe that the transactions made by their Muslim brother are permissible. Third, Haram is divided into two categories: the first is Haram because of the nature of an object, such as carrion, blood, and pork; if such an object is mixed with water or food and alters their taste, color, or smell, the food and water become Haram. The second is Haram because of the way it is obtained, such as a stolen item or a broken aqad; if it is mixed with halal property, the property does not automatically become Haram. Fourth, a treasure whose owner cannot be traced is to be used for the benefit of the ummah. This study used secondary data from the classic books of Ibn Taymiyyah, particularly those containing his views on Halal and Haram. The data were then analyzed using thematic and comparative approaches. It is found that most of the concepts proposed by Ibn Taymiyyah on Halal and Haram correspond to the majority views in the schools. However, some of his concepts are also contrary to those of other scholars. His concepts would benefit the ummah, should they be applied to contemporary issues.

Keywords: fiqh Muamalat, halal, haram, Ibn Taymiyyah

Procedia PDF Downloads 181
747 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by adding a renewal non-backtracking random walk (RNBRW) to an existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We make use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of the RNBRW edge weights.
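One pass of the RNBRW idea, walking without immediately backtracking and crediting the edge that closes a cycle, can be sketched as follows. This is a rough illustration on an adjacency-dict graph, not the authors' implementation:

```python
import random

def rnbrw_edge_weights(adj, n_walks=1000, seed=0):
    """Accumulate cycle-closing ("retracing") counts per undirected edge."""
    rng = random.Random(seed)
    weights = {}                 # edge (u, v) with u < v -> retracing count
    nodes = list(adj)
    for _ in range(n_walks):
        prev, cur = None, rng.choice(nodes)
        visited = {cur}
        while True:
            choices = [v for v in adj[cur] if v != prev]  # non-backtracking step
            if not choices:
                break                                     # dead end: restart walk
            nxt = rng.choice(choices)
            if nxt in visited:                            # cycle completed
                e = (min(cur, nxt), max(cur, nxt))
                weights[e] = weights.get(e, 0) + 1        # credit closing edge
                break                                     # terminate and restart
            visited.add(nxt)
            prev, cur = cur, nxt
    return weights

# Two triangles joined by the bridge (2, 3): cycle-closing edges concentrate
# inside the triangles, reflecting that cycles live within communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
w = rnbrw_edge_weights(adj)
```

On this toy graph the bridge edge collects no retracing weight, which is exactly the within-community signal the test above exploits.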

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 148