Search results for: particle size distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10582


6442 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with a certain probability p or imperfectly repaired with probability q. The system is analyzed using Kolmogorov's forward equations method; the analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. An example is provided for demonstration.
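For illustration, a minimal sketch of the kind of steady-state availability calculation described above, assuming a simple three-state Markov model (working, imperfect repair, replacement) with exponential rates; the model and all rate values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Minimal sketch (not the paper's exact model): a 3-state Markov chain
# 0 = working, 1 = under imperfect repair, 2 = under replacement.
# On failure (rate lam) the unit is replaced with probability p
# or imperfectly repaired with probability q = 1 - p.
lam, mu_r, mu_s, p = 0.01, 0.5, 0.2, 0.3   # illustrative exponential rates
q = 1.0 - p

# Generator matrix Q: each row sums to zero.
Q = np.array([
    [-lam,    q * lam,  p * lam],
    [ mu_r,  -mu_r,     0.0    ],
    [ mu_s,   0.0,     -mu_s   ],
])

# Steady-state probabilities solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]          # probability of being in the working state
print(f"steady-state availability = {availability:.4f}")
```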

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 272
6441 Fracture and Fatigue Crack Growth Analysis and Modeling

Authors: Volkmar Nolting

Abstract:

Fatigue crack growth prediction has become an important topic in both engineering and non-destructive evaluation. Crack propagation is influenced by the mechanical properties of the material and is conveniently modelled by the Paris-Erdogan equation. The critical crack size and the total number of load cycles are calculated. From a Larson-Miller plot, the maximum operational temperature for a given stress level can be determined so that failure does not occur within a given time interval t. The study is used to determine a reasonable inspection cycle and thus enhances operational safety and reduces costs.
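For illustration, a minimal sketch of a Paris-Erdogan crack growth calculation of the kind described above; the material constants, geometry factor, and loading are assumed placeholder values, not those of the study.

```python
import numpy as np

# Illustrative Paris-Erdogan integration; all constants below are assumed, not taken
# from the paper.  da/dN = C * (dK)^m with dK = Y * dSigma * sqrt(pi * a).
C, m, Y = 1e-11, 3.0, 1.12      # C in (m/cycle)/(MPa*sqrt(m))^m, geometry factor Y
dSigma = 100.0                  # stress range [MPa]
K_IC = 60.0                     # fracture toughness [MPa*sqrt(m)]
a0 = 1.0e-3                     # initial crack size [m]

# Critical crack size where K_max reaches the fracture toughness.
a_c = (K_IC / (Y * dSigma)) ** 2 / np.pi

# Total number of load cycles by numerically integrating dN = da / (C * dK^m).
a = np.linspace(a0, a_c, 20_000)
dK = Y * dSigma * np.sqrt(np.pi * a)
N = np.trapz(1.0 / (C * dK ** m), a)
print(f"a_c = {a_c * 1e3:.1f} mm, cycles to failure N = {N:.2e}")
```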

Keywords: fracture mechanics, crack growth prediction, lifetime of a component, structural health monitoring

Procedia PDF Downloads 24
6440 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English

Authors: Noury Bakrim

Abstract:

When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment, schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon problematizing the role of the brain and the sensorimotor contribution in the continuous forming form. Therefore, we hypothesize biotization as both process and trace co-constructing motivation/forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation, and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble), [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splaeʃ/ and /splaet/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces; splat, impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning): /ʃ/ is [+ strident] and /t/ is [+ distributed] on the vocal tract. Schematization is then a process relating both physiology and code as an arthron, a vocal/bodily, vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ at the initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different expressivity forms. We postulate two components of motivated formalization: i) the process of memory paradigmatization relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), ii) the process of phonotactic selection - prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.

Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization

Procedia PDF Downloads 131
6439 The Soliton Solution of the Quadratic-Cubic Nonlinear Schrodinger Equation

Authors: Sarun Phibanchon, Yuttakarn Rattanachai

Abstract:

Using Madelung's fluid picture, the quadratic-cubic nonlinear Schrodinger equation can describe weakly nonlinear ion-acoustic waves in a magnetized plasma with a slightly non-Maxwellian electron distribution. The soliton solution of the quadratic-cubic nonlinear Schrodinger equation is determined by direct integration. The solitonic character of this solution is confirmed by considering its time evolution and the collision between two such solutions. These results are obtained by applying the spectral method.
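For illustration, a minimal split-step Fourier (spectral method) sketch for a quadratic-cubic NLS of an assumed form i u_t + (1/2) u_xx + (c1|u| + c2|u|^2) u = 0; the coefficients and initial pulse are illustrative, not the authors' plasma parameters.

```python
import numpy as np

# Split-step Fourier sketch for an assumed quadratic-cubic NLS:
#   i u_t + (1/2) u_xx + (c1*|u| + c2*|u|^2) u = 0
c1, c2 = 1.0, -0.5
L, N, dt, steps = 40.0, 512, 1e-3, 5000

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers
u = 1.5 / np.cosh(x)                              # sech-like initial pulse (illustrative)

for _ in range(steps):
    # half-step of the nonlinear part (exact phase rotation)
    u *= np.exp(0.5j * dt * (c1 * np.abs(u) + c2 * np.abs(u) ** 2))
    # full linear (dispersive) step in Fourier space
    u = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(u))
    # second half-step of the nonlinear part
    u *= np.exp(0.5j * dt * (c1 * np.abs(u) + c2 * np.abs(u) ** 2))

print("final pulse amplitude:", np.max(np.abs(u)))
```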

Keywords: soliton, ion-acoustic waves, plasma, spectral method

Procedia PDF Downloads 397
6438 Effects of Surface Topography on Roughness of Glazed Ceramic Substrates

Authors: R. Sarjahani, M. Sheikhattar, S. Javadpour, B. Hashemi

Abstract:

Glazes and their surface characterization are an important subject for the ceramics industry. Fabrication of a super-smooth, stain-resistant surface would be a significant improvement for that industry. In this investigation, the surface topography of popular glazes such as zircon- and titania-based opaque glazes, a calcium-based matte glaze, and a transparent glaze has been analyzed by Marsurf M300, SEM, EDS, and XRD. The results show that the surface roughness of glazes depends strongly on surface crystallinity, crystal size, and crystal shape.

Keywords: crystallinity, glaze, surface roughness, topography

Procedia PDF Downloads 546
6437 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach

Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert

Abstract:

Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition by proposing a switch from a centralized district heating system towards a distributed heat pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is related to the fact that existing networks are limited with regard to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, has to do with the fact that the indoor comfort conditions can become difficult to handle when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted. In this way, each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used for modeling the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models set in place, different scenarios based on forecasted electricity market prices were developed both for present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and the comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios related to the current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps is limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, the district heating system is characterized by an average cost of heat generation 7% higher compared to the distributed heat pumps option.
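For illustration, a minimal levelized-cost-of-heat sketch of the kind of indicator used above; the cost figures, discount rate, and lifetimes are made-up placeholders, not results of the cosimulation study.

```python
# Minimal levelized-cost-of-heat sketch; all numbers are illustrative assumptions,
# not results from the cosimulation study.
def lcoh(capex, opex_per_year, annual_heat_mwh, discount_rate, lifetime_years):
    """Levelized cost of heat per MWh, using a capital recovery factor."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / \
          ((1 + discount_rate) ** lifetime_years - 1)
    return (capex * crf + opex_per_year) / annual_heat_mwh

# Hypothetical comparison of a district-heating connection vs. a distributed heat pump.
print("district heating :", round(lcoh(capex=400_000, opex_per_year=60_000,
                                        annual_heat_mwh=1_500, discount_rate=0.05,
                                        lifetime_years=25), 1), "per MWh")
print("heat pump        :", round(lcoh(capex=650_000, opex_per_year=45_000,
                                        annual_heat_mwh=1_500, discount_rate=0.05,
                                        lifetime_years=20), 1), "per MWh")
```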

Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems

Procedia PDF Downloads 138
6436 The Influence of Sulfate and Magnesium Ions on the Growth Kinetics of CaCO3

Authors: Kotbia Labiod, Mohamed Mouldi Tlili

Abstract:

Different mineral salts present in natural waters may precipitate and form hard deposits in water distribution systems. In this respect, we have carried out numerous studies on scaling by Algerian water with a very high hardness of 102 °F. The aim of our work is to study the influence of water dynamics and mineral salt composition on the precipitation of calcium carbonate (CaCO3). To achieve this objective, we adopted two precipitation techniques based on controlled degassing of dissolved CO2. This study will identify the causes of this complex phenomenon and provide answers to it.

Keywords: calcium carbonate, controlled degassing, precipitation, scaling

Procedia PDF Downloads 212
6435 Mobile Phone Text Reminders and Voice Call Follow-ups Improve Attendance for Community Retail Pharmacy Refills; Learnings from Lango Sub-region in Northern Uganda

Authors: Jonathan Ogwal, Louis H. Kamulegeya, John M. Bwanika, Davis Musinguzi

Abstract:

Introduction: Community retail pharmacy drug distribution points (CRPDDP) were implemented in the Lango sub-region as part of the Ministry of Health’s response to improving access and adherence to antiretroviral treatment (ART). Clients received their ART refills from nearby local pharmacies; hence the need for continuous engagement through mobile phone appointment reminders and health messages. We share learnings from the implementation of mobile text reminders and voice call follow-ups among ART clients attending the CRPDDP program in northern Uganda. Methods: A retrospective data review of electronic medical records from four pharmacies allocated for CRPDDP in the Lira and Apac districts of the Lango sub-region in Northern Uganda was done from February to August 2022. The process involved collecting phone contacts of eligible clients from the health facility appointment register and uploading them onto a messaging platform customized by RapidPro, an open-source software. Client information, including code name, phone number, next appointment date, and the allocated pharmacy for ART refill, was collected and kept confidential. Contacts received appointment reminder messages and other messages on positive living as an ART client. Routine voice call follow-ups were done to ascertain the picking of ART from the refill pharmacy. Findings: In total, 1,354 clients were reached from the four allocated pharmacies, all located in urban centers. 972 clients received short message service (SMS) appointment reminders, and 382 were followed up through voice calls. The majority (75%) of the clients returned for refills on the appointed date, 20% returned within four days after the appointment date, and the remaining 5% needed follow-up; these reported that they were not in the district on the appointment date due to other engagements. Conclusion: The use of mobile text reminders and voice call follow-ups improves the attendance of community retail pharmacy refills.

Keywords: antiretroviral treatment, community retail drug distribution points, mobile text reminders, voice call follow-up

Procedia PDF Downloads 88
6434 Issues in Travel Demand Forecasting

Authors: Huey-Kuo Chen

Abstract:

Travel demand forecasting, which comprises four travel choices, i.e., trip generation, trip distribution, modal split, and traffic assignment, constitutes the core of transportation planning. In its current application, travel demand forecasting is associated with three important issues, i.e., interface inconsistencies among the four travel choices, inefficiency of commonly used solution algorithms, and undesirable multiple path solutions. In this paper, each of the three issues is extensively elaborated. An ideal unified framework for the combined model consisting of the four travel choices and variable demand functions is also suggested. A few remarks are then provided at the end of the paper.
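For illustration, a minimal sketch of one of the four travel choices (trip distribution) using a textbook doubly constrained gravity model with Furness balancing; it is not the combined model proposed above, and all inputs are illustrative.

```python
import numpy as np

# Doubly constrained gravity model for trip distribution (illustrative zones and costs).
P = np.array([400.0, 300.0, 300.0])     # productions per zone
A = np.array([500.0, 250.0, 250.0])     # attractions per zone
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])      # travel impedance between zones
f = np.exp(-0.5 * cost)                 # deterrence function

T = np.outer(P, A) * f
for _ in range(100):                    # Furness / iterative proportional fitting
    T *= (P / T.sum(axis=1))[:, None]   # match row totals (productions)
    T *= (A / T.sum(axis=0))[None, :]   # match column totals (attractions)

print(np.round(T, 1))                   # balanced origin-destination trip matrix
```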

Keywords: travel choices, B algorithm, entropy maximization, dynamic traffic assignment

Procedia PDF Downloads 438
6433 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land-use factor, but also for suffering from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, and this often outweighs its lower efficiency. The LFR has been found to be a promising option for directly producing steam for a thermal cycle in order to generate low-cost electricity, but it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTF) to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. As a result of optimizing the solar field size, the solar multiple (SM) is found to be between 1.2 and 1.5 in order to achieve an LCOE as low as 9 cents/kWh for direct steam generation with the linear Fresnel reflector. In addition, the power plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% lower capital cost.

Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation

Procedia PDF Downloads 94
6432 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film

Authors: Nalla Somaiah, Praveen Kumar

Abstract:

Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot-spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated with continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. It is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature gradient induced mass transport was studied using the standard Blech structure. In this particular sample structure, the majority of the current is forcefully directed into the low-resistivity metallic film from a high-resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu film was deposited on a 30 nm thick W underlayer film in the Blech structure configuration. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150 and 200 μm, was fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250ºC. Along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, the smaller-length samples consistently showed enhanced migration at the cathode end, thus indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model capturing the interplay between the electromigration and thermomigration driving forces has been developed to explain this observation.
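For illustration, a back-of-the-envelope check of the classic Blech criterion at the current density used above; the critical (j x L) product is an assumed illustrative value, not a measured one for this Cu/W stack.

```python
# Back-of-the-envelope Blech criterion check; the critical (j*L) product is an
# assumed illustrative value, not a measured one for this Cu/W stack.
jL_crit = 3.7e3                    # assumed critical current-density-length product [A/cm]
j = 4.0e10 / 1.0e4                 # current density used in the study: 4e10 A/m^2 -> A/cm^2
L_crit_um = jL_crit / j * 1.0e4    # classic critical ("Blech") length in micrometres
print(f"classic Blech length at j = 4e10 A/m^2: ~{L_crit_um:.1f} um")

# Strip lengths fabricated in the study, compared against the critical length.
for L_um in (10, 20, 50, 100, 150, 200):
    print(f"  L = {L_um:3d} um -> L / L_crit = {L_um / L_crit_um:.1f}")
```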

Keywords: Blech structure, electromigration, temperature gradient, thin films

Procedia PDF Downloads 242
6431 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of small convolution kernels of constant size throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of only one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application in a variety of visual tasks based on the CNN framework. Acknowledgement—This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
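For illustration, a minimal sketch of the layer idea described above: fixed random kernels of several sizes, each associated with a single scalar weight that would be the only trainable unknown; the sizes, shapes, and toy input are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

# Fixed random kernels of several sizes, each tied to one scalar weight
# (only the scalars would be learned; the kernels stay frozen).
rng = np.random.default_rng(0)
kernel_sizes = [3, 5, 7, 11]                    # multiple scales in one layer
kernels = [rng.standard_normal((s, s)) / s for s in kernel_sizes]
weights = np.ones(len(kernels))                 # the only unknowns of this layer

def random_kernel_layer(image, kernels, weights):
    """Weighted sum of responses to fixed random filters of varying size."""
    responses = [convolve2d(image, k, mode="same", boundary="symm") for k in kernels]
    return sum(w * r for w, r in zip(weights, responses))

image = rng.standard_normal((32, 32))           # stand-in for an input feature map
out = np.maximum(random_kernel_layer(image, kernels, weights), 0.0)  # ReLU
print(out.shape)                                # (32, 32)
```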

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 272
6430 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data

Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan

Abstract:

Clinical data analysis and forecasting have made great contributions to disease control, prevention, and detection. However, such data usually suffer from highly unbalanced samples in their class distributions. In this paper, we target binary imbalanced datasets, where the positive samples make up only a minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine both of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former method are not scalable to larger data scales, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets. We also find it more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, leading to more credible classifier performance and shortening the running time compared with the brute-force method.
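For illustration, a minimal sketch of tuning two key SMOTE parameters on an imbalanced binary dataset with the imbalanced-learn package; a plain grid search stands in for the swarm optimizers (PSO / bat algorithm) used in the paper, and the synthetic data and parameter ranges are illustrative.

```python
# Grid search over two SMOTE parameters as a stand-in for the swarm optimizers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

best = (None, -1.0)
for k_neighbors in (3, 5, 7):                    # two key SMOTE parameters
    for sampling_strategy in (0.3, 0.5, 1.0):
        pipe = Pipeline([
            ("smote", SMOTE(k_neighbors=k_neighbors,
                            sampling_strategy=sampling_strategy, random_state=0)),
            ("clf", RandomForestClassifier(random_state=0)),
        ])
        score = cross_val_score(pipe, X_tr, y_tr, scoring="f1", cv=3).mean()
        if score > best[1]:
            best = ((k_neighbors, sampling_strategy), score)

print("best (k_neighbors, sampling_strategy):", best[0], "cv F1:", round(best[1], 3))
```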

Keywords: imbalanced dataset, meta-heuristic algorithm, SMOTE, big data

Procedia PDF Downloads 428
6429 Method to Calculate the Added Value in Supply Chains of Electric Power Meters

Authors: Andrey Vinajera-Zamora, Norge Coello-Machado, Elke Glistau

Abstract:

The objective of this research is to calculate the added value in the operations of electric power meter (EPM) supply chains, specifically for 220 V EPMs. The tool used is composed of six steps and allows identifying the calibration of EPMs as the bottleneck operation according to the net added value, this also being the process with the highest added value. On the other hand, this methodology allows calculating the amount of money needed to buy the raw material. The main conclusions relate to the way of analyzing and calculating the added value in a supply chain integrated by the procurement, production, and distribution echelons, or any of these.

Keywords: economic value added, supply chain management, value chain, bottleneck detection

Procedia PDF Downloads 284
6428 SOI-Multi-FinFET: Impact of Fins Number Multiplicity on Corner Effect

Authors: A.N. Moulay Khatir, A. Guen-Bouazza, B. Bouazza

Abstract:

The SOI-Multifin-FET shows excellent transistor characteristics: an ideal sub-threshold swing, low drain-induced barrier lowering (DIBL) without pocket implantation, and negligible body bias dependency. In this work, we analyzed this structure with a three-dimensional numerical device simulator to investigate the influence of the number of fins on the corner effect, by analyzing the electrical characteristics and the potential distribution in the oxide and the silicon in the section perpendicular to the current flow for single-fin, three-fin, and five-fin SOI FinFETs, and we provide a comparison with a tri-gate SOI Multi-FinFET structure.

Keywords: SOI, FinFET, corner effect, dual-gate, tri-gate, Multi-Fin FET

Procedia PDF Downloads 458
6427 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to recognize the possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in the work. These patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the real behavior of the masticatory system. The finite element mesh and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of an initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars, and the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of the stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations show that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a visible strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of the mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for a better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 233
6426 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are widely regarded as the most favorable engines in terms of efficiency, reliability, and adaptability. Most research and development up till now has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirement is altogether different. These types of engines are mostly used in the maritime industry, the agriculture industry, static engines, compressor engines, etc. By contrast, high torque low speed engines are quite often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process in high torque low speed diesel engines is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results obtained showed a considerable effect on the efficiency of the engine. The evaporation rate of the fuel droplets increases with the increase in vapor pressure. An appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases. This increase in efficiency is due to the fact that the droplet size is reduced and the vapor pressure is increased in the engine cylinder.

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 323
6425 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution to the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of the chamber temperature from set target values, sends these deviations (which act as disturbances in the system) back as a feedback input signal, and controls the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, having the chamber temperature as a feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the design of the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range of 222-273 ft/min and fuel feeding rate in the range of 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
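For illustration, a minimal sketch of the temperature feedback idea in plain Python (the actual implementation described above is PLC ladder logic); the plant response, setpoint, and gains are illustrative assumptions, with the actuator limits taken from the ranges quoted above.

```python
# Simple error-driven feedback on chamber temperature, adjusting air velocity and
# fuel feeding rate within the study's ranges. Plant model and gains are assumed.
setpoint = 750.0        # assumed target chamber temperature [deg C]
temp = 600.0            # assumed initial chamber temperature [deg C]
air_velocity = 240.0    # ft/min, within the 222-273 ft/min range
fuel_rate = 75.0        # rpm, within the 60-90 rpm feeder range

for step in range(120):
    error = setpoint - temp                                   # feedback signal
    air_velocity = min(273.0, max(222.0, air_velocity + 0.05 * error))
    fuel_rate = min(90.0, max(60.0, fuel_rate + 0.02 * error))
    # crude first-order plant model of the chamber's response (assumed, not measured)
    temp += 0.1 * (6.0 * fuel_rate + 1.2 * air_velocity - temp)

print(f"temperature after 120 steps: {temp:.1f} C (residual error {setpoint - temp:+.1f} C)")
```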

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 118
6424 Miniaturization of I-Slot Antenna with Improved Efficiency and Gain

Authors: Mondher Labidi, Fethi Choubani

Abstract:

In this paper, a novel antenna miniaturization technique using an I-slot is proposed. Using this technique, the antenna gain is increased from 4 dB (antenna only) to 6.6 dB for the proposed I-slot antenna, and a frequency shift of about 0.45 GHz to 1 GHz is obtained. A reduction of the antenna size (about 38%) is also achieved for operation in the Wi-Fi (2.45 GHz) band. Moreover, the frequency shift can be controlled by changing the position or the length of the I-slot. Finally, the proposed miniature antenna, with improved radiation efficiency and gain, was built and tested.

Keywords: slot antenna, miniaturization, RF, electrical equivalent circuit (EEC)

Procedia PDF Downloads 269
6423 Environmental Impact Assessment of Ambient Particles from an Industrial Complex upon Vegetation Settling Nearby at El-Fatyah, Libya

Authors: Ashraf M. S. Soliman, Mohsen Elhasadi

Abstract:

The present study was undertaken to evaluate the impact of ambient particles emitted from an industrial complex located at El-Fatyah on the growth, phytomass partitioning and accumulation, pigment content, and nutrient uptake of two economically important crop species growing in the region: barley (Hordeum vulgare L., Family: Poaceae) and broad bean (Vicia faba L., Family: Fabaceae). It was obvious from the present investigation that chlorophyll and carotenoid content showed significant responses to the industrial dust. Generally, the total pigment content of the two investigated crops in the two locations continually increased until the plant age reached 70 days after sowing and then began to decrease until the end of the growing season. The total uptake of N, P and K in the two studied species decreased in response to industrial dust in the study area compared to the control location. In conclusion, barley and broad bean are very sensitive to air pollutants and may be considered bioindicators of atmospheric pollution. Pollutants caused leaf damage, impaired plant growth, hindered nutrient uptake, and consequently limited primary productivity.

Keywords: Effect of Industrial Complex on barley and broad bean

Procedia PDF Downloads 520
6422 Smartphone-Based Human Activity Recognition by Machine Learning Methods

Authors: Yanting Cao, Kazumitsu Nawata

Abstract:

As smartphones are upgraded, their software and hardware are getting smarter, so smartphone-based human activity recognition can be described as more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector with time and frequency domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier has been trained.
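For illustration, a minimal sketch of a feature-selection plus SVM pipeline of the kind described above; synthetic data stands in for the 561-feature smartphone dataset, and the parameter grid is illustrative.

```python
# Feature selection followed by an SVM, tuned with a small grid search.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=561, n_informative=60,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=100)),   # prune the intractable features
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": ["scale", 0.01]},
                    cv=3, n_jobs=-1)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_, " test accuracy:", grid.score(X_te, y_te))
```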

Keywords: smart sensors, human activity recognition, artificial intelligence, SVM

Procedia PDF Downloads 132
6421 A Study on Aquatic Bycatch Mortality Estimation Due to Prawn Seed Collection and Alteration of Collection Method through Sustainable Practices in Selected Areas of Sundarban Biosphere Reserve (SBR), India

Authors: Samrat Paul, Satyajit Pahari, Krishnendu Basak, Amitava Roy

Abstract:

Fishing is one of the pivotal livelihood activities, especially in developing countries, and has been considered an important occupation for human society since the era of human settlement began. In simple terms, non-target catches of any species during fishing can be considered 'bycatch,' and fishing bycatch is neither a new fishery management issue nor a new problem. Sundarban is one of the world's largest mangrove lands, extending over 10,200 sq. km in India and Bangladesh. This vast mangrove biome resource is used commercially by the local inhabitants, especially by forest fringe villagers (FFVs), to run their livelihoods. In Sundarban, over-fishing, especially post-larvae collection of wild Penaeus monodon, is one of the major concerns, as during the collection of P. monodon different aquatic species are destroyed as a result of bycatch mortality, which changes productivity and may negatively impact the entire biodiversity of the ecosystem. Wild prawn seed collection gear, such as small-mesh nets, poses a serious threat to aquatic stocks, as the collection is not limited to prawn seed larvae only. Because prawn seed collection is inexpensive, requires little monetary investment, and is lucrative, people are easily engaged in it as a source of income. Wildlife Trust of India's (WTI) intervention in selected forest fringe villages of Sundarban Tiger Reserve (STR) was to estimate and reduce the mortality of aquatic bycatch by involving local communities in a newly developed release method and by reducing their time engagement in prawn seed collection (PSC) through Alternate Income Generation (AIG). The study was conducted for taxonomic identification of the bycatch during the period of March to October 2019. Collected samples were preserved in 70% ethyl alcohol, and all the preserved bycatch samples were identified morphologically by the expertise of the Zoological Survey of India (ZSI), Kolkata. Around 74 different aquatic species were collected: 11 species of molluscs, 41 species of fish (of which 31 were identified), and 22 species of crustaceans (of which 18 were identified). Around 13 species belonging to different orders and families could not be identified morphologically, as they were collected at the juvenile stage. The study reveals that for every single prawn seed collected, eight individuals of associated fauna are lost. Zero bycatch mortality is not practical; rather, collectors should focus on bycatch reduction by avoiding capture, allowing escape, and reducing mortality, and must change their fishing method by increasing the net mesh size, which will avoid non-target captures. But as the prawns are small in size (generally 1-1.5 inches in length), increasing the mesh size would make the collection economically unprofitable for collectors. In this case, returning the bycatch to the water is considered one of the best ways to reduce bycatch mortality and is a more sustainable practice.

Keywords: bycatch mortality, biodiversity, mangrove biome resource, sustainable practice, Alternate Income Generation (AIG)

Procedia PDF Downloads 128
6420 Designing Financing Schemes to Make Forest Management Units Work in Aceh Province, Indonesia

Authors: Riko Wahyudi, Rezky Lasekti Wicaksono, Ayu Satya Damayanti, Ridhasepta Multi Kenrosa

Abstract:

Implementing Forest Management Units (FMUs) is considered the best solution for forest management in developing countries. However, once an FMU has been formed, many parties then blame the FMU and assume it is not working. Currently, there are two main issues that prevent FMUs from being functional, i.e., institutional and financial issues. This paper addresses the financial issues in order to make FMUs in Aceh Province functional. A mixed financing scheme is proposed here, comprising both direct and indirect financing. The direct financing scheme is derived from two components, i.e., public funds and businesses. Non-tax instruments of the intergovernmental fiscal transfer (IFT) system and the FMU's businesses are assessed. Meanwhile, indirect financing is conducted by assessing public funds within villages around the forest estate, as about 50% of the villages in Aceh Province are located around the forest estate. Potential instruments under the IFT system are forest and mining utilization royalties. In order to make these instruments direct financing for FMUs, interventions on their allocation and distribution aspects are conducted. In the allocation aspect, an alteration in the proportion of allocation is required, as the authority to manage forests has shifted from the district to the province. In the distribution aspect, the Government of Aceh can earmark the usage of the funds for FMUs. International funds for climate change are also encouraged to be domesticated and then channeled through these instruments or a new instrument under the public finance system in Indonesia. Based on the FMU's businesses, both from forest products and forest services, the FMU can impose non-tax fees for each forest product and service utilization. However, to do business, the FMU needs to be a Public Service Agency (PSA). With this status, the FMU can directly utilize the non-tax fees without transferring them to the state treasury; the FMU only needs to report the fees to the Ministry of Finance. Meanwhile, the indirect financing scheme is conducted by empowering villages around the forest estate, as villages in Aceh Province received an average village fund of IDR 800 million per village in 2017, and the funds will continue to increase in subsequent years. These schemes should be encouraged in parallel to establish a mixed financing scheme in order to ensure sustainable financing for FMUs in Aceh Province, Indonesia.

Keywords: forest management, public funds, mixed financing, village

Procedia PDF Downloads 176
6419 Quantitative and Qualitative Analysis of Randomized Controlled Trials in Physiotherapy from India

Authors: K. Hariohm, V. Prakash, J. Saravana Kumar

Abstract:

Introduction and Rationale: The increased scope of physiotherapy (PT) practice has also contributed to research in the field of PT. It is essential to determine the production and quality of the clinical trials from India, since this may reflect the scientific growth of the profession. These trends can be taken as a baseline to measure our performance and can also be used as a guideline for future trials. Objective: To quantify and qualitatively analyze the RCTs from India published between 2000 and May 2013, and to classify the data for the information process. Methods: Studies were searched in the Medline database using the key terms "India", "Indian", and "Physiotherapy". Only clinical trials with PT authors were included. Trials outside the scope of PT practice and trials on animals were excluded. Retrieved valid articles were analyzed for year of publication, type of participants, area of study, PEDro score, outcome measure domains of impairment, activity, and participation; 'a priori' sample size calculation, region, and explanation of the intervention. Result: 45 valid articles were retrieved from 2000 to May 2013. The majority of articles were done on symptomatic participants (81%). The conditions that appeared most frequently were low back pain (n=7) and diabetes (n=4). The PEDro scores had a mode of 5, an upper limit of 8, and a lower limit of 4. 97.2% of studies measured the outcome at the impairment level, 34% at the activity level, and 27.8% at the participation level. 29.7% of studies performed an 'a priori' sample size calculation. The correlation between year trend and PEDro score was found to be not significant (p>.05). Individual PEDro item analysis showed: randomization (100%), concealment (33%), baseline comparability (76%), blinding of subject, therapist, and assessor (9.1%, 0%, 10%), follow-up (89%), ITT (15%), between-group statistics (100%), and measures of variance (88%). Conclusion: The trend shows an upward slope in terms of RCTs published from India, which is a good indicator. The qualitative analysis showed some gaps in clinical trial design, which can be expected to be addressed by future researchers.

Keywords: RCT, PEDro, physical therapy, rehabilitation

Procedia PDF Downloads 328
6418 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is “Alphabet Recognition using pixel probability distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet recognition based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One of the popular mobile applications includes reading a visiting card and directly storing it to the contacts. OCRs are also known to be used in radar systems for reading speeding drivers' license plates, among other things. The implementation of our project has been done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module aims at database generation. The database was generated using two methods: (a) Run-time generation included database generation at compilation time using the inbuilt fonts of the OpenCV library. Human intervention is not necessary for generating this database. (b) Contour detection: a ‘jpeg’ template containing different fonts of an alphabet is converted to the weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, dilating, etc., and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: The extracted letters are classified and predicted using the neural networks algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
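For illustration, a minimal sketch of the preprocessing and segmentation stage described above using OpenCV; the file name is a placeholder and the parameter values are illustrative, not the project's exact settings.

```python
# Adaptive thresholding, dilation, and contour-based letter segmentation (sketch).
import cv2

img = cv2.imread("handwritten_text.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

# Binarize with adaptive thresholding and thicken the strokes slightly.
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)
binary = cv2.dilate(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2)))

# Contour detection to extract candidate letters as bounding boxes.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
letters = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 50:                                   # ignore tiny noise blobs
        roi = cv2.resize(binary[y:y + h, x:x + w], (28, 28))
        letters.append(((x, y), roi))                # fixed-size patch for the classifier

letters.sort(key=lambda item: item[0])               # rough left-to-right ordering
print(f"segmented {len(letters)} candidate letters")
```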

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 371
6417 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas, at sites exploited economically by mechanized cultivation of seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of several micro-basins of different sizes, ranging between 1,000 m² and 10,000 m² (the smallest and the largest, respectively), all with a large number of occurrences and outcrop locations of archaeological material and a high density of finds in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey in which points were plotted with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was also used, helping theoretically in understanding the ancient landscape and the occupation preferences of ancient peoples through their settlements, relating them to what was observed in the field. The mapping was followed by cartographic work in the region, producing land elevation products; the resulting cartographic products contribute to the understanding of the distribution of the materials, the definition and extent of the dispersed material, and the turning over of in situ material by mechanization as a result of human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient occupation with the current human occupation. The third stage of the project consisted of the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to a previously reported methodology, measurement, and quantification. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the region's ancient history, all collected outside of their environmental and historical context and considerably altered and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain, and their association with ancient history), while attributes such as the individual lithology of the object and its functionality were disregarded. As preliminary results, we can point out the alteration of materials by heavy mechanization and the consequent soil disturbance processes, which generate transport of archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. Through this process, it is expected to obtain a reliable, high-accuracy model which can be applied to archaeological sites of lower density without significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 264
6416 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon

Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim

Abstract:

There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications. Research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m² g⁻¹, which is the highest surface area so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at a width of 11.26 Å, suggesting dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, for making different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg⁻¹. The energy density for the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg⁻¹. For comparison purposes, OP-AC||OP-AC capacitors were studied in both aqueous (1M H2SO4) and organic (1M LiPF6 in EC-DMC) electrolytes, which delivered energy densities of 6.6 Wh kg⁻¹ and 16.3 Wh kg⁻¹, respectively. The cycling retentions obtained at a current density of 1 A g⁻¹ were ~85.8, ~87.0, ~82.2 and ~58.8% after 2500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively. In addition, characterization studies were performed by elemental and proximate composition, thermogravimetry, field emission scanning electron microscopy, Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, the characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, which correspond to sp²-graphitic C, sp³-graphitic C, C-O, C=O, and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. The findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.

Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors

Procedia PDF Downloads 222
6415 Application of Response Surface Methodology to Optimize the Thermal Conductivity Enhancement of a Hybrid Nanofluid

Authors: Aminreza Noghrehabadi, Mohammad Behbahani, Ali Pourabbasi

Abstract:

In this experimental work, unlike conventional methods that mix two kinds of nanoparticles together, silver nanoparticles have been synthesized directly on the surface of graphene. The effect of adding this modified graphene-silver nanocomposite to the base fluid (distilled water) was studied. Transmission electron microscopy (TEM) and field emission scanning electron microscopy (FESEM) techniques were used to examine the surfaces and atomic structure of the nanoparticles. An ultrasonic device was used to disperse the nanocomposite in distilled water. The thermal conductivity coefficient was measured by the transient hot wire method using the KD2-pro device, over a temperature range of 30°C to 50°C, concentrations of 10 ppm to 1000 ppm, and ultrasonic times of 2 minutes to 15 minutes. The results showed that, with an increase in any of the three parameters (temperature, concentration, and ultrasonic time), the percentage enhancement in thermal conductivity rises until the optimal point is reached, after which it follows a downward trend. To calculate the thermal conductivity of this nanofluid, an accurate empirical equation was obtained using Design Expert software.
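For illustration, a minimal sketch of fitting a quadratic response surface to thermal-conductivity enhancement data and locating its optimum; the measurements below are made-up placeholders, not the paper's data (which were fitted in Design Expert).

```python
# Quadratic response surface in three factors, fitted by least squares (placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# factors: temperature [C], concentration [ppm], ultrasonic time [min]
X = np.array([[30, 10, 2], [30, 1000, 15], [40, 500, 8], [40, 100, 5],
              [50, 10, 15], [50, 1000, 2], [40, 500, 12], [35, 250, 10],
              [30, 500, 8], [50, 500, 8], [45, 750, 6], [35, 100, 14]])
y = np.array([3.1, 9.8, 12.5, 7.4, 8.9, 10.2, 13.1, 9.0, 6.5, 11.0, 10.5, 7.8])  # enhancement [%]

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Evaluate the fitted surface on a coarse grid to locate the optimal point.
T, c, t = np.meshgrid(np.linspace(30, 50, 21), np.linspace(10, 1000, 21),
                      np.linspace(2, 15, 14), indexing="ij")
grid = np.column_stack([T.ravel(), c.ravel(), t.ravel()])
pred = model.predict(poly.transform(grid))
print("predicted optimum (T, ppm, min):", grid[np.argmax(pred)], "->", round(pred.max(), 2), "%")
```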

Keywords: thermal conductivity, nanofluids, enhancement, silver nano particle, optimal point

Procedia PDF Downloads 70
6414 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of the retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. The results were better in terms of both MAP and NDCG. The main observation is that the hybridization of text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 309
6413 Thermal Analysis of a Graphite Calorimeter for the Measurement of Absorbed Dose for Therapeutic X-Ray Beam

Authors: I.J. Kim, B.C. Kim, J.H. Kim, C.-Y. Yi

Abstract:

Heat transfer in a graphite calorimeter is analyzed using the finite element method. The calorimeter is modeled in 3D geometry. Quasi-adiabatic mode operation is realized in the simulation, and the temperature rises produced by the different sources, ionizing radiation and electric heaters, are compared directly. The temperature distribution caused by the electric power was very different from that caused by the ionizing radiation because of its point-like, localized heating. However, the temperature rises finally read by the sensing thermistors agreed with each other to within 0.02%.

Keywords: graphite calorimeter, finite element analysis, heat transfer, quasi-adiabatic mode

Procedia PDF Downloads 417