Search results for: optical flow variation

876 Treatment of Low-Grade Iron Ore Using Two Stage Wet High-Intensity Magnetic Separation Technique

Authors: Moses C. Siame, Kazutoshi Haga, Atsushi Shibayama

Abstract:

This study investigates the removal of silica, alumina and phosphorus as impurities from Sanje iron ore using wet high-intensity magnetic separation (WHIMS). Sanje iron ore is a low-grade hematite ore found in the Nampundwe area of Zambia, from which iron is to be used as feed in the steelmaking process. Chemical composition analysis using an X-ray fluorescence spectrometer showed that Sanje low-grade ore contains 48.90 mass% hematite (Fe2O3), corresponding to an iron grade of 34.18 mass%. The ore also contains silica (SiO2) and alumina (Al2O3) of 31.10 mass% and 7.65 mass%, respectively. Mineralogical analysis using an X-ray diffraction spectrometer showed hematite and silica as the major mineral components of the ore, while magnetite and alumina exist as minor components. Mineral particle distribution was analysed using a scanning electron microscope with X-ray energy dispersion spectrometry (SEM-EDS); the images showed that the average size of the alumina-silicate gangue particles is on the order of 100 μm and that they exist as iron-bearing interlocked particles. Magnetic separation was done using a Series L Model 4 magnetic separator. The effects of magnetic separation parameters such as magnetic flux density, particle size, and pulp density of the feed were studied during the magnetic separation experiments. The ore, with an average particle size of 25 µm and a pulp density of 2.5%, was concentrated using a pulp flow of 7 L/min. The results showed that the optimal magnetic flux density was 10 T, which gave an iron recovery of 93.08% at a grade of 53.22 mass%. Gangue mineral particles containing 12 mass% silica and 3.94 mass% alumina remained in the concentrate; the concentrate was therefore further treated in a second WHIMS stage using the same parameters as the first stage. The second stage recovered 83.41% of the iron at a grade of 67.07 mass%. Silica was reduced to 2.14 mass% and alumina to 1.30 mass%; phosphorus was also reduced to 0.02 mass%. A two-stage magnetic separation process was thus established from these results.
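
As a quick numerical check of the reported stage recoveries, a minimal sketch (illustrative only, not part of the original study) that computes the cumulative iron recovery across the two WHIMS stages:

```python
# Hedged sketch: cumulative Fe recovery over the two WHIMS stages,
# using the recoveries reported in the abstract.
stage_recoveries = [0.9308, 0.8341]  # stage 1, stage 2

overall = 1.0
for r in stage_recoveries:
    overall *= r

print(f"Overall Fe recovery after both stages: {overall:.2%}")  # ~77.64%
```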

Keywords: Sanje iron ore, magnetic separation, silica, alumina, recovery

Procedia PDF Downloads 259
875 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

For fished populations to persist in the Anthropocene, it is critical for adaptive management to predict how they will respond to the coupled threats of exploitation and climate change. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence for the selection of physiological traits by capture fisheries. Hence, fish populations may have a limited scope for the rapid expansion of their tolerance ranges or physiological adaptation under fishing pressures. To determine the physiological vulnerability of fished populations in the Anthropocene, metabolic performance was compared between a fished and a spatially protected Chrysoblephus laticeps population in response to thermal variability. Individual aerobic scope phenotypes were quantified using intermittent-flow respirometry by comparing changes in the energy expenditure of each individual at ecologically relevant temperatures, mimicking the variability experienced as a result of upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and spatially protected populations. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be compromised with predicted future increases in cold upwelling events. This requires the conservation of the physiologically fittest individuals in spatially protected areas, from which they can recruit into nearby fished areas, as a climate resilience tool.
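
For readers unfamiliar with the metric, a minimal sketch of how an absolute aerobic scope phenotype is derived from respirometry-derived metabolic rates; the values are invented for illustration, not the study's data:

```python
# Hedged sketch: absolute aerobic scope from intermittent-flow
# respirometry, assuming standard (SMR) and maximum (MMR) metabolic
# rates in mg O2/kg/h have already been estimated per individual.
def aerobic_scope(smr: float, mmr: float) -> float:
    """Absolute aerobic scope = MMR - SMR."""
    return mmr - smr

# hypothetical individuals at one test temperature
fish = {"fish_01": (70.0, 310.0), "fish_02": (85.0, 240.0)}  # id: (SMR, MMR)
for fid, (smr, mmr) in fish.items():
    print(fid, aerobic_scope(smr, mmr), "mg O2/kg/h")
```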

Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry

Procedia PDF Downloads 129
874 Treatment of a Galvanization Wastewater in a Fixed-Bed Column Using L. hyperborea and P. canaliculata Macroalgae as Natural Cation Exchangers

Authors: Tatiana A. Pozdniakova, Maria A. P. Cechinel, Luciana P. Mazur, Rui A. R. Boaventura, Vitor J. P. Vilar

Abstract:

Two brown macroalgae, Laminaria hyperborea and Pelvetia canaliculata, were employed as natural cation exchangers in a fixed-bed column for Zn(II) removal from a galvanization wastewater. The column (4.8 cm internal diameter) was packed with 30-59 g of previously hydrated algae up to a bed height of 17-27 cm. The wastewater or eluent was percolated using a peristaltic pump at a flow rate of 10 mL/min. The effluent used in each experiment presented similar characteristics: pH of 6.7, 55 mg/L of chemical oxygen demand, and about 300, 44, 186 and 244 mg/L of sodium, calcium, chloride and sulphate ions, respectively. The main difference was the nitrate concentration: 20 mg/L for the effluent used with L. hyperborea and 341 mg/L for the effluent used with P. canaliculata. The inlet zinc concentration also differed slightly: 11.2 mg/L for the L. hyperborea and 8.9 mg/L for the P. canaliculata experiments. The breakthrough time was approximately 22.5 hours for both macroalgae, corresponding to a service capacity of 43 bed volumes. This indicates that 30 g of biomass is able to treat 13.5 L of the galvanization wastewater. The uptake capacities at the saturation point were similar to those obtained in batch studies (unpublished data) for both algae. After column exhaustion, desorption with 0.1 M HNO3 was performed. Desorption using 9 and 8 bed volumes of eluent achieved efficiencies of 100 and 91%, respectively, for L. hyperborea and P. canaliculata. After elution with nitric acid, the column was regenerated using different strategies: i) converting all the binding sites to the sodium form by passing a solution of 0.5 M NaCl until a final pH of 6.0 was achieved; ii) passing only tap water in order to increase the solution pH inside the column up to pH 3.0, in which case the second sorption cycle was performed using protonated algae. In the first approach, distilled water was passed through the column to remove the excess salt, which destroyed the algae structure and caused the column to collapse. Using the second approach, the algae remained intact during three consecutive sorption/desorption cycles without loss of performance.
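
The reported service capacity can be checked from the column geometry and flow rate; a minimal sketch, assuming the 17 cm bed height corresponds to the 30 g packing:

```python
# Hedged sketch: service capacity (bed volumes treated at breakthrough)
# from the column geometry and operating data in the abstract.
import math

d_cm, h_cm = 4.8, 17.0          # internal diameter and assumed bed height
flow_L_h = 10.0 * 60 / 1000.0   # 10 mL/min expressed in L/h
t_break_h = 22.5                # breakthrough time, h

bed_volume_L = math.pi * (d_cm / 2) ** 2 * h_cm / 1000.0
treated_L = flow_L_h * t_break_h
print(f"bed volume ~ {bed_volume_L:.2f} L")
print(f"treated ~ {treated_L:.1f} L, i.e. {treated_L / bed_volume_L:.0f} bed volumes")
```

Running this gives roughly 13.5 L treated and about 43-44 bed volumes, consistent with the figures in the abstract.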

Keywords: biosorption, zinc, galvanization wastewater, packed-bed column

Procedia PDF Downloads 312
873 Kirchhoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique

Authors: Abhilash Sahu, Amit Priyadarshi

Abstract:

In this paper, we discuss the existence of weak solutions of a Kirchhoff-type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is the intrinsic boundary of the Sierpinski gasket. Here M: R → R is a positive function, h: S × R → R is a suitable function which is part of our main equation, and ∆p denotes the p-Laplacian, with p > 1. First, we define a weak solution for our problem and then show the existence of at least two solutions under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain; only recently has the notion of differential operators such as the Laplacian and the p-Laplacian been defined on fractal domains. We recall these results first and then address the above problem. In the literature, Laplacian and p-Laplacian equations are studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, Laplacian equations have been studied more than p-Laplacian equations, probably because in that case the corresponding function space is reflexive and many minimax theorems which work for regular domains are applicable there, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems on elastic fractal media, and fluid flow through fractal regions. We study the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Frechet or Gateaux differentiable, so a critical point becomes a solution to the problem; also, the function space considered is reflexive, and hence a weakly convergent subsequence can be extracted from any bounded sequence. In our case, neither is the Euler functional differentiable nor is the function space known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
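
The displayed equation appears to have been lost in extraction. A generic Kirchhoff-type p-Laplacian problem of the kind described, given here purely as a hedged reconstruction (the paper's exact form may differ), reads:

```latex
% Hedged reconstruction; the exact equation in the paper may differ.
\begin{cases}
  -\,M\bigl(\mathcal{E}_p(u)\bigr)\,\Delta_p u \;=\; h(x,u) & \text{in } S \setminus S_0, \\
  u \;=\; 0 & \text{on } S_0,
\end{cases}
```

where \(\mathcal{E}_p\) denotes the p-energy on the gasket, the fractal analogue of \(\int |\nabla u|^p\,dx\) on a regular domain.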

Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution

Procedia PDF Downloads 235
872 DNA Fragmentation and Apoptosis in Human Colorectal Cancer Cell Lines by Sesamum indicum Dried Seeds

Authors: Mohd Farooq Naqshbandi

Abstract:

Four fractions of an aqueous extract of sesame seeds (Sesamum indicum L.) were studied for in vitro DNA fragmentation, cell migration, and cellular apoptosis in SW480 and HTC116 human colorectal cancer cell lines. The seeds of Sesamum indicum were extracted with six solvents: methanol, ethanol, water, chloroform, acetonitrile, and hexane. The aqueous extract (IC₅₀ value 154 µg/mL) was found to be the most cytotoxic against the SW480 human colorectal cancer cell line. Further fractionation of this aqueous extract by flash chromatography gave four fractions, which were studied for anticancer activity and DNA binding. Cell viability was assessed by colorimetric assay (MTT). IC₅₀ values for these four fractions ranged from 137 to 548 µg/mL for the HTC116 cell line and 141 to 402 µg/mL for the SW480 cell line. The four fractions showed good anticancer and DNA binding properties. The DNA binding constants ranged from 10.4 × 10⁴ to 28.7 × 10⁴, showing good interactions with DNA; the binding interactions were due to intercalative and π-π electron forces. The results indicate that the aqueous extract fractions of sesame inhibited cell migration of SW480 and HTC116 human colorectal cancer cell lines and induced DNA fragmentation and apoptosis. This was demonstrated by the low wound-closure percentage in cells treated with these fractions compared with the control (80%). Morphological features of the nuclei of treated cells revealed chromatin condensation, nuclear shrinkage, and apoptotic body formation, which indicate cell death by apoptosis. Flow cytometry of fraction-treated SW480 and HTC116 cells also revealed death by apoptosis. The results of the study indicate that the aqueous extract of sesame seeds may be used to treat colorectal cancer.
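
To make the MTT-based IC₅₀ step concrete, a minimal sketch of estimating an IC₅₀ by log-linear interpolation of viability data; the concentrations and viabilities below are invented for illustration, not the study's measurements:

```python
# Hedged sketch: IC50 by log-linear interpolation of MTT viability data.
import numpy as np

conc = np.array([25, 50, 100, 200, 400, 800.0])  # ug/mL
viab = np.array([95, 88, 70, 42, 21, 9.0])       # % viable cells

# find the pair of points bracketing 50% viability, interpolate in log-space
i = np.where(viab < 50)[0][0]
x0, x1 = np.log10(conc[i - 1]), np.log10(conc[i])
y0, y1 = viab[i - 1], viab[i]
ic50 = 10 ** (x0 + (50 - y0) * (x1 - x0) / (y1 - y0))
print(f"IC50 ~ {ic50:.0f} ug/mL")
```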

Keywords: Sesamum indicum, cell migration inhibition, apoptosis induction, anticancer activity, colorectal cancer

Procedia PDF Downloads 88
871 The Effect of Interpersonal Relationships on Eating Patterns and Physical Activity among Asian-American and European-American Adolescents

Authors: Jamil Lane, Jason Freeman

Abstract:

Background: Interpersonal relationships are vital predictors of adolescents' eating habits, exercise activity, and health problems, including obesity. The effects of interpersonal relationships (i.e., family, friends, and intimate partners) on individual health behaviors and development have gained considerable attention during the past 10 years. Teenagers' eating habits and exercise activities are established through a dynamic course involving internal and external factors such as food preferences, body weight perception, and parental and peer influence. When conceptualizing one's interpersonal relationships, it is important to understand that how one relates to others is shaped by their culture. East-Asian culture has been characterized as collectivistic, which describes the significant role intergroup relationships play in the construction of the self. Cultures found in North America, on the other hand, can be characterized as individualistic, meaning that these cultures encourage individuals to prioritize their interests over the needs and wants of their compatriots. Individuals from collectivistic cultures typically have stronger boundaries between in-group and out-group membership, whereas those from individualistic cultures see themselves as distinct and separate from strangers as well as from family or friends. Objective: The purpose of this study is to examine the effect of collectivism and individualism on the interpersonal relationships that shape eating patterns and physical activity among Asian-American and European-American adolescents. Design/Methods: Analyses were based on data from the National Longitudinal Study of Adolescent Health, a nationally representative sample of adolescents in the United States who were surveyed from 1994 through 2008. These data will be used to examine interpersonal relationship factors that shape dietary intake and physical activity patterns within the Asian-American and European-American populations in the United States. Factors relating to relationship strength and eating and exercise behaviors were reported by participants in the first wave of data collection (1995). We plan to analyze our data using intragroup comparisons among those who identified as 'Asian-American' (n = 270) and 'White or European American' (n = 4,294) across the domains of positivity of peer influence and level of physical activity / healthy eating. Further, intergroup comparisons of these relationships will be made to disentangle how the role of positive peer influence in maintaining healthy eating and exercise habits differs with cultural variation. Results: We hypothesize that East-Asian participants with a higher degree of positivity in their peer and family relationships will show a significantly greater rise in healthy eating and exercise behaviors than European-American participants with similar degrees of relationship positivity.

Keywords: interpersonal relationships, eating patterns, physical activity, adolescent health

Procedia PDF Downloads 200
870 Powder Assisted Sheet Forming to Fabricate Ti Capsule Magnetic Hyperthermia Implant

Authors: Keigo Nishitani, Kohei Mizuta, Kazuyoshi Kurita, Yukinori Taniguchi

Abstract:

To establish a mass-production process for a Ti capsule containing Fe powder as a magnetic hyperthermia implant, we assumed that a Ti thin sheet can be drawn into a φ1.0 mm die hole through the medium of Fe powder, becoming the outer shell of the capsule. This study discusses the mechanism of the powder-assisted deep drawing process by both numerical simulation and experiment. A Ti thin sheet blank was placed on the die and covered with an Fe powder layer without pressurizing. The upper punch was then indented into the Fe powder layer, and the blank was drawn into the die cavity as pressurized powder particles were extruded into the cavity from behind the drawn blank. The Distinct Element Method (DEM) was used to demonstrate the process. To identify the bonding parameters of the Fe particles, namely cohesion, tensile bond stress and inter-particle friction angle, axial and diametrical compression failure tests of Fe powder compacts were conducted. Several density ratios of powder compacts in the range 0.70-0.85 were investigated, and the relationship between mean stress and equivalent stress was calculated with consideration of the critical state line, which governs the failure criterion in the consolidation of Fe powder. Since the variation of the bonding parameters with density ratio was identified experimentally, and good agreement was found between the failure tests and their simulation, demonstration of powder-assisted sheet forming using DEM became applicable. The simulation results indicated that the indent/drawing length of the Ti thin sheet is promoted by a smaller Fe particle size, a larger indent punch diameter, a lower friction coefficient between the die surface and the Ti sheet, and a certain range of die inlet taper angle. In the deep drawing test, we made a die set with a φ2.4 mm punch and a φ1.0 mm die bore diameter. Pure Ti sheet of 100 μm thickness, annealed at 650 °C, was tested. After indentation, the indented/drawn capsule was observed under a microscope and its length was measured to discuss the feasibility of this capsulation process. A longer drawing length was obtained with progressive loading than with single-stroke loading. Progressive loading is expected to have the advantage that extrusion of powder particles into the die cavity together with the Ti sheet is promoted, since the powder particle layer can be rebuilt while the punch is withdrawn from the layer in each loading step. This capsulation phenomenon is qualitatively demonstrated by DEM simulation. Finally, we fabricated a Ti capsule containing Fe powder for magnetic hyperthermia cancer treatment. It is concluded that the suggested method can be used for manufacturing Ti capsule implants for magnetic hyperthermia cancer care.
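
To make the calibration step concrete, a minimal sketch of a Mohr-Coulomb-style bond failure check of the kind used when fitting DEM bonding parameters (cohesion, tensile bond stress, inter-particle friction angle); both the failure form and the values are illustrative assumptions, not the authors' contact model:

```python
# Hedged sketch: failure check for a DEM contact bond.
# sigma_n > 0 in compression; all stresses in MPa (illustrative).
import math

def bond_fails(sigma_n, tau, cohesion, tensile_bond, phi_deg):
    if sigma_n < -tensile_bond:                     # tensile failure
        return True
    tau_max = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return abs(tau) > tau_max                       # shear failure

print(bond_fails(sigma_n=2.0, tau=5.5, cohesion=3.0,
                 tensile_bond=1.5, phi_deg=30.0))   # -> True
```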

Keywords: metal powder compaction, metal forming, distinct element method, cancer care, magnetic hyperthermia

Procedia PDF Downloads 299
869 Effect of Loop Diameter, Height and Insulation on a High Temperature CO2 Based Natural Circulation Loop

Authors: S. Sadhu, M. Ramgopal, S. Bhattacharyya

Abstract:

Natural circulation loops (NCLs) are buoyancy-driven flow systems without any moving components. NCLs have vast applications in the geothermal, solar and nuclear power industries, where reliability and safety are of foremost concern. Due to certain favorable thermophysical properties, especially near supercritical regions, carbon dioxide can be considered an ideal loop fluid in many applications. In the present work, a high-temperature NCL that uses supercritical carbon dioxide as the loop fluid is analysed. The effects of relevant design and operating variables on loop performance are studied. The system operating under steady state is modelled taking into account the axial conduction through the loop fluid and loop wall, as well as heat transfer with the surroundings. The heat source is considered to be a heater with controlled heat flux, and the heat sink is modelled as an end heat exchanger with water as the external cold fluid. The governing equations for mass, momentum and energy conservation are normalized and solved numerically using the finite volume method. Results are obtained for a loop pressure of 90 bar with the power input varying from 0.5 kW to 6.0 kW. The numerical results are validated against experimental results reported in the literature in terms of the modified Grashof number (Grm) and Reynolds number (Re). Based on the results, buoyancy- and friction-dominated regions are identified for a given loop. A parametric analysis shows the effects of loop diameter, loop height, ambient temperature and insulation. The results show that, for the high-temperature loop, heat loss to the surroundings affects loop performance significantly; hence this conjugate heat transfer between the loop and the surroundings has to be considered in the analysis of high-temperature NCLs.
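
For orientation, a minimal sketch of the modified Grashof number in the Vijayan-style form commonly used for NCL scaling; both this definition and all property values are assumptions for illustration, not taken from the paper:

```python
# Hedged sketch: modified Grashof number for a natural circulation loop,
# Grm = D^3 * rho^2 * beta * g * Qh * H / (A * mu^3 * cp)
# (Vijayan-type definition, assumed; illustrative sCO2 properties).
import math

D = 0.014                      # loop inner diameter, m
H = 1.2                        # loop height, m
Qh = 2000.0                    # heater power, W
rho, beta = 600.0, 0.01        # density (kg/m^3), expansivity (1/K)
mu, cp = 4.5e-5, 3000.0        # viscosity (Pa.s), specific heat (J/kg.K)
g = 9.81
A = math.pi * D**2 / 4         # flow area, m^2

Grm = D**3 * rho**2 * beta * g * Qh * H / (A * mu**3 * cp)
print(f"Grm ~ {Grm:.3e}")
```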

Keywords: conjugate heat transfer, heat loss, natural circulation loop, supercritical carbon dioxide

Procedia PDF Downloads 241
868 Development and Experimental Evaluation of a Semiactive Friction Damper

Authors: Juan S. Mantilla, Peter Thomson

Abstract:

Seismic events may result in discomfort for the occupants of buildings, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces; outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of the device's efficiency varying with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In that case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces in each floor vary in time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep their motion in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost Semiactive Variable Friction Damper (SAVFD) in reduced scale to reduce vibrations of structures subjected to earthquakes. The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor which is controlled with an (3) Arduino board, acquires accelerations or displacements from (4) sensors in the immediately upper and lower floors, and is powered by (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
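
As an illustration of what one decentralized control step might look like, a minimal sketch; the control law (clamping force proportional to the inter-story velocity, saturated at the brake capacity) and all values are assumptions, not the authors' algorithm:

```python
# Hedged sketch: one decentralized control step for a variable-friction
# damper, mapping the relative velocity between the floors above and
# below the device to a clamping force command.

def clamp_command(v_rel: float, gain: float, f_max: float) -> float:
    """Saturated proportional law: |gain * v_rel| capped at f_max."""
    return min(abs(gain * v_rel), f_max)

# hypothetical readings from the sensors above and below the damper
v_upper, v_lower = 0.12, 0.03                      # m/s
print(clamp_command(v_upper - v_lower, gain=5e4, f_max=8e3))  # N
```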

Keywords: earthquake response, friction damper, semiactive control, shaking table

Procedia PDF Downloads 378
867 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus globulus

Authors: Ngoc Nguyen

Abstract:

A large plantation estate of Eucalyptus globulus has been established and grown to produce pulpwood. This resource is not suitable for the production of decorative products, principally due to low wood grades and a 'dull' appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which affects the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would provide great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also called 'reconstructed veneer'. An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on poplar veneer from plantations whose wood is characterized by low density, even colour, few defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability. Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are treated. Both soaking (dipping) and vacuum-pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, colour measurements using the CIELAB colour system have shown significant differences in the colour of the undyed veneers produced from the heartwood. According to these measurements, the colour became moderately darker with increasing sodium chloride concentration compared with control samples. It is difficult to identify a suitable dye solution at this stage, as variables such as dye concentration, dyeing temperature and dyeing time have not yet been investigated. After all trials are completed, the dye will be used with and without UV absorbent, applying the optimal veneer colouring parameters.
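
For reference, a minimal sketch of the kind of colour comparison the CIELAB measurements support (the CIE76 colour difference); the L*a*b* values are invented for illustration:

```python
# Hedged sketch: CIE76 colour difference between a treated veneer
# sample and a control, i.e. Euclidean distance in L*a*b* space.
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference dE*ab."""
    return math.dist(lab1, lab2)

control = (62.0, 8.5, 21.0)   # illustrative L*, a*, b* values
treated = (48.0, 12.0, 17.5)
print(f"dE*ab = {delta_e(control, treated):.1f}")
```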

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 350
866 Design of Nano-Reinforced Carbon Fiber Reinforced Plastic Wheel for Lightweight Vehicles with Integrated Electrical Hub Motor

Authors: Davide Cocchi, Andrea Zucchelli, Luca Raimondi, Maria Brugo Tommaso

Abstract:

The increasing attention given to the issues of environmental pollution and climate change is strongly stimulating the development of electrically propelled vehicles powered by renewable energy, in particular solar energy. Given the small amount of solar energy that can be stored and subsequently transformed into propulsive energy, it is necessary to develop vehicles with high mechanical, electrical and aerodynamic efficiencies along with reduced masses. Mass reduction is of fundamental relevance especially for the unsprung masses, that is, the assembly of those elements that do not undergo a variation of their distance from the ground (wheel, suspension system, hub, upright, braking system). Reducing the unsprung masses is fundamental to decreasing rolling inertia and improving the drivability, comfort, and performance of the vehicle. This principle applies even more in solar-propelled vehicles equipped with an electric motor connected directly to the wheel hub; in this solution the electric motor is integrated inside the wheel. Since the electric motor is part of the unsprung masses, the development of compact and lightweight solutions is of fundamental importance. The purpose of this research is the design, development and optimization of a CFRP 16-inch wheel hub motor for solar propulsion vehicles that can carry up to four people. In addition to maximizing aspects of primary importance such as mass, strength, and stiffness, other innovative constructive aspects were explored. One of the main objectives was to achieve high geometric packing in order to ensure a reduced lateral dimension without reducing the power exerted by the electric motor. In the final solution, it was possible to realize a wheel hub motor assembly contained completely within the rim width, for a total lateral dimension of less than 100 mm. This result was achieved by developing an innovative connection system between the wheel and the rotor with a double purpose: centering and transmission of the driving torque. This solution, with appropriate interlocking noses, allows the transfer of high torques and at the same time guarantees both the centering and the necessary stiffness of the transmission system. Moreover, to avoid delamination in critical areas, evaluated by means of FEM analysis using 3D Hashin damage criteria, electrospun nanofibrous mats were interleaved between the critical CFRP layers. In order to reduce rolling resistance, the rim was designed to withstand high inflation pressure. Laboratory tests were performed on the rim using the Digital Image Correlation (DIC) technique, and the wheel was tested for fatigue bending according to E/ECE/324 R124e.
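
For context, a minimal sketch of Hashin-type (1980) failure indices of the kind evaluated in such FEM damage screening; the exact criteria implemented in the study may differ, and the strength values here are illustrative assumptions:

```python
# Hedged sketch: Hashin-type fibre- and matrix-tension indices for a
# unidirectional ply. Stresses s_ij and strengths XT, YT, S12, S23 in
# MPa (illustrative); failure is predicted where an index reaches 1.

def hashin_fibre_tension(s11, s12, s13, XT, S12):
    return (s11 / XT) ** 2 + (s12**2 + s13**2) / S12**2

def hashin_matrix_tension(s22, s33, s12, s13, s23, YT, S12, S23):
    return ((s22 + s33) / YT) ** 2 \
        + (s23**2 - s22 * s33) / S23**2 \
        + (s12**2 + s13**2) / S12**2

print(hashin_fibre_tension(s11=1400, s12=40, s13=10, XT=2100, S12=90))
```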

Keywords: composite laminate, delamination, DIC, lightweight vehicle, motor hub wheel, nanofiber

Procedia PDF Downloads 214
865 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. Such sensors have been deployed on most Earth observation satellites (EOSs). In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image-space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles of each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by polynomial interpolation in time (t), where t is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations, and the unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.
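
For reference, the standard form of the collinearity condition referenced above, written here as a hedged reconstruction: f is the focal length, r_ij the elements of the rotation matrix at epoch t, and X_S, Y_S, Z_S the exposure-station coordinates, modelled as polynomials in t (for an ideal push-broom geometry the along-track image coordinate x is close to zero on each line):

```latex
x = -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\qquad
y = -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
```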

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
864 Numerical Study of Laminar Separation Bubble Over an Airfoil Using the γ-Reθt SST Turbulence Model at Moderate Reynolds Number

Authors: Younes El Khchine

Abstract:

A parametric study has been conducted to analyse the flow around the S809 wind-turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various AoA were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and on the aerodynamic performance of wind turbines. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil and accelerates the transition process; the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer, which considerably reduces the length of the separation bubble. An increase in the level of free-stream turbulence intensity leads to a decrease in the separation bubble length and an increase in the lift coefficient, while having negligible effects on the stall angle. As the AoA increases, the bubble on the suction surface of the airfoil moves upstream toward the leading edge, causing earlier laminar separation.

Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number

Procedia PDF Downloads 70
863 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/sgGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: Treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, allowing the delivery of genetic material that encodes potential therapeutic molecules, represents an attractive option. The treatment of brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intra-cranial (i.c.), and intra-nasal (i.n.), for stable nucleic acid-lipid particles (SNALPs) containing the gene-editing tools, namely Cas9 mRNA and sgRNA against GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene editing to different extents depending on the administration route. Intranasal administration offers an attractive and non-invasive way to access the brain, circumventing the blood-brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and nucleic acid therapeutics delivery, improving treatment options for a range of neurodegenerative diseases. We utilised Rosa26-Cas9 knock-in mice, expressing GFP, to study the brain distribution and gene-editing efficiency of SNALPs after i.v., i.c., and i.n. administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by an in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by the RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were administered SNALPs i.v., i.n., or i.c. to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. administration, or a single dose of 0.25 mg/kg sgGFP following i.c. administration. Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. The in vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed at 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and remain in the brain for up to 24 h following i.v., i.n., and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in mouse brains; additional studies are currently taking place to increase mouse numbers. Conclusion: The results confirmed efficient gene knockout by SNALPs in Rosa26-Cas9 knock-in mice expressing GFP, with the routes ranked i.v. ≈ i.c. > i.n. Each administration route has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in Motor Neuron Disease pathology.

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 103
862 Effects of Gamma-Tocotrienol Supplementation on T-Regulatory Cells in Syngeneic Mouse Model of Breast Cancer

Authors: S. Subramaniam, J. S. A. Rao, P. Ramdas, K. R. Selvaduray, N. M. Han, M. K. Kutty, A. K. Radhakrishnan

Abstract:

The immune system is a complex system in which immune cells can respond to a wide range of challenges, including cancer progression. In the event of cancer development, however, tumour cells trigger an immunosuppressive environment via activation of myeloid-derived suppressor cells and T regulatory (Treg) cells. Treg cells are a subset of CD4+ T lymphocytes known to have crucial roles in regulating immune homeostasis and promoting the establishment and maintenance of peripheral tolerance; dysregulation of these mechanisms can lead to cancer progression and immune suppression. Recently, many studies have reported on the effects of natural bioactive compounds on immune responses against cancer. The tocotrienol-rich fraction, consisting of 70% tocotrienols and 30% α-tocopherol, is known to exhibit immunomodulatory as well as anti-cancer properties. Hence, this study was designed to evaluate the effects of gamma-tocotrienol (G-T3) supplementation on Treg cells in a syngeneic mouse model of breast cancer. Female BALB/c mice were divided into two groups and fed either soy oil (vehicle) or G-T3 for two weeks, followed by inoculation with tumour cells. All mice continued to receive the same supplementation until day 49. The results showed a significant reduction in tumour volume and weight in G-T3-fed mice compared with vehicle-fed mice. Lung and liver histology showed reduced evidence of metastasis in tumour-bearing G-T3-fed mice. Flow cytometry analysis revealed that the T-helper cell population increased and the T-regulatory cell population was suppressed following G-T3 supplementation. Moreover, immunohistochemistry showed a marked decrease in FOXP3 expression in G-T3-fed tumour-bearing mice. In conclusion, G-T3 supplementation improved prognosis in breast cancer by enhancing the immune response in tumour-bearing mice. Therefore, gamma-T3 may be used as an immunotherapy agent for the treatment of breast cancer.

Keywords: breast cancer, gamma tocotrienol, immune suppression, supplement

Procedia PDF Downloads 223
861 Organizational Inertia as a Control Mechanism for Organizational Creativity and Agility in a Disruptive Environment

Authors: Doddy T. P. Enggarsyah, Soebowo Musa

Abstract:

The Covid-19 pandemic has changed business environments and spread economic contagion rapidly, as the stringent lockdowns and social distancing that were initially intended to cut off the spread of the virus have instead cut off the flow of economies. With no existing experience or playbook for dealing with such a crisis, the prolonged pandemic can lead to bankruptcies, despite cases of companies that have managed not only to survive but also to increase sales and create more jobs amid the economic crisis. This quantitative study clarifies conflicting findings on whether organizational inertia is a better strategy during a disruptive environment. A total of 316 respondents working in diverse firms across various industries in Indonesia completed the survey, a response rate of 63.2%. The study further clarifies the roles of, and relationships between, organizational inertia, organizational creativity, organizational agility, and organizational resilience as potential determinants of firm performance in a disruptive environment. The findings confirm that organizational inertia strongly protects the organization's fundamental orientation, which can eventually confine the organization's ability to build adequately creative and adaptive responses; this fundamental orientation is built from path dependency, past success, and prolonged firm performance. Organizational inertia acts as a control mechanism that ensures the adequacy of the given responses. The term 'adequate' is important: being overly creative during a disruptive environment may be counterproductive, since it can burden firm performance. During a disruptive environment, organizations limit creativity by focusing on creativity that supports resilience, and new technology adoption is limited, since the costs of learning and implementation are perceived to be greater than the potential gains. The optimal path towards firm performance runs through organizational resilience, as in a disruptive environment the survival of the organization takes precedence over firm performance.

Keywords: disruptive environment, organizational agility, organizational creativity, organizational inertia, organizational resilience

Procedia PDF Downloads 114
860 Kosovar Teachers' Understanding of Literacy Education

Authors: Anemonë Zeneli

Abstract:

Classrooms composed of students with varied linguistic repertoires, in combination with new technologies, have shifted what it means to be literate and how literacy is taught. At the same time, definitions of literacy matter greatly as they shape literacy education curricula, national literacy agendas, and pedagogical choices. Grounded in the theoretical frameworks of New Literacy Studies and Critical Literacy, this research investigates how Kosovar teachers make sense of literacy. The study employed a qualitative research design involving classroom observations, teacher interviews, and document analysis in a public school in the capital city of Kosovo, Prishtina. Data was collected from 5 Albanian language teachers. Classroom observations allowed for the documentation of how teachers applied literacy and language pedagogies to their teaching. Teacher interviews provided insights into teachers’ understanding of literacy education and the rationale behind their chosen pedagogies. Document analysis, more specifically, lesson plan analysis, further explained teachers’ content and instructional choices. The findings suggest that teachers understand literacy as standardized language instruction. They spoke to the challenges of language instruction in standardized Albanian in a Gheg (dialect) dominant society. Teachers’ narratives described the tension that students face in navigating standardized language expectations while being unable to use their home (Gheg) literacies. Teachers’ narratives were imbued with moral contestation as they explained the lack of an infrastructure that allows students to apply their home language and literacies in the classroom. Furthermore, teachers expressed their insistence on teaching “the words of the book.” While this viewpoint on language and literacy is generally aligned with normative and colonial expectations on language, at the same time, it reveals teachers’ intention to ‘equip’ their students with skills and practices that they will be tested on. Some of the teachers also articulated the need for a pedagogy of correction that the work of upholding the standardized language variation necessitates. Here, teachers also utilized discourses of neoliberalism when discussing students’ English repertoire and its value in “opening doors” and advancement opportunities in life while further framing students’ home literacies, the Gheg dialect, in a deficit manner. If educators and policymakers are to make informed decisions about efforts to improve schools, it is important to improve our knowledge of what informs teachers’ pedagogical choices in teaching literacy. This study contributes to and expands the current knowledge base on teachers’ understanding of literacy education and their role in shaping literacy education. As schools continue to navigate (growing) diverse forms of literacy, this study highlights the importance of equipping educators with the knowledge and tools to apply literacy pedagogies that reflect the ever-shifting definitions of literacy education.

Keywords: literacy education, standardized language, critical narrative analysis, literacy teaching

Procedia PDF Downloads 19
859 Spatial Planning and Tourism Development with Sustainability Model of the Territorial Tourist with Land Use Approach

Authors: Mehrangiz Rezaee, Zabih Charrahi

Abstract:

In the last decade, with the increase in tourism destinations and tourism growth, we are witnessing the widespread impacts of tourism on the economy, environment and society. Tourism and its related economy are undergoing a transformation, and as one of the key pillars of business economics, tourism plays a vital role in the world economy. Activities related to tourism, and the provision of services appropriate to it in an area, require, like many economic sectors, a suitable context at their origin. Given the importance of the tourism industry and the tourism potential of Yazd province in Iran, a proper procedure is needed for prioritizing different areas for sound and efficient planning. One of the most important goals of planning is foresight and the creation of balanced development across geographical areas. This process requires an accurate study of the areas and their potential and actual capacities, as well as evaluation and understanding of the relationships between the indicators affecting the development of the region. At the global and regional levels, the development of tourist resorts and the proper distribution of tourism destinations are needed to counter environmental impacts and risks. The main objective of this study is the sustainable development of suitable tourism areas. Given that tourism activities in different territorial areas require operational zoning, this study evaluates territorial tourism using concepts such as land use, suitability and sustainable development. It is essential to understand the structure of tourism development and the spatial development of tourism using land use patterns, spatial planning and sustainable development. Tourism spatial planning can follow different approaches; however, the development of tourism, and its spatial development in particular, is complex, since tourist activities can be carried out in different areas for different purposes. Multipurpose areas are of great importance for tourism because they determine tourist flows. Therefore, by studying the development and determination of tourism suitability in relation to spatial development, this paper develops a model that describes the characteristics of tourism and makes it possible to plan its spatial development. The results of this research determine the suitability of multi-functional territorial tourism development in line with the spatial planning of tourism.

Keywords: land use change, spatial planning, sustainability, territorial tourist, Yazd

Procedia PDF Downloads 183
858 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile

Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz

Abstract:

Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area comprised a beach, a damaged seawall, a damaged railway, a wetland and an old neighborhood; local authorities therefore started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables: safety, nature and recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability and maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a finest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives, and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow was selected as the best solution. With this solution, the wetlands are significantly restored to their former configuration, and their dynamic behavior is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.

Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation

Procedia PDF Downloads 242
857 Dynamic Modeling of Advanced Wastewater Treatment Plants Using BioWin

Authors: Komal Rathore, Aydin Sunol, Gita Iranipour, Luke Mulford

Abstract:

Advanced wastewater treatment plants have complex biological kinetics, time-variant influent flow rates, and long processing times. These factors complicate the modeling and operational control of advanced wastewater treatment plants. However, the development of robust models for such plants has become necessary in order to increase their efficiency, reduce energy costs, and meet government discharge limits. Dynamic models were built for several wastewater treatment plants in Hillsborough County, Florida, using BioWin, a platform developed by EnviroSim (Canada). Control strategies for parameters such as mixed liquor suspended solids, recycle activated sludge, and waste activated sludge were developed so that the models matched plant performance. The models were tuned using influent and effluent data from the plants and their laboratories, and plant SCADA data were used to predict influent wastewater rates and concentration profiles as functions of time. The kinetic parameters were tuned based on sensitivity analysis and trial-and-error methods. The dynamic models were validated using experimental data for influent and effluent parameters, and dissolved oxygen measurements, coupled with Computational Fluid Dynamics (CFD) models, provided further validation. The BioWin models closely reproduced plant performance and predicted effluent behavior over extended periods. The models are useful for plant engineers and operators, who can make decisions ahead of time by predicting plant performance with the BioWin models. One of the important findings from the modeling was the effect of recycle and wastage ratios on the mixed liquor suspended solids. The model was also useful in determining the significant kinetic parameters for biological wastewater treatment systems.
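
To illustrate the kind of mass balance behind the recycle/wastage finding, a minimal sketch of a steady-state solids retention time (SRT) calculation; the formula is the standard textbook balance, and all values are invented, not taken from the Hillsborough County plants:

```python
# Hedged sketch: steady-state solids retention time,
# SRT = (V * X) / (Qw * Xw + Qe * Xe), the balance that links MLSS
# to the recycle/wastage decisions mentioned in the abstract.

V = 6000.0               # aeration volume, m^3
X = 3200.0               # MLSS, g/m^3 (i.e. mg/L)
Qw, Xw = 120.0, 8000.0   # wastage flow (m^3/d) and its solids conc.
Qe, Xe = 18000.0, 12.0   # effluent flow (m^3/d) and effluent solids

srt_days = (V * X) / (Qw * Xw + Qe * Xe)
print(f"SRT ~ {srt_days:.1f} d")
```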

Keywords: BioWin, kinetic modeling, flowsheet simulation, dynamic modeling

Procedia PDF Downloads 155
856 Numerical Study of Laminar Separation Bubble Over an Airfoil Using the γ-Reθt SST Turbulence Model at Moderate Reynolds Number

Authors: Younes El Khchine, Mohammed Sriti

Abstract:

A parametric study has been conducted to analyse the flow around the S809 wind-turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various AoA were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and on the aerodynamic performance of wind turbines. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil and accelerates the transition process; the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer, which considerably reduces the length of the separation bubble. An increase in the level of free-stream turbulence intensity leads to a decrease in the separation bubble length and an increase in the lift coefficient, while having negligible effects on the stall angle. As the AoA increases, the bubble on the suction surface of the airfoil moves upstream toward the leading edge, causing earlier laminar separation.

Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number

Procedia PDF Downloads 87
855 The Study of Tourists’ Behavior in Water Usage in Hotel Business: Case Study of Phuket Province, Thailand

Authors: A. Pensiri, K. Nantaporn, P. Parichut

Abstract:

Tourism is very important to the economy of many countries due to its large contribution to employment and income generation. However, the rapid growth of tourism also makes it one of the major water users, and it can therefore have a significant and detrimental impact on the environment. Understanding guest water-usage behavior can help hotels manage water for sustainable water resources management. This research presents a study of hotel guest water usage at two hotels, namely Hotel A (located in Kathu district) and Hotel B (located in Muang district) in Phuket Province, Thailand, as case studies. Primary and secondary data were collected from the hotel managers through interviews and questionnaires. The water flow rate was measured in situ for each water supply device in the standard room type at each hotel, including hand-washing faucets, bathroom faucets, showers and toilet flushes. Among the interview respondents (n = 204 for Hotel A and n = 244 for Hotel B), the majority were aged between 21 and 30 years (53% for Hotel A and 65% for Hotel B) and were foreign visitors (78% in Hotel A and 92% in Hotel B), mainly from America, France and Austria, travelling for tourism (63% in Hotel A and 55% in Hotel B). The data showed that water consumption ranged from 188 to 507 litres per overnight guest in Hotel A and from 383 to 415 litres per overnight guest in Hotel B. These figures exceed the water efficiency benchmark set for tropical regions by the International Tourism Partnership (ITP). It is recommended that guest water-saving initiatives be implemented at the hotels. Moreover, the results showed that guest satisfaction with the hotels was high: front office service received the top average scores of 4.35 in Hotel A and 4.20 in Hotel B, while luxury decoration and room cleanliness received the second-highest guest scores in Hotel A and Hotel B, respectively. These findings can be very useful for improving customer service satisfaction and guiding better hotel management.

Keywords: hotel, tourism, Phuket, water usage

Procedia PDF Downloads 257
854 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because local trends sometimes differ from the general one. This research focuses on a small area of central Italy overlooking the Adriatic Sea: the province of Macerata. The aim is to analyse spatial climate changes in precipitation and temperature over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were the basis for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To identify the best geostatistical technique for interpolation, cross-validation results were used. Among the methods analysed, co-kriging with altitude as the independent variable produced the best cross-validation results for all time periods, with a 'root mean square error standardized' close to 1, a 'mean standardized error' close to 0, and similar values for the 'average standard error' and the 'root mean square error'. The resulting maps were compared by subtraction between rasters, producing three maps of annual variation and three maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in mean annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation instead shows the opposite trend, with an average difference of about 35 mm from 1961-1990 to 1971-2000 and of about 60 mm from 1961-1990 to 1981-2010. Furthermore, the areal differences have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. The distribution is therefore more peaked for 1961/1990-1971/2000 and flatter, but with stronger growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
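As a sketch of the cross-validation diagnostics used above to rank the interpolators, the following computes the four statistics from leave-one-out predictions and their kriging standard errors; the arrays are made-up stand-ins for the station data. A well-specified model has a mean standardized error near 0, a root mean square error standardized near 1, and an average standard error close to the root mean square error.

```python
import numpy as np

def kriging_cv_stats(observed, predicted, kriging_se):
    """Leave-one-out cross-validation diagnostics for a kriging model."""
    err = predicted - observed
    std_err = err / kriging_se  # errors scaled by the kriging standard error
    return {
        "mean_standardized_error": np.mean(std_err),
        "rmse_standardized": np.sqrt(np.mean(std_err**2)),
        "rmse": np.sqrt(np.mean(err**2)),
        "average_standard_error": np.mean(kriging_se),
    }

# Made-up values standing in for leave-one-out results at 30 stations
rng = np.random.default_rng(0)
obs = rng.normal(13.0, 2.0, 30)          # e.g. mean annual temperature, deg C
se = np.full(30, 0.5)                    # kriging standard errors
pred = obs + rng.normal(0.0, 0.5, 30)    # predictions with ~0.5 deg C error
print(kriging_cv_stats(obs, pred, se))
```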

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 130
853 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case of Study in Lima

Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano

Abstract:

In the big urbanized cities of Latin America, motor vehicles have priority over non-motorized vehicles and pedestrians. This creates an important problem that affects people's health and quality of life: the lack of inclusion of pedestrians makes it difficult for them to move smoothly and safely, since the city has been planned around motor vehicle transit. Faced with the new trend toward sustainable and economical transport, the city is forced to develop infrastructure that incorporates pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and the optimization of the traffic light cycle, based on simulation in Synchro software, to improve the flow of non-motorized vehicles. The evaluation is microscopic; for this reason, field data were collected on vehicular, pedestrian, and non-motorized vehicle demand. Measured speeds and travel times were used to represent the current scenario, which contains the existing problem. These data were used to create a microsimulation model in Vissim software, which was then calibrated and validated so that its behavior resembles reality. The results of this model are compared with the efficiency parameters of the proposed model; these parameters are queue length, travel speed, and, mainly, the travel times of users at this intersection. The results show a 27% reduction in travel time, that is, an improvement of the proposed model over the current one for this major avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in users' quality of life.

Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time

Procedia PDF Downloads 177
852 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to generate electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized for efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas. Helium, the least dense of the three gases, simulated higher temperatures, whereas air, the densest gas, simulated lower temperatures. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature. The larger cyclone can be expected to achieve slightly higher efficiencies at elevated temperatures. However, both design methods led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these general tendencies are expected to be amplified, so that the difference between the two design methods will become more obvious. Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.
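The Classic Empirical Method family of cyclone calculations is typified by the Lapple model. As a hedged illustration, the sketch below computes the Lapple cut diameter d50 and the fractional collection efficiency at a given particle size, using assumed geometry and gas properties rather than the study's actual design figures.

```python
import math

def lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    """Lapple cut diameter d50 [m]: particle size collected at 50% efficiency.

    d50 = sqrt(9 * mu * W / (2 * pi * Ne * Vi * (rho_p - rho_g)))
    """
    return math.sqrt(9 * mu * inlet_width /
                     (2 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def lapple_efficiency(d, d50):
    """Lapple fractional collection efficiency for particle diameter d."""
    return 1.0 / (1.0 + (d50 / d) ** 2)

# Assumed values, not the study's design figures
mu = 1.8e-5         # gas viscosity, Pa.s (air near room temperature)
inlet_width = 0.05  # cyclone inlet width W, m
n_turns = 5.5       # effective number of gas turns Ne (assumed)
v_inlet = 15.0      # inlet velocity Vi, m/s
rho_p = 2500.0      # fly ash particle density, kg/m3
rho_g = 1.2         # gas density, kg/m3

d50 = lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g)
print(f"cut diameter d50 = {d50 * 1e6:.2f} um")
print(f"efficiency at 10 um: {lapple_efficiency(10e-6, d50):.1%}")
```

With these assumed inputs the cut diameter comes out near 2.5 μm, giving roughly 94% efficiency at 10 μm, consistent in magnitude with the efficiencies reported above.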

Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing

Procedia PDF Downloads 214
851 Reduction of Defects Using Seven Quality Control Tools for Productivity Improvement at Automobile Company

Authors: Abdul Sattar Jamali, Imdad Ali Memon, Maqsood Ahmed Memon

Abstract:

Production quality approaching zero defects is an objective of every manufacturing and service organization. To maintain and improve quality by reducing defects, organizations use statistical tools. Many such tools are available for assessing quality; among them, the traditional seven quality control (7QC) tools are widely used in manufacturing and in the automobile industry. In this study, the 7QC tools were applied at an automobile company in Pakistan. A preliminary survey was conducted for the implementation of the 7QC tools on the assembly line, during which two inspection points, the chassis line and the trim line, were selected for data collection. Defect data were collected at both lines with the aim of reducing defects and ultimately improving productivity. Each of the 7QC tools showed its benefits in the results: flow charts were developed for a better understanding of the inspection points used for data collection; check sheets supported the collection of defect data; histograms represented the severity levels of defects; Pareto charts showed the cumulative effect of defects; cause-and-effect diagrams were developed to find the root causes of each defect; scatter diagrams showed whether defects were increasing or decreasing; and p-control charts exposed out-of-control points beyond the limits, prompting corrective actions. The successful implementation of the 7QC tools at these inspection points led to a considerable reduction in defect levels: on the chassis line, defects were reduced from 132 to 13, a 90% reduction, and on the trim line from 157 to 28, an 82% reduction. Since the company had previously exercised only a few of the 7QC tools, it was not obtaining their full benefit; it is therefore suggested that the company establish a mechanism for applying the 7QC tools in every section.
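As an example of one of these tools, a Pareto chart ranks defect categories and overlays the cumulative percentage so the vital few causes stand out. The sketch below uses hypothetical categories and counts, not the company's data.

```python
import matplotlib.pyplot as plt

# Hypothetical defect categories and counts (illustrative only)
defects = {"paint scratch": 48, "loose fastener": 31, "trim misalignment": 22,
           "wiring fault": 12, "glass chip": 7, "other": 5}

items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
counts = [v for _, v in items]
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts, color="steelblue")
ax1.set_ylabel("Defect count")
ax1.tick_params(axis="x", rotation=45)

ax2 = ax1.twinx()  # cumulative-percentage line on a second axis
ax2.plot(labels, cumulative, color="firebrick", marker="o")
ax2.axhline(80, linestyle="--", color="grey")  # classic 80/20 reference line
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

plt.title("Pareto chart of assembly-line defects (illustrative data)")
plt.tight_layout()
plt.savefig("pareto.png")
```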

Keywords: check sheet, cause and effect diagram, control chart, histogram

Procedia PDF Downloads 327
850 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating these data from multiple sources and transferring them to the appropriate clients is fundamental to IoT development. Handling this huge number of devices, along with the huge volume of data, is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake up periodically or aperiodically, depending on the traffic load, to reduce energy consumption. Sometimes these devices become disconnected due to battery depletion, and when a node is not available, the IoT network provides incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance of a mobile device from the sink node becomes greater than allowed, the connection is lost, and other devices join the network to replace the broken-down or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data, without the actual cause of abnormal data being known. If data are of poor quality, decisions based on them are likely to be unsound. It is therefore highly important to process data and estimate their quality before using them in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods for analysing stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and their impact on data quality. This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model is built for sensors monitoring water quality, using DBSCAN clustering and data from weather sensors. An extensive study was conducted on the relationship between the data of weather sensors and sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster positions), and consistency. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV), as sketched below.
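A minimal sketch of the clustering and consistency steps, assuming synthetic readings in place of the actual water quality and weather streams: DBSCAN labels sparse points as noise (label -1), which are treated as outliers and removed, and the coefficient of variation is then computed over the retained readings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# Synthetic paired readings: [water temperature, air temperature]
normal = rng.normal(loc=[18.0, 22.0], scale=[0.8, 1.2], size=(200, 2))
faults = rng.uniform(low=[5.0, 5.0], high=[40.0, 40.0], size=(8, 2))
readings = np.vstack([normal, faults])

# DBSCAN: points with too few neighbours within eps are labelled -1 (noise)
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(readings)
clean = readings[labels != -1]
outliers = readings[labels == -1]
print(f"{len(outliers)} readings flagged as outliers and removed")

# Consistency via coefficient of variation (CoV) of the retained readings
cov = clean.std(axis=0, ddof=1) / clean.mean(axis=0)
print(f"CoV (water temp, air temp): {cov.round(3)}")
```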

Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)

Procedia PDF Downloads 141
849 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders arrive at random at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that combines the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, each containing parallel machines that can process certain operations. Many algorithms, such as genetic algorithms and simulated annealing, have been proposed to solve static Flexible Job Shop problems; however, their time efficiency is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select a proper composite dispatching rule based on the status of each work centre, as sketched below. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The simulation results show that the proposed framework achieves reasonable performance and time efficiency.
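A minimal sketch of the rule selection step, assuming "double-layer" means an online network paired with a target network as in double DQN (an interpretation, since the paper's exact architecture is not reproduced here); the rule set, feature count, and all names below are illustrative.

```python
import random
import torch
import torch.nn as nn

RULES = ["SPT", "EDD", "FIFO", "CR"]  # assumed composite dispatching rules

class QNet(nn.Module):
    """Maps work-centre status features to one Q-value per dispatching rule."""
    def __init__(self, n_features: int, n_rules: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_rules),
        )

    def forward(self, x):
        return self.net(x)

n_features = 8  # e.g. queue length, mean slack, machine utilisation, ...
online = QNet(n_features, len(RULES))
target = QNet(n_features, len(RULES))
target.load_state_dict(online.state_dict())  # synchronised target network

def select_rule(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the dispatching rule at a decision point."""
    if random.random() < epsilon:
        return random.randrange(len(RULES))
    with torch.no_grad():
        return int(online(state).argmax())

# Double-DQN training target (training loop itself omitted):
#   y = r + gamma * target(s_next)[online(s_next).argmax()]

state = torch.randn(n_features)  # placeholder for real status features
print("selected rule:", RULES[select_rule(state)])
```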

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 108
848 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solids removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes, the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer was injected at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h⁻¹. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance (σ²), two moments of the RTD, were calculated to compare the RTD characteristics of limestone with those of white dolomite, as illustrated in the sketch below. The results showed that the exit-age distribution of the tracer was most favourable at HLRs of 3.8 to 7.6 m h⁻¹ for limestone and at 3.8 m h⁻¹ for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than inside the white dolomite: at 3.8 m h⁻¹, all of the tracer left the white dolomite within 8 minutes, whereas the same amount took 10 minutes to leave the limestone. In conclusion, determining the optimal hydraulic loading rate, which achieves the best influent distribution over the filtration system, helps to assess the applicability of a material as filter media. Further work will examine the efficiency of limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h⁻¹.
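A minimal sketch of the moment calculations, assuming a synthetic tracer response in place of the measured RDD data: the exit-age distribution E(t) is the normalized concentration curve, MRT is its first moment, σ² its second central moment, and F(t) its running integral.

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

def rtd_moments(t, c):
    """RTD moments from a tracer response curve C(t).

    E(t) = C(t) / int(C dt)          exit-age distribution
    MRT  = int(t * E dt)             mean residence time
    var  = int((t - MRT)^2 * E dt)   variance (sigma^2)
    F(t) = int_0^t E dt              cumulative distribution
    """
    e = c / trapezoid(c, t)
    mrt = trapezoid(t * e, t)
    var = trapezoid((t - mrt) ** 2 * e, t)
    f = cumulative_trapezoid(e, t, initial=0.0)
    return mrt, var, f

# Synthetic, skewed washout curve (illustrative, not the measured data)
t = np.linspace(0.0, 20.0, 400)   # minutes
c = t**2 * np.exp(-t / 2.0)       # arbitrary concentration units

mrt, var, f = rtd_moments(t, c)
print(f"MRT = {mrt:.2f} min, sigma^2 = {var:.2f} min^2, F(end) = {f[-1]:.3f}")
```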

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 278
847 A Paradigm Shift towards Personalized and Scalable Product Development and Lifecycle Management Systems in the Aerospace Industry

Authors: David E. Culler, Noah D. Anderson

Abstract:

Integrated systems for product design, manufacturing, and lifecycle management are difficult to implement and customize. Commercial software vendors, including CAD/CAM and third-party PDM/PLM developers, create user interfaces and functionality that allow their products to be applied across many industries. The result is that systems become overloaded with functionality, are difficult to navigate, and use terminology that is unfamiliar to engineers and production personnel. For example, manufacturers of automotive, aeronautical, electronics, and household products use similar but distinct methods and processes. Furthermore, each company tends to have its own preferred tools and programs for controlling work and information flow and for connecting design, planning, and manufacturing processes to business applications. This paper presents a methodology and a case study that address these issues and suggest that, in the future, more companies will develop personalized applications that fit the natural way their business operates. A functioning system has been implemented at a highly competitive U.S. aerospace tooling and component supplier that works with many prominent aircraft manufacturers around the world, including The Boeing Company, Airbus, Embraer, and Bombardier Aerospace. During the last three years, the program has produced significant benefits, such as the automatic creation and management of component and assembly designs (parametric models and drawings), the extensive use of lightweight 3D data, and changes to the way projects are executed from beginning to end. CATIA (CAD/CAE/CAM) and a variety of programs developed in C#, VB.Net, HTML, and SQL make up the current system. The web-based platform facilitates collaborative work across multiple sites around the world and improves communications with customers and suppliers. This work demonstrates that the creative use of Application Programming Interface (API) utilities, libraries, and methods is key to automating many time-consuming tasks and linking applications together.

Keywords: PDM, PLM, collaboration, CAD/CAM, scalable systems

Procedia PDF Downloads 176