Search results for: unsteady non-equilibrium distribution functions

698 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor and is considered an artefact, presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make much greater use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification and assume that the data arise from unidentified heterogeneous sources. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used constructions of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
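
As a rough illustration of the modelling idea (not the authors' implementation), the sketch below runs a few simplified CRP-based Gibbs sweeps for an infinite mixture of simple linear regressions on synthetic stutter-ratio data; the fixed noise variance, plug-in least-squares fits and crude new-cluster term are simplifying assumptions rather than the full conjugate Dirichlet process machinery.

```python
# Minimal sketch (not the authors' code): simplified CRP-based Gibbs sweeps for
# an infinite mixture of simple linear regressions on synthetic stutter data.
import numpy as np

rng = np.random.default_rng(0)

# Toy "stutter ratio vs parent allele height" data drawn from two regimes.
n = 200
x = rng.uniform(100, 1000, n)                       # parent allele height
true_z = rng.integers(0, 2, n)
y = np.where(true_z == 0, 0.02 + 0.00005 * x, 0.06 + 0.00002 * x)
y += rng.normal(0, 0.005, n)                        # observed stutter ratio

alpha, sigma = 1.0, 0.005                           # CRP concentration, assumed noise sd
z = np.zeros(n, dtype=int)                          # start with a single cluster

def fit(idx):
    """Least-squares intercept/slope for the points currently in one cluster."""
    X = np.column_stack([np.ones(idx.size), x[idx]])
    beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    return beta

def lik(i, beta):
    """Gaussian likelihood of point i under a cluster's regression line."""
    mu = beta[0] + beta[1] * x[i]
    return np.exp(-0.5 * ((y[i] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for sweep in range(20):
    for i in range(n):
        z[i] = -1                                   # remove point i from its cluster
        labels, counts = np.unique(z[z >= 0], return_counts=True)
        probs = []
        for lab, cnt in zip(labels, counts):        # existing table: size * likelihood
            probs.append(cnt * lik(i, fit(np.where(z == lab)[0])))
        probs.append(alpha * lik(i, np.array([y.mean(), 0.0])))  # new table (crude prior predictive)
        probs = np.array(probs) / np.sum(probs)
        pick = rng.choice(len(probs), p=probs)
        z[i] = labels[pick] if pick < len(labels) else (z.max() + 1)

print("clusters found:", np.unique(z).size)
```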

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 324
697 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load bearing columns of well-compacted, coarse-grained material and (2) the improvements to the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in the composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance and bearing capacity were quantified. It was found that application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after finishing the column installation, allowing the horizontal stress to relax. Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.
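
As a purely illustrative post-processing step (the numbers below are hypothetical, not PLAXIS output), the lateral earth pressure coefficient discussed above can be read off as the ratio of horizontal to vertical effective stress at increasing radial distance from the column axis:

```python
# Illustrative post-processing sketch (hypothetical values, not PLAXIS results):
# lateral earth pressure coefficient K* = sigma'_h / sigma'_v versus distance
# from the stone column axis after installation.
import numpy as np

r = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                   # distance from column axis (m), assumed
sigma_h_eff = np.array([95.0, 80.0, 65.0, 55.0, 50.0])    # horizontal effective stress (kPa), assumed
sigma_v_eff = np.full_like(sigma_h_eff, 100.0)            # vertical effective stress (kPa), assumed

K_star = sigma_h_eff / sigma_v_eff
for ri, ki in zip(r, K_star):
    print(f"r = {ri:4.1f} m   K* = {ki:.2f}")             # K* decays towards the at-rest value with distance
```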

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 373
696 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generation at different distances. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete Fourier transform (FFT) of currents and voltages and also the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure described previously to calculate signature coefficients is employed, but this time considering hypothetical fault distances where the fault could appear; in this step the fault distance is unknown. Iterating the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work are presented.
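
A minimal sketch of the final location step is given below; it is not the authors' implementation, and the two signature-coefficient curves are hypothetical linear trends standing in for the quantities that would really be computed from the FFTs of the measured end currents and voltages.

```python
# Illustrative sketch of the final step: the fault-map trace (step 2) and the
# hypothetical-distance sweep (step 3) are intersected to estimate the fault
# distance. Both curves here are made-up linear trends, not computed signatures.
import numpy as np

line_length = 49.0                                   # m, as in the paper
d = np.linspace(0.0, line_length, 491)               # candidate fault distances

curve_map = 0.8 - 0.012 * d                          # fault-map trace (step 2), hypothetical
curve_hyp = 0.2 + 0.008 * d                          # hypothetical-distance sweep (step 3), hypothetical

idx = np.argmin(np.abs(curve_map - curve_hyp))       # intersection of the two curves
print(f"Estimated fault distance: {d[idx]:.1f} m")   # -> 30.0 m with these made-up curves
```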

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 134
695 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education

Authors: Clea Southall

Abstract:

Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools - delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity, before focusing on individual neurons. Students were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function. They viewed animal brains and created ‘pipe-cleaner neurons’ which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing their confidence in delivering future lessons on the nervous system.
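
The abstract does not state which statistical test produced the p = 0.002 comparison; the sketch below simply illustrates one plausible analysis (a Welch t-test on simulated brain-map scores with the reported means and spreads), not the study's actual data or method.

```python
# Illustrative group comparison of older vs younger pupils' brain-map scores;
# the scores below are simulated around the reported means, not the study data,
# and a Welch t-test is only one possible choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
older = np.clip(np.round(rng.normal(1.5, 1.0, 120)), 0, 4)     # regions labelled correctly (0-4)
younger = np.clip(np.round(rng.normal(0.8, 0.96, 120)), 0, 4)

t, p = stats.ttest_ind(older, younger, equal_var=False)        # Welch's t-test
print(f"older: {older.mean():.2f} +/- {older.std(ddof=1):.2f}, "
      f"younger: {younger.mean():.2f} +/- {younger.std(ddof=1):.2f}, p = {p:.4f}")
```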

Keywords: education, neuroscience, primary school, undergraduate

Procedia PDF Downloads 207
694 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization

Authors: Jessica Gu, Yu Chen

Abstract:

Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, creation (or generation) and diffusion (or sharing/exchange) of knowledge are among an organization's primary concerns from a problem-solving perspective; however, the optimal distribution of effort between knowledge creation and diffusion is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming at elucidating how the intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and exploring what exogenous interventions by the policy maker and endogenous adjustments of the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments are conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing knowledge creation or diffusion. One of the interesting findings reveals a non-monotonic effect on organizational performance under a turbulent environment but a monotonic effect under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous knowledge creation and diffusion choice adjustments can be identified for achieving optimized organizational performance. Additional influential variables are further discussed, and future work directions are finally elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how the bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions bring challenges to the organization system. Meanwhile, it serves as a roadmap and offers macro-level and long-term insights to policy makers without interrupting real organizational operations, sacrificing huge overhead costs, or introducing undesired panic to employees.
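
A toy sketch of the kind of agent-based model described is given below; every rule and parameter in it (creation probability, transfer rule, turbulence as uniform knowledge decay, performance as total knowledge) is an illustrative assumption, not the authors' specification.

```python
# Toy agent-based sketch (not the authors' model): each knowledge worker
# repeatedly chooses to create new knowledge or to diffuse knowledge to a
# random colleague; organizational performance is taken as total knowledge.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_steps = 50, 200
p_create = 0.4                       # endogenous choice: probability of creating vs diffusing
turbulence = 0.02                    # exogenous environment: fraction of knowledge lost per step

knowledge = rng.uniform(0, 1, n_agents)
performance = []

for t in range(n_steps):
    for i in range(n_agents):
        if rng.random() < p_create:
            knowledge[i] += rng.exponential(0.05)          # create new knowledge
        else:
            j = rng.integers(n_agents)                     # diffuse: share with a random colleague
            transfer = 0.5 * (knowledge[i] - knowledge[j])
            if transfer > 0:
                knowledge[j] += transfer                   # receiver closes part of the gap
    knowledge *= (1.0 - turbulence)                        # environmental turbulence erodes knowledge
    performance.append(knowledge.sum())

print(f"final organizational performance: {performance[-1]:.1f}")
```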

Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution

Procedia PDF Downloads 237
693 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by means of solving Maxwell's equations. In this context, integral methods are emerging as an important class of methods to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution and reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuit approach offers the most scalable way to develop an integral method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that the phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
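
The chemical-potential dependence that underlies this tunability can be sketched with the widely used intraband Kubo approximation for graphene's sheet conductivity; the expression, the assumed scattering time and the e^(-iωt) time convention below are standard textbook choices rather than values taken from the paper.

```python
# Sketch (standard intraband Kubo approximation, e^(-i w t) convention; not
# extracted from the paper): graphene sheet conductivity versus chemical
# potential at 1 THz, illustrating the tunability used for phase control.
import numpy as np

e = 1.602e-19          # C
kB = 1.381e-23         # J/K
hbar = 1.055e-34       # J*s
T = 300.0              # K
tau = 1e-13            # s, assumed carrier scattering time
w = 2 * np.pi * 1e12   # rad/s, 1 THz

def sigma_intra(mu_c_eV):
    """Intraband (Drude-like) sheet conductivity of graphene, in siemens."""
    x = mu_c_eV * e / (kB * T)
    return (1j * e**2 * kB * T / (np.pi * hbar**2 * (w + 1j / tau))) \
           * (x + 2.0 * np.log(1.0 + np.exp(-x)))

for mu_c in (0.1, 0.3, 0.5, 0.7):                    # eV, tuned electrostatically
    s = sigma_intra(mu_c)
    print(f"mu_c = {mu_c:.1f} eV  |sigma| = {abs(s):.3e} S  phase = {np.degrees(np.angle(s)):.1f} deg")
```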

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 174
692 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of Municipality of Caraguatatuba SP-Brazil

Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira

Abstract:

Environmental vulnerability studies are essential for prioritising actions to reduce disaster risk. The aim of this study is to analyze the socio-environmental vulnerability obtained through a census survey, followed by both a statistical analysis (PCA in SPSS/IBM) and a spatial analysis in GIS (ArcGIS/ESRI), taking as a case study the Municipality of Caraguatatuba-SP, Brazil. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU) and the Environmental Protection Zone (ZPA). For the mapping of the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) facing disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). Considering criticality, the variables of greatest influence were related to literate persons responsible for the household, literate persons aged 5 years or more, persons aged 60 years or more, and the income of the person responsible for the household. In the support capacity analysis, the predominant influence was good household infrastructure in districts with low population density, together with the presence of neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes cover the ZEIS and ZPA zones, whose zoning includes areas occupied by low-income population, the presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects the inadequate use of urban land in relation to the spatial distribution of the population and the territorial infrastructure, which favors the increase of disaster risk. It can be concluded that the study made it possible to observe the convergence between the vulnerability analysis and the areas classified in the urban zoning. The occupation of areas unsuitable for housing due to their risk characteristics was confirmed, leading to the conclusion that the applied methodologies are agile instruments to support actions for disaster risk reduction.
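
A small sketch of the PCA step used to condense census variables into a criticality score is given below; the study itself used SPSS, and the variable names and values here are illustrative assumptions only.

```python
# Illustrative sketch of the PCA step used to summarise census variables into
# a vulnerability (criticality) score; the study used SPSS, and the variables
# and toy values below are assumptions for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_tracts = 120
census = np.column_stack([
    rng.uniform(0.6, 1.0, n_tracts),    # share of literate household heads
    rng.uniform(0.7, 1.0, n_tracts),    # share of literate persons aged 5+
    rng.uniform(0.0, 0.3, n_tracts),    # share of persons aged 60+
    rng.uniform(300, 5000, n_tracts),   # income of household head (currency units)
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(census))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))

criticality = scores[:, 0]              # first component used as the criticality index
print("first five criticality scores:", criticality[:5].round(2))
```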

Keywords: socio-environmental vulnerability, urban zoning, disaster risk reduction, methodologies

Procedia PDF Downloads 295
691 Extent of Fruit and Vegetable Waste at Wholesaler Stage of the Food Supply Chain in Western Australia

Authors: P. Ghosh, S. B. Sharma

Abstract:

The growing problem of food waste is causing unacceptable economic, environmental and social impacts across the globe. In Australia, food waste is estimated at about AU$8 billion per year; however, information on the extent of wastage at different stages of the food value chain from farm to fork is very limited. This study aims to identify the causes for, and extent of, food waste at the wholesaler stage of the food value chain in the state of Western Australia. It also explores approaches applied by the wholesalers to reduce and utilize food waste. The study was carried out at the Perth city market in Canning Vale, the main wholesale distribution centre for fruits and vegetables in Western Australia. A survey questionnaire was prepared and shared with 51 wholesalers, covering 10 targeted questions on the quantity of produce (fruits and vegetables) received and further supplied, reasons for waste generation, and innovations applied or being considered to reduce and utilize food waste. Data were analysed using the Statistical Package for the Social Sciences (SPSS version 21). Among the wholesalers, 52% were primary wholesalers (buying produce directly from growers) and 48% were secondary wholesalers (buying produce in bulk from major wholesalers and supplying the local retail market, caterers, and customers with specific requirements). Average fruit and vegetable waste was 180 kilograms per week per primary wholesaler and 30 kilograms per week per secondary wholesaler. Based on this survey, the fruit and vegetable waste at the wholesaler stage was estimated at about 286 tonnes per year. The secondary wholesalers distributed pre-ordered commodities, which minimized the potential for waste. A non-parametric test (Mann-Whitney test) was carried out to assess the contributions of wholesalers to waste generation. Over 56% of secondary wholesalers generally had nothing to bin as waste. Pearson's correlation coefficient analysis showed a positive correlation (r = 0.425; P = 0.01) between the quantity of produce received and the waste generated. Low market demand was the predominant reason identified by the wholesalers for waste generation. About a third of the wholesalers suggested that high cosmetic standards for fruits and vegetables - appearance, shape, and size - should be relaxed to reduce waste. Donation of unutilized fruits and vegetables to charity was overwhelmingly (95%) considered one of the best options for utilization of discarded produce. The extent of waste at other stages of the fruit and vegetable supply chain is currently being studied.
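
The two tests named above can be reproduced in a few lines; the sketch below runs them on simulated wholesaler data (the gamma-distributed waste figures and uniform supply figures are assumptions, not the survey's raw data).

```python
# Sketch of the two tests named above (Mann-Whitney U and Pearson correlation),
# run on simulated wholesaler data; the figures are not the survey's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
primary_waste = rng.gamma(shape=3.0, scale=60.0, size=26)     # kg/week, roughly 180 on average
secondary_waste = rng.gamma(shape=1.5, scale=20.0, size=25)   # kg/week, roughly 30 on average

u, p_u = stats.mannwhitneyu(primary_waste, secondary_waste, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p_u:.4f}")

received = np.concatenate([rng.uniform(500, 3000, 26), rng.uniform(100, 800, 25)])  # kg/week received
waste = np.concatenate([primary_waste, secondary_waste])
r, p_r = stats.pearsonr(received, waste)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```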

Keywords: food waste, fruits and vegetables, supply chain, waste generation

Procedia PDF Downloads 308
690 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" derived from other available spatial products. The model is then generalized over grids where the soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increasing availability of spatial covariates, national and continental DSM initiatives are continuously multiplying. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or on pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries; however, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising, offering tools to improve and monitor soil quality at the national, EU and global levels.
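
As a compact sketch of the generic DSM workflow just described (calibrate a model on point observations plus exhaustive covariates, then predict on unsampled grid cells), the example below uses a random forest, which is only one of the many possible ML choices, on entirely synthetic data.

```python
# Compact sketch of the generic DSM workflow: calibrate an ML model on point
# soil observations and exhaustive covariates, then predict on a grid.
# Random forest is one common choice; all data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_obs = 300
covariates = np.column_stack([
    rng.uniform(0, 1000, n_obs),    # elevation from a DEM (m)
    rng.uniform(5, 25, n_obs),      # mean annual temperature from a climate grid (deg C)
    rng.uniform(0, 0.8, n_obs),     # NDVI from remote sensing
])
soc = 40 - 0.02 * covariates[:, 0] + 30 * covariates[:, 2] + rng.normal(0, 3, n_obs)  # toy soil organic carbon

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv_r2 = cross_val_score(model, covariates, soc, cv=5, scoring="r2")
print("cross-validated R2:", cv_r2.round(2))

model.fit(covariates, soc)
grid = np.column_stack([rng.uniform(0, 1000, 5), rng.uniform(5, 25, 5), rng.uniform(0, 0.8, 5)])
print("predicted SOC on five unsampled grid cells:", model.predict(grid).round(1))
```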

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 180
689 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world is moving towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low health index of transformers poses greater risks for safe and reliable operation. Therefore, to meet the rising demands of the power system and transformer performance, researchers are being prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with bio-degradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to the conventional mineral oils, owing to their numerous inherent benefits; namely, higher fire and flash points, and being environment-friendly in nature. Moreover, the addition of nanoparticles in the dielectric fluid further serves to improve the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment and a 2D space dimension, three different oil samples have been modelled, and the electric field distribution is computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample has been modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and have been modelled as alumina (Al₂O₃). The geometry has been modelled according to IEC standard 60897, with a standard electrode gap distance of 25 mm. For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m and 5.62×10⁶ V/m, respectively. It is observed that, for the unmodified samples, vegetable oils have a greater dielectric strength compared to the conventionally used mineral oils because of their higher flash points and higher values of relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence serves to improve the dielectric properties of the medium.
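
A quick closed-form sanity check of why the permittivity contrast matters is the textbook result for a small dielectric sphere in a uniform field; this is not the COMSOL model, and the relative permittivities below are typical assumed values.

```python
# Closed-form sanity check (not the COMSOL model): field inside and just
# outside a small dielectric sphere (alumina nanoparticle) in a uniform
# background field, for typical assumed relative permittivities.
eps_particle = 9.8                        # alumina (Al2O3), typical value
oils = {"mineral oil": 2.2, "vegetable ester": 3.1}   # typical assumed values
E0 = 4.0e6                                # V/m, nominal background stress in the gap (assumed)

for name, eps_oil in oils.items():
    E_in = 3 * eps_oil / (eps_particle + 2 * eps_oil) * E0         # uniform field inside the sphere
    E_pole = 3 * eps_particle / (eps_particle + 2 * eps_oil) * E0  # field at the pole, oil side
    print(f"{name:16s} E_inside = {E_in:.2e} V/m   E_pole(oil side) = {E_pole:.2e} V/m")
```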

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 138
688 Phosphate Tailings in View of a Better Waste Disposal And/or Valorization: Case of Tunisian Phosphates Mines

Authors: Mouna Ettoumi, Jouini Marouen, Carmen Mihaela Neculita, Salah Bouhlel, Lucie Coudert, Mostafa Benzaazoua, Y. Taha

Abstract:

In the context of sustainable development and the circular economy, waste valorization is considered a promising alternative to overcome issues related to waste disposal or elimination. The aim of this study is to evaluate the potential use of phosphate sludges (tailings) from the Kef Shfeir mine site (Gafsa, Tunisia) as an alternative material in the production of fired bricks. To do so, representative samples of raw phosphate treatment sludges were collected and characterized for their physical, chemical, mineralogical and environmental properties. Then, the raw materials were fired at different temperatures (900°C, 1000°C, and 1100°C) for brick making. Afterward, the fired bricks were characterized for their physical (particle size distribution, density, and plasticity), chemical (XRF and digestion), mineralogical (XRD) and mechanical (flexural strength) properties as well as for their environmental behavior (TCLP, SPLP, and CTEU-9) to ensure that they meet the required construction standards. Results showed that the raw materials had low density (2.47 g/cm³), were non-plastic and were mainly composed of fluorapatite (15.6%), calcite (23.1%) and clays (22.2%, mainly as heulandite, vermiculite and palygorskite). With respect to the environmental behavior, all metals (e.g., Pb, Zn, As, Cr, Ba, Cd) complied with the requirements set by the USEPA. In addition, the fired bricks had varying porosity (9-13%), firing shrinkage (5.2-7.5%), water absorption (12.5-17.2%) and flexural strength (3.86-13.4 MPa). Notably, an improvement in the properties (porosity, firing shrinkage, water absorption, and flexural strength) of the manufactured fired bricks was observed with the increase of firing temperature from 900 to 1100°C. All the measured properties complied with the construction norms and requirements. Moreover, regardless of the firing temperature, the environmental behavior of metals met the requirements of the USEPA standards. Finally, fired bricks could be produced at high temperatures (1000°C) from 100% phosphate sludge without any substitution or addition of chemical agents or binders. This sustainable brick-making process could be a promising approach for the phosphate company to partially manage these wastes, which are considered "non-profitable" for the moment, and to preserve soils that are currently being exploited.

Keywords: phosphate treatment sludge, mine waste, fired bricks, waste valorization

Procedia PDF Downloads 201
687 Assessing Motional Quotient for All Round Development

Authors: Zongping Wang, Chengjun Cui, Jiacun Wang

Abstract:

The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the multiple intelligence theory, there are eight distinct types of intelligence. One of them is bodily-kinaesthetic intelligence, which relates to an individual's capacity to control his or her body and to work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive and solve functional problems through motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer directly or indirectly to bodily capacity. Inspired by these two intelligence concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength, which is the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercise. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to finish a movement or change of body position, the duration for which muscles can be maintained in a working state, etc. Body strength reflects the objective side of MI. (2) The level of psychological willingness towards physical activity. It is subjective and is determined by an individual's self-consciousness regarding physical activity and resistance to fatigue. As such, we call it subjective MI. Subjective MI can be improved through education and appropriate social events, and its improvement can lead to that of objective MI. A quantitative score of an individual's MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one's body strength potential. Physical training, in general, makes people stronger, faster and swifter. Diet and lifestyle have a direct impact on health. Family and social environment largely affect one's passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can be used to assess MQ objectively and quantitatively. Different assessment systems should be applied to different groups of people according to their ages and genders. Field tests, laboratory tests and questionnaires are essential components of MQ assessment. A scientific interpretation of the MQ score is part of an MQ assessment system, as it will help an individual to improve his or her MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that IQ and EQ study alone is not sufficient for an individual's all-round development. The significance of the MQ study is that it complements IQ and EQ study. MQ reflects an individual's mental as well as bodily level of intelligence in physical activities. It is well known that the American Springfield College seal includes the Luther Gulick triangle with the words "spirit," "mind," and "body" written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China. By now, six prestigious universities in China have established research centers on MQ and its assessment.

Keywords: motional Intelligence, motional quotient, multiple intelligence, motor intelligence, all round development

Procedia PDF Downloads 158
686 Evaluating the Characteristics of Paediatric Accidental Poisonings

Authors: Grace Fangmin Tan, Elaine Yiling Tay, Elizabeth Huiwen Tham, Andrea Wei Ching Yeo

Abstract:

Background: While accidental poisonings in children may seem unavoidable, knowledge of the circumstances surrounding such incidents and identification of risk factors are important for the development of secondary prevention strategies. Some risk factors include the age of the child, lack of adequate supervision and improper storage of substances. The aim of this study is to assess the risk factors and circumstances influencing outcomes in these children. Methodology: A retrospective medical record review of all accidental poisoning cases presenting to the Children’s Emergency at National University Hospital (NUH), Singapore between January 2014 and December 2015 was conducted. Information on demographics, poisoning circumstances and clinical outcomes was collected. Results: Ninety-nine of a total of 186 poisoning cases were accidental ingestions, with a mean age of 4.7 years (range 0.4 to 18.3 years). The gender distribution was fairly even, with 52 (52.5%) females and 47 (47.5%) males. Seventy-nine (79.8%) of the ingestions were self-administered by the child; in the remaining 20 cases (20.2%), the substance was administered erroneously by caregivers, with 12/20 (60.0%) given the wrong drug dose and 8/20 (40.0%) given the wrong substance. Self-administration was associated with presentation to the ED within 12 hours (p=0.027, OR 6.65, 95% CI 1.24-35.72). Notably, 94.9% of the cases involved substances kept within reach of the child. Sixty-nine (82.1%) had the substance kept in the original container, 3 (3.6%) in food containers, 8 (9.5%) in other containers and 4 (4.8%) without a container. Of the 50 cases with information on labelling, 40/50 (80.0%) were accurately labelled, 2/50 (4.0%) wrongly labelled, and 8/50 (16.0%) were unlabelled. Implicated substances included personal care products (11.1%), household cleaning products (3.0%), and different classes of drugs such as paracetamol (22.2%), antihistamines (17.2%) and sympathomimetics (8.1%). Children under 3 years of age were 4.8 times more likely to be poisoned by household substances than children over 3 years of age (p=0.009, 95% CI 1.48-15.77). Prehospital interventions were more likely to have been performed in poisonings with household substances (p=0.005, OR 6.12, 95% CI 1.73-21.68). Fifty-nine (59.6%) were asymptomatic, 34 (34.3%) had a Poisoning Severity Score (PSS) grade of 1 (minor) and 6 (6.1%) grade 2 (moderate). Older children were 9.3 times more likely to be symptomatic (p<0.001, 95% CI 3.15-27.25). Thirty (32%) required admission. Conclusion: A significant proportion of accidental poisoning cases were due to medication administration errors by caregivers, which should be preventable. Risk factors for accidental poisoning included lack of adequate caregiver supervision, improper labelling and young age of the child. There is an urgent need to improve caregiver counselling during medication dispensing as well as to educate caregivers on basic child safety measures in the home to prevent future accidental poisonings.
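
The sketch below shows how figures such as "OR 6.65, 95% CI 1.24-35.72" can be obtained from a 2x2 table with the Wald method; the counts used are hypothetical, not the study's data.

```python
# Sketch of an odds-ratio calculation with a Wald 95% confidence interval;
# the 2x2 counts below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

# rows: self-administered vs caregiver-administered
# cols: presented within 12 h vs later (hypothetical counts)
a, b = 70, 9
c, d = 10, 10

or_ = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
z = stats.norm.ppf(0.975)
lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log_or)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```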

Keywords: accidental, caregiver, paediatrics, poisoning

Procedia PDF Downloads 208
685 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, taking into account that these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to reach a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different types of block copolymers such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA) or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM) by using different grafting strategies. Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and with the aim of reaching well-ordered nanostructured composites, different annealing methods were used. Additionally, the nanoparticle amount was also varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Different characterization methods are nowadays used to investigate the magnetic properties of nanometer-scale electronic devices; two such techniques have been used here to characterize the synthesized nanocomposites. First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, taking into account that this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed by vibrating sample magnetometry and a superconducting quantum interference device. The latter demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find possible applications in the field of dye-sensitized solar cells and electronic nanodevices.

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 259
684 Depositional Environment and Diagenetic Alterations, Influences of Facies and Fine Kaolinite Formation Migration on Sandstones’ Reservoir Quality, Sarir Formation, Sirt Basin Libya

Authors: Faraj M. Elkhatri, Hana Ali Allafi

Abstract:

This study addresses the spatial and temporal distribution of diagenetic alterations and their impact on the reservoir quality of the Sarir Formation (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on reservoir quality of the Sarir Formation, Sirt Basin, Libya; these controls are based on lithology and grain size as well as authigenic clay mineral types and their distributions. A petrographic investigation of five sandstone wells in the study area concentrated on the main rock components and the parameters that may impact the reservoirs. The main authigenic clay minerals are kaolinite and dickite, as confirmed by XRD analysis of the clay fraction; kaolinite and dickite are extensively present in all wells in high amounts. Traces of detrital smectite and lesser amounts of illitized mud-matrix were also observed in SEM images. Thin clay layers present as clay-grain coatings at some depths are interpreted as remains of dissolved clay matrix partly transformed into kaolinite adjacent to and towards the pore throats. This may affect the pore throats of this sandstone, most of which are open and relatively clean, with some fine material formed in occluded pores. This material is identified by EDS analysis as collections of not only kaolinite booklets, but also small, disaggregated kaolinite platelets derived from the disaggregation of larger kaolinite booklets. These patches of kaolinite not only fill the pores but also coat some of the surrounding framework grains. Quartz grains, often enlarged by authigenic quartz overgrowths, partially occlude and reduce porosity. Scanning electron microscopy (SEM) was conducted on the post-test samples to examine any mud filtrate particles that may be present in the pore throats, and semi-quantitative elemental data on selected minerals observed during the SEM study were obtained using an Energy Dispersive Spectroscopy (EDS) unit. The samples showed mostly clean, open pore throats, with limited occlusion by kaolinite.

Keywords: pore throat, formation damage, porosity loss, solids plugging

Procedia PDF Downloads 54
683 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
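
A minimal sketch of the Ordinary Kriging step on a thinned station network is shown below; the coordinates and groundwater levels are synthetic rather than the Berambadi data, and pykrige is only one common implementation choice.

```python
# Minimal sketch of Ordinary Kriging on thinned station networks; the well
# coordinates and groundwater levels are synthetic, not the Berambadi data.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(6)
n_wells = 121
x = rng.uniform(0, 10, n_wells)                            # easting (km)
y = rng.uniform(0, 10, n_wells)                            # northing (km)
gwl = 20 + 0.8 * x - 0.5 * y + rng.normal(0, 1, n_wells)   # groundwater level (m), toy trend + noise

gridx = np.linspace(0, 10, 25)
gridy = np.linspace(0, 10, 25)

for frac in (0.15, 0.50, 1.00):                            # station densities, as in the study design
    keep = rng.choice(n_wells, size=int(frac * n_wells), replace=False)
    ok = OrdinaryKriging(x[keep], y[keep], gwl[keep], variogram_model="spherical")
    z_pred, var = ok.execute("grid", gridx, gridy)         # prediction and kriging variance
    print(f"density {frac:4.0%}: mean kriging variance = {var.mean():.2f}")
```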

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 52
682 Characterization of Petrophysical Properties of Reservoirs in Bima Formation, Northeastern Nigeria: Implication for Hydrocarbon Exploration

Authors: Gabriel Efomeh Omolaiye, Jimoh Ajadi, Olatunji Seminu, Yusuf Ayoola Jimoh, Ubulom Daniel

Abstract:

Identification and characterization of the petrophysical properties of reservoirs in the Bima Formation were undertaken to understand their spatial distribution and impact on hydrocarbon saturation in this highly heterolithic siliciclastic sequence. The study was carried out using nine well logs from the Maiduguri and Baga/Lake sub-basins within the Borno Basin. The different log curves were combined to decipher the lithological heterogeneity of the serrated sand facies and to aid the geologic correlation of sand bodies within the sub-basins. Evaluation of the formation reveals largely undifferentiated to highly serrated and lenticular sand bodies, from which twelve reservoirs named Bima Sand-1 to Bima Sand-12 were identified. The reservoir sand bodies are bifurcated by shale beds, which reduce their thicknesses variably from 0.61 to 6.1 m. The shale content in the sand bodies ranges from 11.00% (relatively clean) to as high as 88.00%. The formation also has variable porosity, with calculated total porosity ranging from as low as 10.00% to as high as 35.00%. Similarly, effective porosity values span between 2.00 and 24.00%. The irregular porosity values also account for the wide range of field average permeability estimates computed for the formation, which range from 0.03 to 319.49 mD. Hydrocarbon saturation (Sh) in the thin lenticular sand bodies also varies from 40.00 to 78.00%. Hydrocarbon was encountered in three intervals in Ga-1, four intervals in Da-1, two intervals in Ar-1, and one interval in Ye-1. The Ga-1 well encountered a 30.78 m hydrocarbon column in 14 thin sand lobes in Bima Sand-1, with thicknesses from 0.60 m to 5.80 m and an average saturation of 51.00%, while Bima Sand-2 intercepted a 45.11 m hydrocarbon column in 12 thin sand lobes with an average saturation of 61.00%, and Bima Sand-9 has a 6.30 m column in 4 thin sand lobes. Da-1 has hydrocarbon in Bima Sand-8 (5.30 m, Sh of 58.00% in 5 sand lobes), Bima Sand-10 (13.50 m, Sh of 52.00% in 6 sand lobes), Bima Sand-11 (6.20 m, Sh of 58.00% in 2 sand lobes) and Bima Sand-12 (16.50 m, Sh of 66% in 6 sand lobes). In the Ar-1 well, hydrocarbon occurs in Bima Sand-3 (2.40 m column, Sh of 48% in a sand lobe) and Bima Sand-9 (6.0 m, Sh of 58% in a sand lobe). The Ye-1 well intersected only 0.5 m of hydrocarbon in Bima Sand-1, with 78% saturation. Although the Bima Formation has variable hydrocarbon saturation, mainly gas, in the Maiduguri and Baga/Lake sub-basins of the research area, its very thin, serrated sand beds, coupled in part with very low effective porosity and permeability, would pose a significant exploitation challenge. The sediments were deposited in a fluvio-lacustrine environment, resulting in very thinly laminated, or serrated, alternations of sand and shale bed lithofacies.
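
The quoted reservoir properties are of the kind produced by a standard log-evaluation workflow (shale volume from gamma ray, density porosity, effective porosity, Archie water saturation); the paper does not state its exact equations or parameters, so the sketch below uses common defaults and illustrative log readings.

```python
# Standard log-evaluation sketch of the kind behind the quoted properties
# (not the authors' exact equations; constants are common defaults and the
# log readings are illustrative).
gr, gr_clean, gr_shale = 75.0, 30.0, 140.0      # gamma ray (API)
rho_b, rho_ma, rho_fl = 2.38, 2.65, 1.0         # bulk, matrix, fluid density (g/cm3)
rt, rw = 20.0, 0.05                             # deep resistivity, formation water resistivity (ohm.m)
a, m, n = 1.0, 2.0, 2.0                         # Archie constants

vsh = (gr - gr_clean) / (gr_shale - gr_clean)            # shale volume (linear gamma-ray index)
phi_total = (rho_ma - rho_b) / (rho_ma - rho_fl)         # density (total) porosity
phi_eff = phi_total * (1.0 - vsh)                        # effective porosity
sw = ((a * rw) / (phi_eff**m * rt)) ** (1.0 / n)         # Archie water saturation
sh = 1.0 - sw                                            # hydrocarbon saturation

print(f"Vsh = {vsh:.2f}, PHIT = {phi_total:.2f}, PHIE = {phi_eff:.2f}, Sw = {sw:.2f}, Sh = {sh:.2f}")
```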

Keywords: Bima, Chad Basin, fluvio-lacustrine, lithofacies, serrated sand

Procedia PDF Downloads 164
681 Exploring the Impact of Domestic Credit Extension, Government Claims, Inflation, Exchange Rates, and Interest Rates on Manufacturing Output: A Financial Analysis

Authors: Ojo Johnson Adelakun

Abstract:

This study explores the long-term relationships between manufacturing output (MO) and several economic determinants: the interest rate (IR), inflation rate (INF), exchange rate (EX), credit to the private sector (CPSM), and gross claims on the government sector (GCGS), using monthly data from March 1966 to December 2023. Employing advanced econometric techniques, including Fully Modified Ordinary Least Squares (FMOLS), Dynamic Ordinary Least Squares (DOLS), and Canonical Cointegrating Regression (CCR), the analysis provides several key insights. The findings reveal a positive association between interest rates and manufacturing output, which diverges from traditional economic theory predicting a negative correlation due to increased borrowing costs. This outcome is attributed to the financial resilience of large enterprises, which allows them to sustain investment in production despite higher interest rates. In addition, inflation demonstrates a positive relationship with manufacturing output, suggesting that stable inflation within target ranges creates a favourable environment for investment in productivity-enhancing technologies. Conversely, the exchange rate shows a negative relationship with manufacturing output, reflecting the adverse effects of currency depreciation on the cost of imported raw materials. The negative impact of CPSM underscores the importance of directing credit efficiently towards productive sectors rather than speculative ventures. Moreover, increased government borrowing appears to crowd out private sector credit, negatively affecting manufacturing output. Overall, the study highlights the need for a coordinated policy approach integrating monetary, fiscal, and financial sector strategies. Policymakers should account for the differential impacts of interest rates, inflation, exchange rates, and credit allocation on various sectors. Ensuring stable inflation, efficient credit distribution, and mitigated exchange rate volatility is critical for supporting manufacturing output and promoting sustainable economic growth. This research provides valuable insights into the economic dynamics influencing manufacturing output and offers policy recommendations tailored to South Africa’s economic context.
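
Off-the-shelf statsmodels does not provide FMOLS, DOLS or CCR, so the sketch below only reproduces the analogous first step, an Engle-Granger cointegration test and a static long-run OLS regression, on simulated monthly series.

```python
# Sketch of the analogous first step in this kind of analysis: an Engle-Granger
# cointegration test plus a static long-run OLS regression on simulated monthly
# series (the FMOLS/DOLS/CCR estimators used in the paper are not reproduced).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(7)
n = 694                                            # months, roughly Mar 1966 - Dec 2023
ir = np.cumsum(rng.normal(0, 0.3, n)) + 8.0        # interest rate, simulated I(1) series
mo = 2.0 + 0.4 * ir + rng.normal(0, 0.5, n)        # manufacturing output index, toy long-run relation

t_stat, p_value, _ = coint(mo, ir)                 # Engle-Granger cointegration test
ols = sm.OLS(mo, sm.add_constant(ir)).fit()        # static long-run relationship
print(f"Engle-Granger p-value: {p_value:.3f}")
print(f"long-run coefficient on IR: {ols.params[1]:.3f}")
```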

Keywords: domestic credit, government claims, financial variables, manufacturing output, financial analysis

Procedia PDF Downloads 13
680 Revealing Thermal Degradation Characteristics of Distinctive Oligo-and Polisaccharides of Prebiotic Relevance

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

As natural prebiotic (non-digestible) carbohydrates stimulate the growth of colon microflora and contribute to maintaining the health of the host, analytical studies aiming at revealing the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to a significant conversion of the parent compounds, hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the wide-spread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose and resistant starch) in the food industry. More and more functional foodstuffs are being developed based on prebiotics as bioactive components. Twelve different types of oligosaccharides were investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) were exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of the carbohydrates formed as a consequence of the thermal treatment. Gradient elution was applied with a binary solvent system (acetonitrile, water) through an amine-based carbohydrate column. Evaporative light scattering (ELS) detection proved to be suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and advanced techniques made it possible to survey all the intermediates formed. Changes in oligomer distribution were established for all studied prebiotics throughout the thermal treatments. The obtained results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures. The prevalence of oligomers with shorter chain lengths, and even the formation of monomer sugars (D-glucose and D-fructose), could be observed at higher temperatures. Unique oligomer distributions, which have not been described previously, are revealed for each studied carbohydrate, which might result in various prebiotic activities. Resistant starches exhibited high stability when thermally treated. The degradation process has been modeled by a plausible reaction mechanism in which proton-catalyzed degradation and chain cleavage take place.

Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection

Procedia PDF Downloads 300
679 Effect of Chitosan Oligosaccharide from Tenebrio Molitor on Prebiotics

Authors: Hyemi Kim, Jay Kim, Kyunghoon Han, Ra-Yeong Choi, In-Woo Kim, Hyung Joo Suh, Ki-Bae Hong, Sung Hee Han

Abstract:

Chitosan is used in various industries, such as food and medical care, because it is known to have various functions, including anti-obesity, anti-inflammatory and anti-cancer benefits. Most commercial chitosan is extracted from crustaceans. As the harvest rate of snow crabs and red snow crabs decreases and safety issues arise due to environmental pollution, research is underway to extract chitosan from insects. In this study, we used Response Surface Methodology (RSM) to predict the optimal conditions for producing chitosan oligosaccharides from mealworms (MCOS), which can be absorbed through the intestine as low-molecular-weight chitosan. The experimentally confirmed optimal conditions for MCOS production using chitosanase were a substrate concentration of 2.5%, enzyme addition of 30 mg/g, and a reaction time of 6 hours. The chemical structure and physicochemical properties of the produced MCOS were measured using MALDI-TOF mass spectra and FTIR spectra. The MALDI-TOF mass spectra revealed peaks corresponding to the dimer (375.045), trimer (525.214), tetramer (693.243), pentamer (826.296), and hexamer (987.360). In the FTIR spectra, commercial chitosan oligosaccharides exhibited a weak peak pattern at 3500-2500 cm-1, unlike chitosan or chitosan oligosaccharides. There was a difference in the peak at 3200-3500 cm-1, where different vibrations corresponding to OH and amine groups overlap. Chitosan, chitosan oligosaccharide, and commercial chitosan oligosaccharide showed peaks at 2849, 2884, and 2885 cm-1, respectively, attributed to the absorption of the C-H stretching vibration of methyl or methine groups. The amide I, amide II, and amide III bands of chitosan, chitosan oligosaccharide, and commercial chitosan oligosaccharide exhibited peaks at 1620/1620/1602, 1553/1555/1505, and 1310/1309/1317 cm-1, respectively. Furthermore, the solubility of MCOS was 45.15±3.43, its water binding capacity (WBC) was 299.25±4.57, and its fat binding capacity (FBC) was 325.61±2.28, while the solubility of commercial chitosan oligosaccharides was 49.04±9.52, WBC was 280.55±0.50, and FBC was 157.22±18.15. Thus, the characteristics of MCOS and commercial chitosan oligosaccharides are similar. Investigation of the impact of the chitosan oligosaccharide on the proliferation of probiotics revealed increased growth of L. casei, L. acidophilus, and Bif. bifidum. Correspondingly, the major short-chain fatty acids produced by gut microorganisms, such as acetic acid, propionic acid, and butyric acid, increased within 24 hours of adding 1% (p<0.01) and 2% (p<0.001) MCOS. The impact of MCOS on the overall gut microbiota was also assessed, revealing that the Chao1 index did not show significant differences, but the Simpson index decreased in a concentration-dependent manner, indicating a higher species diversity. The addition of MCOS resulted in changes in the overall microbial composition, with an increase in Firmicutes and Verrucomicrobia (p<0.05) compared to the control group, while Proteobacteria and Actinobacteria (p<0.05) decreased. At the genus level, MCOS supplementation increased beneficial bacteria such as Lactobacillus, Romboutsia, Turicibacter, and Akkermansia (p<0.0001), while harmful bacteria such as Enterococcus, Morganella, Proteus, and Bacteroides (p<0.0001) decreased.
In this study, chitosan oligosaccharides were successfully produced from mealworms under the established conditions, and these chitosan oligosaccharides are expected to have prebiotic effects similar to those obtained from crabs.
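For readers unfamiliar with RSM, the following is a minimal sketch of the general idea: fit a second-order polynomial response surface over the three factors (substrate concentration, enzyme addition, reaction time) and locate its optimum. The design points and yield values below are purely hypothetical and are not the authors' experimental data.

```python
# A minimal sketch of fitting a second-order response surface (as in RSM) and locating
# its optimum; the design points and "yields" below are hypothetical, for illustration only.
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Hypothetical design points: substrate (%), enzyme (mg/g), reaction time (h)
X = np.array(list(product([1.5, 2.5, 3.5], [10, 30, 50], [2, 6, 10])), dtype=float)
true_opt = np.array([2.5, 30.0, 6.0])
y = 80 - ((X - true_opt) ** 2 / np.array([1.0, 400.0, 16.0])).sum(axis=1)  # fake yields

def quad_features(x):
    """Intercept, linear, quadratic and interaction terms of a 3-factor design."""
    x = np.atleast_2d(x)
    cross = np.stack([x[:, i] * x[:, j] for i in range(3) for j in range(i, 3)], axis=1)
    return np.hstack([np.ones((len(x), 1)), x, cross])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)     # fit the surface
res = minimize(lambda x: -(quad_features(x) @ beta).item(), x0=[2.0, 20.0, 5.0])
print("predicted optimum (substrate %, enzyme mg/g, time h):", res.x.round(2))
```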

Keywords: mealworms, chitosan, chitosan oligosaccharide, prebiotics

Procedia PDF Downloads 63
678 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface

Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis

Abstract:

In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students modelled it parametrically in the Grasshopper 3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement to a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil on a sinusoidal curve and on a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as well as the distances between them, and that the elasticity of the construction be achieved through a modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability. Depending on the positions of the elements in terms of the level of support, different perspectives are created in terms of the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure uses Φ6 mm aluminum rods for the basic pillars and Φ2.50 mm rods for the secondary columns. Filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were picked, which function as filling elements, while a different engraving facet was made to support their assembly. The nodes have holes through which the rods pass, and their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
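As an illustration of the geometric idea behind the coil-on-sinusoid simulation, the following is a minimal sketch of sampling a coil wound along a sinusoidal guide curve while keeping the number of node points fixed; the amplitudes, radii and counts are illustrative assumptions and do not reproduce the Grasshopper definition used in the project.

```python
# A minimal sketch of sampling a coil wound along a sinusoidal guide curve, keeping the
# number of node points fixed; amplitudes, radii and counts are illustrative assumptions.
import numpy as np

def coil_on_sine(n_nodes=40, turns=12, amplitude=0.25, coil_radius=0.05, length=2.0):
    t = np.linspace(0.0, 1.0, n_nodes)               # fixed number of nodes along the guide
    guide = np.column_stack([t * length,              # x: along the divider
                             amplitude * np.sin(2 * np.pi * 3 * t),  # y: sinusoidal ripple
                             np.zeros_like(t)])       # z: planar guide curve
    phase = 2 * np.pi * turns * t
    offset = np.column_stack([np.zeros_like(t),
                              coil_radius * np.cos(phase),
                              coil_radius * np.sin(phase)])
    return guide + offset                              # coil points wound around the guide

points = coil_on_sine()
print(points.shape)   # (40, 3): the node count stays constant when parameters change
```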

Keywords: polymorphic, locomotion, sinusoidal curves, parametric

Procedia PDF Downloads 102
677 The Problem of Suffering: Job, The Servant and Prophet of God

Authors: Barbara Pemberton

Abstract:

Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper will consider the all too common problem of suffering and address how adherents of the three great monotheistic religions seek understanding, and the appropriate believer's response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions (Judaism, Christianity, and Islam) consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God's message for God's people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur'an. Job is described as a pious, righteous man who loses everything (family, possessions, and health) when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job's response to his situation as exemplary. The story of Job addresses more than the problem of evil; at stake in the story is Job's very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the "frame" for the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the center section of poetic dialogues, creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue, even in wrestling with God. Muslims only embrace the Job of the biblical narrative frame, as further identified through the Qur'an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur'anic injunction against questioning God also renders the center theologically suspect. Christians also draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God's ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, with peace and hope becoming a reality experienced through the indwelling presence of God's Holy Spirit. Related questions are as endless as the possible responses. This paper will consider a few of the many Jewish, Christian, and Islamic insights from the ancient story, in hopes that adherents within each tradition will use it to better understand the other faiths' approach to suffering.

Keywords: suffering, Job, Qur'an, Tanakh

Procedia PDF Downloads 181
676 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques

Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy

Abstract:

Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. An experimental bi-component particle interaction study is conducted in this work utilizing magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are then gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones. Finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, thereby capturing the segregation of magnetite and silica along the spiral trough.

Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model

Procedia PDF Downloads 62
675 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China

Authors: Hui Li, Xiongqi Pang

Abstract:

In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development in the world. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of which are buried deeper than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The Total Organic Carbon (TOC) content of the source rock is between 0.08% and 11.54%, generally higher than 0.6%, and the value of S1+S2 is between 0.04 and 72.93 mg/g, generally higher than 2 mg/g; overall, the source rock can be evaluated as moderate to good. The kerogen is predominantly of types II1 and II2. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most of the reservoir has a porosity lower than 12% and a permeability of less than 1×10⁻³ μm². The rocks in this area show great heterogeneity, and some areas developed sweet spots with high porosity and permeability. According to SEM, thin section images, inclusion tests and other analyses, the reservoir was affected by compaction and cementation during the early diagenesis stage (44-31 Ma). This diagenesis produced tight reservoirs in the Huzhuangji, Pucheng and Weicheng Areas, while the porosity in the Machang, Qiaokou and Wenliu Areas was still over 12%. During stage A of the middle diagenesis phase (31-17 Ma), the reservoir porosity in the Machang, Pucheng and Huzhuangji Areas increased due to dissolution; after that, the source rock reached the oil generation window for the first phase of hydrocarbon charging (31-23 Ma), which formed conventional oil accumulations in the Machang, Qiaokou, Wenliu and Huzhuangji Areas and unconventional tight reservoirs in the Pucheng and Weicheng Areas. During stage B of the middle diagenesis phase (17-7 Ma), reservoir porosity continued to decrease after the dissolution, leaving the reservoirs generally compacted. Since 7 Ma, the second phase of hydrocarbon charging has proceeded, and most of the pools charged and formed in this process are tight sandstone oil reservoirs. In conclusion, tight sandstone oil was formed in two patterns in the Dongpu Depression, which can be summarized as the 'densification first, then accumulation' pattern and the 'accumulation first, then densification' pattern.

Keywords: accumulation process, diagenesis, Dongpu Depression, tight sandstone oil

Procedia PDF Downloads 114
674 Porcelain Paste Processing by Robocasting 3D: Parameters Tuning

Authors: A. S. V. Carvalho, J. Luis, L. S. O. Pires, J. M. Oliveira

Abstract:

Additive manufacturing (AM) technologies have experienced remarkable growth in recent years due to the development and diffusion of a wide range of three-dimensional (3D) printing techniques. Nowadays, some techniques, like fused filament fabrication, are available to non-industrial users, while techniques like 3D printing, polyjet, selective laser sintering and stereolithography are mainly used in industry. Robocasting (R3D) shows great potential due to its ability to shape materials with a wide range of viscosities. Industrial porcelain compositions showing different rheological behaviour can be prepared and used as candidate materials to be processed by R3D. The use of this AM technique in industry is still very limited. In this work, a specific porcelain composition with suitable rheological properties is processed by R3D, and a systematic study of the tuning of the printing parameters is presented. The porcelain composition was formulated based on an industrial spray-dried porcelain powder. The powder particle size and morphology were analysed. The powders were mixed with water and an organic binder in a ball mill at 200 rpm for 24 hours. The batch viscosity was adjusted by the addition of an acid solution and mixed again. The paste density, viscosity, zeta potential, particle size distribution and pH were determined. In the R3D system, different speed and pressure settings were studied to assess their impact on the fabrication of porcelain models. These models were dried at 80 °C for 24 hours and sintered in air at 1350 °C for 2 hours. The stability of the models and the quality of their walls and surfaces were studied, and their physical properties were assessed. The microstructure and layer adhesion were observed by SEM. The studied processing parameters have a high impact on the quality of the models and, moreover, on the stacking of the filaments. Adequate tuning of the parameters has a huge influence on the final properties of the porcelain models. This work contributes to a better assimilation of AM technologies in the ceramic industry. Acknowledgments: The RoboCer3D project – project of additive rapid manufacturing through 3D printing ceramic material (POCI-01-0247-FEDER-003350) financed by Compete 2020, PT 2020, European Regional Development Fund – FEDER through the International and Competitive Operational Program (POCI) under the PT2020 partnership agreement.

Keywords: additive manufacturing, porcelain, robocasting, R3D

Procedia PDF Downloads 159
673 Tiebout and Crime: How Crime Affects the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, not much is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters, holding the local government accountable for the safety of its citizens, move out when the local level of security deviates too far from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption), since looking for a place elsewhere implies transaction costs, which wealthier citizens are more likely to be able to pay. As a consequence, the average income per capita, and so the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). The decreasing average income per capita, if not compensated by increasing earnings of the citizens that stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation from higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality. When communities do not succeed in driving back the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities covering the period 2000-2014 is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag. This positive effect is dampened by increasing crime rates in neighboring municipalities. The change in violent crimes, and to a lesser extent thefts and extortions, reduces the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
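To illustrate the type of regression involved, the following is a minimal sketch of a two-way fixed-effects panel regression with a spatially lagged crime term on hypothetical municipality-year data; the variable names, the neighbour weighting and the clustering choice are illustrative assumptions, not the authors' exact spatial panel specification.

```python
# A minimal sketch of a two-way fixed-effects panel regression with a spatially lagged
# crime term, on hypothetical data; variable names and the neighbour weighting are
# illustrative assumptions, not the authors' exact specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
munis, years = [f"m{i}" for i in range(30)], range(2000, 2015)
df = pd.DataFrame([(m, y) for m in munis for y in years], columns=["muni", "year"])
df["violent_crime"] = rng.gamma(2.0, 1.0, len(df))
# Illustrative "neighbour" crime: average crime of all other municipalities in that year.
df["crime_neighbours"] = df.groupby("year")["violent_crime"].transform(
    lambda s: (s.sum() - s) / (len(s) - 1))
df["tax_base_yield"] = 100 - 2 * df["violent_crime"] + rng.normal(0, 5, len(df))

model = smf.ols("tax_base_yield ~ violent_crime + crime_neighbours + C(muni) + C(year)",
                data=df).fit(cov_type="cluster", cov_kwds={"groups": df["muni"]})
print(model.params[["violent_crime", "crime_neighbours"]])
```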

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 302
672 Patterns of Private Transfers in the Philippines: An Analysis of Who Gives and Receives More

Authors: Rutcher M. Lacaza, Stephen Jun V. Villejo

Abstract:

This paper investigated the patterns of private transfers in the Philippines using the 2009 Family Income and Expenditure Survey (FIES), conducted every three years by the Philippine government's National Statistics Office (NSO). The paper performed bivariate analysis on net transfers, using the identified determinants of whether a household is a net receiver or a net giver. The household characteristics considered are the following: age, sex, marital status, employment status and educational attainment of the household head, as well as size, location, pre-transfer income and the number of employed members of the household. The variables net receiver and net giver are determined by computing the net transfer, subtracting total gifts from total receipts. Receipts are defined as the sum of cash received from abroad, cash received from domestic sources, total gifts received and inheritance. Gifts are defined as the sum of contributions and donations to church and other religious institutions, contributions and donations to other institutions, gifts and contributions to others, and gifts and assistance to private individuals outside the family. Both in-kind and cash transfers are considered in the analysis. The paper also performed a multiple regression analysis of transfers received on income and other household characteristics to examine the motives for giving transfers, whether altruistic or exchange-motivated, and used binary logistic regression to estimate the probability of being a net receiver or net giver given the household characteristics. The study revealed that receiving tends to be universal: both the non-poor and the poor benefit, although the poor receive substantially less than the non-poor. Regardless of whether households are net receivers or net givers, households in the upper deciles generally give and receive more than those in the lower deciles. It also appears that private transfers may simply flow within economic groups: big amounts of transfers are directed to the non-poor and small amounts go to the poor. This is also supported by gross transfers received being an increasing function of household income, with the poor receiving less and the non-poor receiving more, contrary to the theory that private transfers can help equalize the distribution of income. This suggests that private transfers in the Philippines are not altruistically motivated but exchange-motivated; however, bilateral data on transfers received or given are needed to test this theory directly. The results showed that transfers are much needed by the poor, and it is important to understand the nature of private transfers to ensure that government transfer programs are properly designed and targeted so as to prevent the duplication of private safety nets already present among the non-poor.
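As a concrete illustration of the binary logistic regression step, the following is a minimal sketch of estimating the probability that a household is a net receiver from a few household characteristics; the data and variable names are hypothetical, not drawn from the FIES.

```python
# A minimal sketch of a binary logistic regression for the probability that a household
# is a net receiver, on hypothetical data; variable names and effects are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "log_income": rng.normal(10, 1, n),     # hypothetical pre-transfer income (log scale)
    "hh_size": rng.integers(1, 9, n),
    "head_age": rng.integers(20, 80, n),
    "urban": rng.integers(0, 2, n),
})
logit_p = -8 + 0.7 * df["log_income"] + 0.05 * df["hh_size"]
df["net_receiver"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # simulated outcome

model = smf.logit("net_receiver ~ log_income + hh_size + head_age + urban", data=df).fit()
print(np.exp(model.params))   # odds ratios for each household characteristic
```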

Keywords: private transfers, net receiver, net giver, altruism, exchanged

Procedia PDF Downloads 212
671 Emerging Identities: A Transformative ‘Green Zone’

Authors: Alessandra Swiny, Yiorgos Hadjichristou

Abstract:

There exists an on-going geographical scar creating a division through the island of Cyprus and its capital, Nicosia. The currently amputated city center is accessed legally by United Nations convoys and infiltrated only by Turkish and Greek Cypriot army scouts and illegal traders and scavengers. On Christmas Day 1963 in Nicosia, Captain M. Hobden of the British Army took a green chinagraph pencil and 'marked' the division on a large-scale joint Army-RAF map. From then on, this 'buffer zone' was called the 'green line.' This once dividing form, separating the main communities of Greek and Turkish Cypriots from one another, has now been fully reclaimed by an autonomous intruder: its currently most captivating inhabitant is nature. She keeps taking over; for the past fifty years, indigenous and introduced fauna and flora have thrived, trees emerge from rooftops, and plants, bushes and flowers grow randomly through the once bustling market streets, allowing this 'no man's land' to teem with wildlife. And where are its limits? The idea of fluidity is ever present; it encroaches into the urban and built environment that surrounds it, and notions of ownership and permanence are questioned. Its qualities have contributed significantly to the search for new 'identities,' expressed in the emergence of new living conditions, be they real or surreal. Without being physically reachable, it can be glimpsed through punctured peepholes, military bunker windows that act as enticing portals into an emotional and conceptual level of inhabitation. The zone is mystical and simultaneously suspended in time; it triggers people's imagination, not just that of the two prevailing communities but also of immigrants, refugees, and visitors; it mesmerizes all who come within its proximity. The paper opens a discussion on the issues and the binary questions raised: What is natural and artificial? What is private and public? What is ephemeral and permanent? The 'green line' exists in a central fringe condition and can serve in mixing generations and groups of people, mingling functions of living with work and social interaction, and merging nature and the human being in a new-found synergy of human hope and survival, thus allowing new notions of place to be introduced. Questions seek to be answered, such as: 'Is the impossibility of dwelling made possible by interweaving these in-between conditions into eloquently traced spaces?' The methodologies pursued are developed through academic research, professional practice projects, and students' research/design work. Realized projects, case studies and other examples cited both nationally and internationally hold global and local applications. Both paths of the research deal with the explorative understanding of the impossibility of dwelling, testing the limits of its autonomy. The expected outcome of the experience evokes in the user a sense of a new urban landscape, created from human topographies that echo the voice of an emerging identity.

Keywords: urban wildlife, human topographies, buffer zone, no man’s land

Procedia PDF Downloads 193
670 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, smell is the most reminiscent and the least understood. Odor testing has been mysterious, and odor data fabled, to most practitioners. The problem of recognition and classification of odor is important to solve: the ability to smell and predict whether an artifact is of further use or has become undesirable for consumption, and the imitation of this ability in a model, is worth consideration. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly constructive. For cataloging the odor of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as set-ups where the variability in the range of possible input vectors is enormous. Generative models are integrated in machine learning either to model data directly or as an intermediate step in forming a probability density function. The models used here for classification of the odor of cashews are Linear Discriminant Analysis and the Naive Bayes Classifier. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision and recall. The results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes Classifier on the cashew dataset.
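To make the comparison concrete, the following is a minimal sketch comparing Linear Discriminant Analysis and a Gaussian Naive Bayes classifier on hypothetical electronic-nose sensor readings, reporting accuracy, precision and recall; the number of sensors, classes and samples are illustrative assumptions, not the cashew dataset used in the study.

```python
# A minimal sketch comparing LDA and Gaussian Naive Bayes on hypothetical e-nose sensor
# data; the number of sensors, classes and samples are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 8 "sensor" features, 3 odor classes (e.g. fresh / stale / spoiled), 600 sniff cycles.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("Naive Bayes", GaussianNB())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "accuracy:", round(accuracy_score(y_te, y_pred), 3),
          "precision:", round(precision_score(y_te, y_pred, average="macro"), 3),
          "recall:", round(recall_score(y_te, y_pred, average="macro"), 3))
```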

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 384
669 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr

Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei

Abstract:

The functioning of urban transportation systems depends on the existence of the required infrastructure, the appropriate placement of the different components, and the cooperation of these components with each other. Establishing neighborhood parking spaces in city districts in order to prevent long-term and inappropriate parking of cars in the alleys is one of the most effective measures for reducing crowding and density in the neighborhoods. Every place with a certain land use attracts a number of daily trips made throughout the city. A large percentage of the people visiting these places make these trips by their own cars and therefore need a space to park. The amount of this need depends on the use and travel demand of the place. The study aims at investigating the spatial distribution of public parking spaces, determining the effective factors in their location, and combining these factors in a GIS environment in Khomeinishahr, Isfahan. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determine the parking demand of the traffic zones, choose the proper places for providing the required public parking spaces, and propose new sites in order to improve the quality and quantity of the city's public parking provision. In terms of purpose, the study is applied; in terms of nature, it is analytic-descriptive. The population of the study consists of the people of the center of Khomeinishahr; the city is located northwest of Isfahan, has a geographic area of about 5000 hectares, and a population of 241,318. To determine the sample size, Cochran's formula was used, and based on the population of 26,483 people in the studied area, 231 questionnaires were administered. Data analysis was carried out using SPSS software. After estimating the required parking space, the effective criteria for locating public parking spaces were weighted using the Analytic Hierarchy Process (AHP) in ArcGIS; then, appropriate places for establishing parking spaces were determined using the fuzzy Ordered Weighted Average (OWA) method. The results indicated that the siting of parking spaces in Khomeinishahr has not been carried out appropriately and that per capita parking provision is not adequate relative to the population and demand; in addition to the present parking lots, 1,434 more are needed in the study area each day, so there is no reasonable proportion between parking demand and the number of parking lots in Khomeinishahr.
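To illustrate the AHP weighting step, the following is a minimal sketch of deriving criterion weights from a Saaty-scale pairwise comparison matrix via the principal eigenvector, together with a consistency check; the criteria and the pairwise judgments are illustrative assumptions, not the ones used in the study.

```python
# A minimal sketch of deriving AHP weights from a pairwise comparison matrix via the
# principal eigenvector, plus a consistency check; criteria and judgments are illustrative.
import numpy as np

criteria = ["distance to demand centres", "land cost", "road access", "population density"]
# Hypothetical Saaty-scale pairwise comparisons (A[i, j] = importance of i over j).
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised criterion weights

n = len(A)
ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
cr = ci / 0.90                               # random index RI = 0.90 for n = 4
for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
print("consistency ratio:", round(cr, 3))    # judgments acceptable if below 0.10
```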

Keywords: GIS, locating, parking, Khomeinishahr

Procedia PDF Downloads 305