Search results for: stretched exponential
133 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow
Authors: Sudipto Sarkar, Anamika Paul
Abstract:
Pseudoplastic fluids (n < 1, where n is the power-law index) have great importance in the food, pharmaceutical and chemical process industries and therefore require a lot of attention. Unfortunately, owing to their complex flow behavior, little research is available even for the laminar flow regime. A practical problem is solved in the present research work by numerical simulation, in which we attempt to control the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate is positioned on the centerline of the cylinder at varying distances from it in order to calculate the critical gap-ratio. If the plate is placed inside this critical gap, the vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid). Flow behavior has been studied for three different gap-ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for a fluid with three different flow behavior indices (n = 1, 0.8 and 0.5). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For G/a = 2, the domain size is taken as 37.5a × 16a with 316 × 208 grid points in the streamwise and flow-normal directions, respectively, after a thorough grid-independence study. Fine and equal grid spacing is used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate. Away from the geometry, the meshes are unequal in size and stretched out. For the other gap-ratios, proportionate domain sizes and total grid points are used with a similar mesh distribution. A velocity inlet (u = U∞), a pressure outlet (Neumann condition) and symmetry (free-slip) conditions at the upper and lower domain boundaries are used for the simulation. A wall boundary condition (u = v = 0) is imposed on both the cylinder and the splitter plate surfaces. The discretized forms of the fully conservative 2-D unsteady Navier-Stokes equations are then solved by Ansys Fluent 14.5. The SIMPLE algorithm, implemented within the finite volume method, is selected for this purpose; it is a default solver included in Fluent. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent's usefulness in academic research. A thorough analysis of instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that as the value of n is reduced, the stretching of the shear layers also reduces, and these layers try to roll up before the plate. For flow with high pseudoplasticity (n = 0.5), the nature of vortex shedding changes and the value of the critical gap-ratio reduces. These are remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio
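For orientation, the following minimal Python sketch (illustrative only; the consistency index, density, velocity and cylinder size are assumed values, not the study's settings) evaluates the power-law apparent viscosity μ_app = Kγ̇ⁿ⁻¹ for the three flow behavior indices above and forms Re = U∞a/ν from the maximum kinematic viscosity, matching the Reynolds number definition used in the abstract.

```python
import numpy as np

# Apparent viscosity of a power-law (Ostwald-de Waele) fluid:
# mu_app(gamma_dot) = K * gamma_dot**(n - 1); n < 1 gives pseudoplastic behavior.
def apparent_viscosity(gamma_dot, K, n):
    return K * gamma_dot ** (n - 1.0)

K = 0.01        # consistency index, Pa.s^n (assumed)
U_inf = 0.1     # free-stream velocity, m/s (assumed)
a = 0.01        # side of the square cylinder, m (assumed)
rho = 1000.0    # fluid density, kg/m^3 (assumed)

for n in (1.0, 0.8, 0.5):                    # the three flow behavior indices studied
    gamma_dot = np.logspace(-1, 3, 100)      # shear-rate range, 1/s
    mu = apparent_viscosity(gamma_dot, K, n)
    nu_max = mu.max() / rho                  # maximum kinematic viscosity, m^2/s
    Re = U_inf * a / nu_max                  # Re = U_inf * a / nu, as in the abstract
    print(f"n = {n}: mu_app in [{mu.min():.2e}, {mu.max():.2e}] Pa.s, Re = {Re:.3g}")
```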
Procedia PDF Downloads 111
132 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. However, from a material-performance standpoint, a strategy reliant on additional materials is not efficient for the building. The merits of traditional platform framing are well known. However, its enormous effectiveness within wood-framed construction means it has seldom faced serious questioning or challenges in defining what it means to build. There are several downsides to this method, which are less widely discussed. The first, and perhaps biggest, downside is waste. Second, its reliance on wood assemblies forming walls, floors and roofs conventionally nailed together through simple plate surfaces is structurally inefficient. It requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, when we look back at the history of wood construction in the airplane and boat manufacturing industries, we see a significant transformation in the relationship of structure to skin. Boat construction evolved from indigenous wood practices of birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces. It provides even greater strength with less material. The monocoque, which translates to 'mono or single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture. However, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system composed of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 78
131 Assessment of Commercial Antimicrobials Incorporated into Gelatin Coatings and Applied to Conventional Heat-Shrinking Material for the Prevention of Blown Pack Spoilage in Vacuum Packaged Beef Cuts
Authors: Andrey A. Tyuftin, Rachael Reid, Paula Bourke, Patrick J. Cullen, Seamus Fanning, Paul Whyte, Declan Bolton, Joe P. Kerry
Abstract:
One of the primary spoilage issues associated with vacuum-packed beef products is blown pack spoilage (BPS), caused by psychrophilic spore-forming strains of Clostridium spp. Spores derived from this organism can be activated after heat-shrinking (e.g., 90 °C for 3 seconds). To date, research into the control of Clostridium spp. in beef packaging is limited. Active packaging in the form of antimicrobially-active coatings may be one approach to its control. Antimicrobial compounds may be incorporated into packaging films or coated onto the internal surfaces of packaging films using a carrier matrix. Three naturally-sourced, commercially-available antimicrobials, namely Auranta FV (AFV, a bitter orange extract) from Envirotech Innovative Products Ltd, Ireland; Inbac-MDA (IMDA, a mixture of different organic acids) from Chemital LLC, Spain; and sodium octanoate (SO) from Sigma-Aldrich, UK, were added into gelatin solutions at two concentrations: 2.5 and 3.5 times their minimum inhibitory concentration (MIC) against Clostridium estertheticum (DSMZ 8809). These gelatin solutions were coated onto the internal polyethylene layer of cold-plasma-treated, heat-shrinkable laminates conventionally used for meat packaging applications. Atmospheric plasma was used in order to enhance adhesion between the packaging films and the gelatin coatings. Pouches were formed from these coated packaging materials, and beef cuts which had been inoculated with C. estertheticum were vacuum packaged. Inoculated beef was also vacuum packaged without employing active films, and this treatment served as the control. All pouches were heat-sealed, then heat-shrunk at 90 °C for 3 seconds and incubated at 2 °C for 100 days. During this storage period, packs were monitored for the indicators of blown pack spoilage as follows: gas bubbles in the drip and loss of vacuum (onset of BPS); the presence of sufficient gas inside the packs to produce pack distension (blown); and tightly stretched, "overblown" packs or packs leaking. Following storage and assessment of the indicator data, it was concluded that AFV- and SO-containing packaging inhibited the growth of C. estertheticum, significantly delaying the blown pack spoilage of beef primals. IMDA did not inhibit the growth of C. estertheticum. This may be attributed to differences in release rates and possible reactions with gelatin. Overall, active films were successfully produced following plasma surface treatment, and the experimental data demonstrated clearly that the use of antimicrobially-active films can significantly prolong the storage stability of beef primals through the effective control of BPS.
Keywords: active packaging, blown pack spoilage, Clostridium, antimicrobials, edible coatings, food packaging, gelatin films, meat science
Procedia PDF Downloads 265
130 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming
Authors: Vildan Kistik, Tuncay Can
Abstract:
From a business perspective, cost and profit are two key factors. The intent of most businesses is to minimize cost so as to maximize or stabilize profit, and thus provide the greatest benefit to themselves. However, the physical system is very complicated because of technological constructions, the rapid growth of competitive environments and similar factors. In such a system it is not easy to maximize profits or minimize costs. Businesses must decide on the competence and suitability of the personnel to be recruited, taking many criteria into consideration when selecting personnel. There are many criteria for determining the competence and suitability of a staff member. Factors such as the level of education, experience, psychological and sociological position, and the human relationships present in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, the use of these methods is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability of failure in case the selected personnel start work. With the necessary transformations, the problem was transformed into an unconstrained geometric programming problem, and the personnel selection problem is thus approached with the geometric programming technique. Personnel selection scenarios for a classroom were established with the help of the normal distribution, and optimum solutions were obtained. In the most appropriate solutions, the personnel selection process for the classroom was achieved with minimum cost.
Keywords: geometric programming, personnel selection, non-linear programming, operations research
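To make the geometric programming step concrete, here is a minimal sketch (not the paper's personnel model; the coefficients and exponents are invented for illustration) that minimizes a posynomial cost as an unconstrained geometric program: substituting x_j = exp(y_j) turns log f into a log-sum-exp of affine functions, which is convex and easily minimized.

```python
import numpy as np
from scipy.optimize import minimize

# Unconstrained geometric program: minimize the posynomial
#   f(x) = sum_i c_i * prod_j x_j**a_ij,  with x_j > 0 and c_i > 0.
# Substituting x_j = exp(y_j) gives log f(y) = logsumexp(log c + A @ y),
# which is convex in y, so any local minimizer is global.
c = np.array([2.0, 1.0, 3.0])      # positive coefficients (invented)
A = np.array([[1.0, -1.0],         # exponent matrix a_ij (invented)
              [-2.0, 1.0],
              [0.5, 0.5]])

def log_posynomial(y):
    z = np.log(c) + A @ y
    zmax = z.max()
    return zmax + np.log(np.exp(z - zmax).sum())   # numerically stable log-sum-exp

res = minimize(log_posynomial, x0=np.zeros(2), method="BFGS")
x_opt = np.exp(res.x)
print("optimal x:", np.round(x_opt, 4), " minimum cost:", round(float(np.exp(res.fun)), 4))
```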
Procedia PDF Downloads 269
129 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad Daba, Jean-Pierre Dubois
Abstract:
Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multi-diversity stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process
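A rough numerical feel for such combined statistics can be obtained with the simplified simulation below: a lognormal random intensity drives a Poisson number of Rayleigh (complex Gaussian) scatterer marks, to which a Nakagami-m specular component with random phase is added via the standard Gamma-to-Nakagami transform. All parameter values are assumptions for illustration, and the Rician scatterer class and the filtering stage of the full model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope_samples(n_trials, mu_log=1.0, sigma_log=0.5, m_los=2.0, omega_los=1.0):
    """One envelope sample per trial: a doubly stochastic (lognormal-intensity)
    Poisson number of Rayleigh scatterers plus a Nakagami-m specular term."""
    out = np.empty(n_trials)
    for i in range(n_trials):
        lam = rng.lognormal(mu_log, sigma_log)          # random Poisson intensity
        n = rng.poisson(lam)                            # number of scattering paths
        diffuse = (rng.normal(0.0, np.sqrt(0.5), n)
                   + 1j * rng.normal(0.0, np.sqrt(0.5), n)).sum()  # Rayleigh marks
        a_los = np.sqrt(rng.gamma(m_los, omega_los / m_los))       # Nakagami-m amplitude
        los = a_los * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))   # specular component
        out[i] = np.abs(diffuse + los)
    return out

r = envelope_samples(20_000)
print(f"mean envelope: {r.mean():.3f}, RMS: {np.sqrt((r ** 2).mean()):.3f}")
```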
Procedia PDF Downloads 448
128 A Case Study on the Census of Technological Capacities in Health Care in Rural Sanitary Institutions in South Cameroon
Authors: Doriane Micaela Andeme Bikoro, Samuel Fosso Wamba, Jean Robert Kala Kamdjoug
Abstract:
Currently, one of the leading fields in the market of technological innovation is digital health. In developed countries, this booming innovation is growing at an exponential pace. We understand that in developing countries, e-health could likewise revolutionize the practice of medicine and therefore remedy the many failures observed in medical care. Everything leads one to believe that future technology is oriented towards the medical sector. The aim of this work is to explore both the technological resources and the potential of health care based on new technologies; it is a case study in a rural area of Southern Cameroon. Among other things, we make a census of the shortcomings and problems encountered, and we propose various appropriate solutions. The methodology used here is essentially qualitative. We used two qualitative data collection techniques: direct observation and interviews. In fact, we spent two weeks in the field observing and conducting semi-directive interviews with some of those responsible for these health structures. The study was conducted in three health facilities in the south of the country, including two health centers and a rural hospital. Many technological failures were identified in the day-to-day management of these health facilities, and especially in the administration of health care to patients. We note major problems such as the digital divide, the lack of qualified personnel, and the isolation of this area. This is why various proposals are made to improve the health sector in Cameroon both technologically and medically.
Keywords: Cameroon, capacities, census, digital health, qualitative method, rural area
Procedia PDF Downloads 144
127 Study and Solving High Complex Non-Linear Differential Equations Applied in the Engineering Field by Analytical New Approach AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
In this paper, three complicated nonlinear differential equations (PDEs and ODEs) in the fields of engineering and nonlinear vibration have been analyzed and solved completely by a new method that we have named Akbari-Ganji's Method (AGM). As previously published papers show, investigating this kind of equation is a very hard task, and the obtained solutions are often not accurate or reliable. This issue emerges when the achieved solutions are compared with those of a numerical method. Based on the comparisons made between the solutions gained by AGM and by a numerical method (fourth-order Runge-Kutta), it can be stated that AGM can be successfully applied to various differential equations, particularly difficult ones. Furthermore, the advantages of this method in comparison with other approaches can be summarized as follows. The results indicate that this approach is very effective and easy, and it can therefore be applied to other kinds of nonlinear equations, not only in vibrations but also in different fields of science such as fluid mechanics, solid mechanics, chemical engineering, etc. A solution with high precision is thereby acquired, and the process of solving nonlinear equations is very easy and convenient in comparison with other methods. An additional important point explored in this paper is that, with AGM, trigonometric and exponential terms in the differential equation need no Taylor series expansion to enhance the precision of the result.
Keywords: new method (AGM), complex non-linear partial differential equations, damping ratio, energy lost per cycle
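Since the paper benchmarks AGM against the fourth-order Runge-Kutta scheme, a minimal sketch of that reference integrator may be useful; the damped Duffing oscillator and its parameter values below are illustrative assumptions, not an equation taken from the paper.

```python
import numpy as np

def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta, the reference scheme the paper
    compares AGM against."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Illustrative damped Duffing oscillator: u'' + 2*zeta*u' + u + eps*u**3 = 0
zeta, eps = 0.05, 0.5      # damping ratio and cubic stiffness (assumed values)
f = lambda t, y: np.array([y[1], -2 * zeta * y[1] - y[0] - eps * y[0] ** 3])
print(rk4(f, [1.0, 0.0], 0.0, 10.0, 1000))   # [u(10), u'(10)]
```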
Procedia PDF Downloads 469
126 The Effect of Magnetite Particle Size on Methane Production by Fresh and Degassed Anaerobic Sludge
Authors: E. Al-Essa, R. Bello-Mendoza, D. G. Wareham
Abstract:
Anaerobic batch experiments were conducted to investigate the effect of magnetite supplementation (7 mM) on methane production from digested sludge undergoing two different microbial growth phases, namely fresh sludge (exponential growth phase) and degassed sludge (endogenous decay phase). Three different particle sizes were assessed: small (50 - 150 nm), medium (168 – 490 nm) and large (800 nm - 4.5 µm) particles. Results show that, in the case of the fresh sludge, magnetite significantly enhanced the methane production rate (up to 32%) and reduced the lag phase (by 15% - 41%) as compared to the control, regardless of the particle size used. However, the cumulative methane produced at the end of the incubation was comparable in all treatment and control bottles. In the case of the degassed sludge, only the medium-sized magnetite particles increased significantly the methane production rate (12% higher) as compared to the control. Small and large particles had little effect on the methane production rate but did result in an extended lag phase which led to significantly lower cumulative methane production at the end of the incubation period. These results suggest that magnetite produces a clear and positive effect on methane production only when an active and balanced microbial community is present in the anaerobic digester. It is concluded that, (i) the effect of magnetite particle size on increasing the methane production rate and reducing lag phase duration is strongly influenced by the initial metabolic state of the microbial consortium, and (ii) the particle size would positively affect the methane production if it is provided within the nanometer size range.
Keywords: anaerobic digestion, iron oxide, methanogenesis, nanoparticle
Procedia PDF Downloads 140
125 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions
Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa
Abstract:
A large amount of space debris nowadays constitutes a real threat for operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown highly reliable efficiency. PW-Sat2's deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and it is placed in a container with an inner diameter of 85 mm. In the final configuration the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power, being based on a thermal knife that burns through the Dyneema wire holding the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during release. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide comprehensive knowledge of deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will afterwards be compared, and they will help the team build a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with an attached surface. The verified model could be used, inter alia, to investigate whether PW-Sat2's sail is scalable and how far it is possible to go with enlargement when creating systems for bigger satellites.
Keywords: cubesat, deorbitation, sail, space, debris
Procedia PDF Downloads 290
124 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. For assessing the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analysed, and its full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average of the FWHMs of group A behaves almost linearly; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B follows a slightly exponential trend as the temperature rises from 373 K to 393 K. Student's t-test results show, with 95% confidence (α = 0.05), a variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: food industry, interferometric, oils, quality control
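A minimal sketch of the FWHM measurement step is shown below, applied to a synthetic single-peaked intensity profile (a Gaussian, used here only so the result can be checked against the known 2√(2 ln 2)σ value); the actual study measured FWHM on recorded interferograms with Amcap and ImageJ.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile, by linear
    interpolation at the half-maximum crossings (assumes the peak is
    interior to the sampled window)."""
    half = y.min() + (y.max() - y.min()) / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right crossing positions
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

# Synthetic single-peak intensity profile (illustrative only)
x = np.linspace(-5.0, 5.0, 1001)
sigma = 1.2
y = np.exp(-x ** 2 / (2.0 * sigma ** 2))
print("FWHM:", fwhm(x, y), " expected:", 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma)
```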
Procedia PDF Downloads 372
123 Traditionalism and Modernity in Seoul’s Urban Planning for the Disabled
Authors: Helena Park
Abstract:
For the last three decades, Seoul has experienced an exponential increase in population and concomitant rapid urbanization. With such development, Korea adopted a predominantly Western style of architecture while still basing its structures on Korea's traditionalism and the Confucian precepts of pung su (feng shui). While Korean urban planning focuses on balancing modernism and traditionalism in the city's architecture, particularly in landmark sites like the Seoul N Tower and Gyeongbok Palace, the accessibility and convenience concerns of minority social groups like the disabled are habitually disregarded. With the implementation of ramps and elevators, the welfare of all citizens seemed to improve. According to the dictates of traditional Korean culture, it is crucial for those construed as 'disabled' or 'underprivileged' to feel natural in the city of Seoul, which is planned and built on the background aesthetic theory of harmony with nature. It was interesting, and also alarming, to see the extent to which Korean landmarks lacked facilities for the disabled throughout the city. Standards set by the Ministry of Health and Welfare and the Seoul Metropolitan City insist that buildings accommodate the needs of the disabled and the non-disabled equally, but it was hard to find buildings in Seoul, old or new, that fulfilled all the requirements. Where fulfilled, some of the facilities were hard to find or not well maintained. There is thus a serious concern for planning reform in connection with Seoul's 2030 Urban Plan. This paper argues that alternative planning could better integrate Korea's traditionalist architecture and concepts of pung su rather than insist on the necessity of Western-style modernism as the sole modality for achieving accessibility for the disabled in Korea.
Keywords: accessibility, architecture of Seoul, Pung Su (Feng Shui), traditionalism, modernism in Seoul
Procedia PDF Downloads 234
122 Advances in Sesame Molecular Breeding: A Comprehensive Review
Authors: Micheale Yifter Weldemichael
Abstract:
Sesame (Sesamum indicum L.) is among the most important oilseed crops for its high edible oil quality and quantity. Sesame is grown for food, medicinal, pharmaceutical, and industrial uses. It is also cultivated as a main cash crop in Asia and Africa by smallholder farmers. Despite the global exponential increase in sesame cultivation area, its production and productivity remain low, mainly due to biotic and abiotic constraints. Notwithstanding the efforts to solve these problems, a low level of genetic variation and inadequate genomic resources hinder the progress of sesame improvement. The objective of this paper is, therefore, to review recent advances in the areas of molecular breeding and transformation to overcome major production constraints, which could result in enhanced and sustained sesame production. This paper reviews the research conducted to date on molecular breeding and genetic transformation in sesame, focusing on the molecular markers used, the available online database resources, genes responsible for key agronomic traits, as well as transgenic technology and genome editing. The review concentrates on quantitative and semi-quantitative studies on molecular breeding for key agronomic traits such as the improvement of yield components, oil and oil-related traits, disease and insect/pest resistance, and drought, waterlogging and salt tolerance, as well as sesame genetic transformation and genome editing techniques. Pitfalls and limitations of existing studies and methodologies are identified, and some priorities for future research directions in sesame genetic improvement are set out in this review.
Keywords: abiotic stress, biotic stress, improvement, molecular breeding, oil, sesame, shattering
Procedia PDF Downloads 35
121 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Besides producing overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress-value approach, but requires the use of a critical volume in which the crack exists. In order to assess the effectiveness of this technique, this study investigated components with different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of ±10% was met for cases ranging from a steep stress gradient in a sharp V-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
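The abstract does not give the weighting-function formulas, so the sketch below is only a hedged illustration of the general idea: weight the elastic stress profile ahead of the notch with an exponential kernel to obtain an effective stress over a critical region, then form K_I ≈ σ_eff√(πa). The stress profile, length scale, and crack depth are all assumed values, not the paper's.

```python
import numpy as np

# Hedged sketch of a stress-gradient / weighting-function estimate of K_I.
# sigma(x) is the elastic stress ahead of the notch root and w(x) an
# exponential weight; all functional forms and values below are assumptions.
def effective_stress(x, sigma, length_scale):
    w = np.exp(-x / length_scale)          # "exponential" weighting function
    return (sigma * w).sum() / w.sum()     # weighted mean over the critical region

x = np.linspace(0.0, 2e-3, 2001)           # distance from the notch root, m
sigma_peak, k_grad = 300e6, 2000.0         # peak stress (Pa), gradient decay (1/m)
sigma = sigma_peak * np.exp(-k_grad * x)   # steep, notch-like stress decay

a = 0.5e-3                                 # assumed crack depth, m
sig_eff = effective_stress(x, sigma, length_scale=0.1e-3)
K_I = sig_eff * np.sqrt(np.pi * a)         # K_I ~ sigma_eff * sqrt(pi * a)
print(f"effective stress {sig_eff / 1e6:.1f} MPa, K_I = {K_I / 1e6:.2f} MPa*m^0.5")
```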
Procedia PDF Downloads 135
120 The Bayesian Premium Under Entropy Loss
Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita
Abstract:
Credibility theory is an experience rating technique in actuarial science and can be seen as one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (claims incurred but not yet reported to the insurer), where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's prior belief about the insured's risk level, which is updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium, when the prior is not a member of the exponential family, can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for such problems; it treats the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error is used to compare the Bayesian premium estimator under the above loss functions.
Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation
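As a numerical illustration of the two Bayes rules involved, the sketch below computes the posterior of a Lindley rate parameter on a grid (a brute-force stand-in for the Lindley approximation used in the paper) and evaluates the estimator under squared error loss, E[θ|x], and under entropy loss, (E[θ⁻¹|x])⁻¹. The gamma prior, its hyperparameters, and the simulated claims are assumptions for illustration, and the mapping from the parameter estimate to a monetary premium is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lindley claim density: f(x|theta) = theta**2 / (1 + theta) * (1 + x) * exp(-theta * x)
def lindley_loglik(theta, x):
    n = x.size
    return (2.0 * n * np.log(theta) - n * np.log1p(theta)
            + np.log1p(x).sum() - theta * x.sum())

# Simulated claims from Lindley(theta = 2) via its exponential/gamma mixture form
theta_true, n = 2.0, 50
is_exp = rng.random(n) < theta_true / (1.0 + theta_true)
x = np.where(is_exp, rng.exponential(1.0 / theta_true, n),
                     rng.gamma(2.0, 1.0 / theta_true, n))

# Gamma(a, b) prior on theta (assumed hyperparameters); posterior on a grid
a, b = 2.0, 1.0
theta = np.linspace(1e-3, 10.0, 20_000)
logpost = (a - 1.0) * np.log(theta) - b * theta + lindley_loglik(theta, x)
post = np.exp(logpost - logpost.max())
post /= post.sum()

premium_se = (theta * post).sum()            # Bayes rule under squared error loss
premium_ent = 1.0 / (post / theta).sum()     # Bayes rule under entropy loss
print(f"squared-error estimate: {premium_se:.4f}, entropy-loss estimate: {premium_ent:.4f}")
```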
Procedia PDF Downloads 334
119 Cars in a Neighborhood: A Case of Sustainable Living in Sector 22 Chandigarh
Authors: Maninder Singh
Abstract:
The city of Chandigarh is under the strain of exponential growth in car density across its various neighborhoods. The consumerist nature of today's society is to blame for this menace, because everyone wants to own and drive a car. Car manufacturers are busy selling two or more cars per household. The Regional Transport Offices are busy issuing as many licenses for new vehicles as they can in order to generate revenue in the form of road tax. Car traffic in the neighborhoods of Chandigarh has reached a tipping point. There needs to be a more empirical and sustainable model of cars per household, based on specific parameters of livable neighborhoods. Sector 22 in Chandigarh is one of the first residential sectors established in the city. There is scope to think, reflect, and work out a method to determine how many cars we can sell our citizens before we lose the argument to traffic problems, parking problems, and road rage. This is where the true challenge of a planner or designer of the city lies. Currently, in Chandigarh, there are no clearly visible answers to this problem. The way forward is to look at the spatial mapping, planning, and design of car parking units to address the problem, rather than suggesting extreme measures of banning cars (short-term) or promoting plans for citywide transport (very long-term). This is a chance to resolve the problem with a pragmatic approach from a citizen's perspective, instead of an orthodox development planner's methodology. Since citizens are at the center of how the problem is to be addressed, acceptable solutions are more likely to emerge from the car and traffic problem as defined by the citizens. Thus, the idea and its implementation would be interesting in comparison with known academic methodologies. This novel and innovative process would lead to a more acceptable and sustainable approach to the issue of the number of car parks in the neighborhoods of Chandigarh.
Keywords: cars, Chandigarh, neighborhood, sustainable living, walkability
Procedia PDF Downloads 148
118 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda
Authors: Rutebuka Evariste, Zhang Lixiao
Abstract:
Mobile phone sales and stocks have shown exponential growth globally in recent years, and the number of mobile phones produced each year surpassed one billion in 2007. This soaring growth of the related e-waste deserves sufficient attention regionally and globally, since 40% of its total weight is made of metallic elements, of which 12 are identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of country or region, and to prevail over the previous errors. The logistic model method, combined with the STELLA program, has been used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, amounting to 125 tons of waste, in 2014, with an e-waste production peak expected in 2017. By 2020, there are expected to be 4.17 million obsolete phones, with 351.97 tons of waste and an environmental impact intensity 21 times that of 2005. Thus, it is concluded through model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, given that it has answered the questions raised by previous studies from the Czech Republic, Iran, and China.
Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics
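In the same spirit as the study's STELLA model, the stand-in sketch below couples a logistic (carrying-capacity) subscription curve to a fixed average handset lifespan to generate obsolete units and e-waste tonnage; every parameter value is an assumption for illustration, not a figure from the Rwandan case study.

```python
import numpy as np

# Logistic (carrying-capacity) growth of the phone stock, with obsolete units
# generated after an average handset lifespan L; all values are assumptions.
K_cap = 8.0e6     # carrying capacity, subscriptions
r = 0.35          # intrinsic growth rate, 1/year
S0 = 0.1e6        # initial stock in the base year
L = 3             # average handset lifespan, years
w = 0.085         # average handset mass, kg

years = np.arange(2000, 2021)
t = years - years[0]
S = K_cap / (1.0 + (K_cap / S0 - 1.0) * np.exp(-r * t))   # logistic stock curve

sales = np.diff(S, prepend=S0) + S / L    # new sales = net stock growth + replacements
obsolete = np.roll(sales, L)              # units sold L years ago become obsolete
obsolete[:L] = 0.0                        # no obsolescence before the first cohort ages
ewaste_tons = obsolete * w / 1000.0

for y, units, tons in zip(years[::5], obsolete[::5], ewaste_tons[::5]):
    print(f"{y}: obsolete = {units / 1e6:.2f} M units, e-waste = {tons:.0f} t")
```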
Procedia PDF Downloads 344
117 Critically Sampled Hybrid Trigonometry Generalized Discrete Fourier Transform for Multistandard Receiver Platform
Authors: Temidayo Otunniyi
Abstract:
This paper presents a low-computation channelization algorithm for multi-standard platforms using a polyphase implementation of a critically sampled hybrid trigonometry generalized discrete Fourier transform (HGDFT). The HGDFT channelization algorithm exploits the orthogonality of two trigonometric Fourier functions, together with the properties of the Quadrature Mirror Filter Bank (QMFB) and the Exponentially Modulated Filter Bank (EMFB), respectively. HGDFT shows improvement in implementation in terms of high reconfigurability, lower filter length, parallelism, and moderate computational activity. Type I and type III polyphase structures are derived for real-valued HGDFT modulation. The design specifications are critically decimated and over-sampled for both single- and multi-standard receiver platforms. Evaluating the performance of oversampled single-standard receiver channels, the HGDFT algorithm achieved a 40% complexity reduction, compared to 34% and 38% reductions for the Discrete Fourier Transform (DFT) and tree quadrature mirror filter (TQMF) algorithms. The parallel generalized discrete Fourier transform (PGDFT) and the recombined generalized discrete Fourier transform (RGDFT) had 41% complexity reductions, while HGDFT had a 46% reduction in oversampled multi-standard mode. In the critically sampled multi-standard receiver channels, HGDFT had a complexity reduction of 70%, while both PGDFT and RGDFT had 34% reductions.
Keywords: software defined radio, channelization, critical sample rate, over-sample rate
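For readers unfamiliar with polyphase channelizers, the sketch below implements a standard critically sampled M-channel DFT polyphase filter bank as a baseline; it is not the HGDFT itself, and the prototype filter, channel count, and test tone are assumptions for illustration. For a real input tone centered on channel 1, the output energy should concentrate in channels 1 and M-1.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Critically sampled M-channel DFT polyphase channelizer: a standard baseline
# for the filter-bank structures discussed above (not the HGDFT itself).
M = 8                                        # number of channels (assumed)
h = firwin(16 * M, 1.0 / M)                  # prototype lowpass filter
E = h.reshape(-1, M).T                       # polyphase components e_k[n] = h[n*M + k]

def channelize(x):
    n_blocks = len(x) // M
    # commutator: deliver M consecutive input samples to the branches in reverse order
    X = x[:n_blocks * M].reshape(n_blocks, M).T[::-1]
    branches = np.stack([lfilter(E[k], [1.0], X[k]) for k in range(M)])
    return np.fft.ifft(branches, axis=0) * M  # DFT across branches -> M channel signals

fs = 8000.0
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 1000.0 * t)           # real tone centered on channel fs/M = 1 kHz
y = channelize(x)
print("per-channel power:", np.round(np.mean(np.abs(y) ** 2, axis=1), 4))
```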
Procedia PDF Downloads 147
116 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable hazard rate shapes is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub"-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, in comparison with its sub-models (the Weibull, log-logistic, and Burr XII distributions) and with other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the three-parameter lognormal distribution, the three-parameter gamma distribution, the three-parameter Weibull distribution, and the three-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and the information criterion values. Finally, a Bayesian analysis is also carried out, and the performance of Gibbs sampling for the data set is assessed.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
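The abstract does not state the three-parameter density, so the sketch below illustrates only the classical log-logistic sub-model: its hazard is monotone decreasing for β ≤ 1 and unimodal for β > 1, and its parameters can be fitted by maximum likelihood using scipy's fisk distribution (scipy's name for the log-logistic); the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import fisk    # scipy's name for the log-logistic distribution

# Hazard of the log-logistic sub-model with scale alpha and shape beta:
#   h(t) = (beta/alpha)*(t/alpha)**(beta-1) / (1 + (t/alpha)**beta)
def loglogistic_hazard(t, alpha, beta):
    z = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1.0) / (1.0 + z)

t = np.linspace(0.01, 5.0, 6)
for beta in (0.8, 1.0, 2.5):    # decreasing, decreasing, and unimodal hazard shapes
    print(f"beta = {beta}:", np.round(loglogistic_hazard(t, 1.0, beta), 3))

# Maximum likelihood fit on simulated data (illustrative only)
rng = np.random.default_rng(0)
data = fisk.rvs(c=2.5, scale=1.0, size=500, random_state=rng)
nll = lambda p: -fisk.logpdf(data, c=np.exp(p[0]), scale=np.exp(p[1])).sum()
res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print("MLE (shape beta, scale alpha):", np.round(np.exp(res.x), 3))
```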
Procedia PDF Downloads 202
115 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Duration of 2400 MHz Radio Frequency-Electromagnetic Fields Radiation
Authors: Neha Singh, Anuj Ranjan, Tanu Jindal
Abstract:
Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), has long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to 2400 MHz for different durations, i.e., 2, 4, 6, 8, 10, and 12 hours each day for five continuous days, under ambient temperature and humidity conditions inside an exposure chamber. The flies were divided into control, sham-exposed, and exposed groups, with 100 flies in each group. In this study, well-known techniques like the Comet Assay and the TUNEL (Terminal deoxynucleotidyl transferase dUTP Nick End Labeling) Assay were used to detect DNA damage and apoptosis, respectively. The experimental results showed DNA damage in the brain cells of Drosophila that increased with the duration of exposure, as observed when the results of the control, sham-exposed, and exposed groups were compared. This indicates that EMF radiation induced stress in the organism, leading to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila has similar relevance for human beings as well.
Keywords: cell death, apoptosis, Comet Assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay
Procedia PDF Downloads 169
114 Time-Dependent Reliability Analysis of Corrosion Affected Cast Iron Pipes with Mixed Mode Fracture
Authors: Chun-Qing Li, Guoyang Fu, Wei Yang
Abstract:
A significant portion of current water networks is made of cast iron pipes. Due to aging and deterioration, with corrosion being the most predominant mechanism, the failure rate of cast iron pipes is very high. Although considerable research has been carried out in the past few decades, most of it addresses the effect of corrosion on the structural capacity of pipes using strength theory as the failure criterion. This paper presents a reliability-based methodology for the assessment of cracking failures in corrosion-affected cast iron pipes. A nonlinear limit state function taking into account all three fracture modes is proposed for brittle metal pipes with mixed-mode fracture. A stochastic model of the load effect is developed, and a time-dependent reliability method is employed to quantify the probability of failure and predict the remaining service life. A case study is carried out using the proposed methodology, followed by a sensitivity analysis to investigate the effects of the random variables on the probability of failure. It has been found that the larger the inclination angle or the Mode I fracture toughness, the smaller the probability of pipe failure. It has also been found that the multiplying and exponential coefficients k and n in the power-law corrosion model, and the internal pressure, have the most influence on the probability of failure for cast iron pipes. The methodology presented in this paper can assist pipe engineers and asset managers in developing a risk-informed and cost-effective strategy for better management of corrosion-affected pipelines.
Keywords: corrosion, inclined surface cracks, pressurized cast iron pipes, stress intensity
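A minimal Monte Carlo sketch of the time-dependent failure probability is given below. It uses the power-law corrosion model d(t) = kt^n mentioned in the abstract, but replaces the paper's mixed-mode fracture limit state with a simple wall-thickness residual; all distributions and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo time-dependent failure probability for a corroding pipe wall.
# Corrosion depth follows the power-law model d(t) = k * t**n from the paper,
# but the limit state here is a simple wall-thickness residual rather than the
# paper's mixed-mode fracture criterion; all values are illustrative.
N = 200_000
k = rng.lognormal(np.log(0.3), 0.3, N)    # multiplying coefficient, mm/year^n
n = rng.normal(0.6, 0.05, N)              # exponential coefficient
t0 = rng.normal(12.0, 1.0, N)             # initial wall thickness, mm
d_crit = 0.5 * t0                         # fail once half the wall is corroded

for t in (10, 25, 50, 75, 100):           # service time, years
    d = k * t ** n                        # corrosion depth at time t
    pf = np.mean(d >= d_crit)             # time-dependent probability of failure
    print(f"t = {t:3d} years: Pf = {pf:.4f}")
```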
Procedia PDF Downloads 321
113 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution
Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos
Abstract:
The summer of 2021 was marked by a number of severe fires in the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g., rescEU), a number of challenges have arisen, not all of them related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architecture for associating modern telecommunications technology with novel practices that allow First Responders to save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system that would offer strong Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these responders, as well as their smoldering worry that the evolving threat of climate change and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system. The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, in a scenario of a big fire in a Wildland-Urban Interface zone (WUI). An important issue will be examined in the second pilot study: unobstructed communication, in the form of the flow of information between the wider public and the first responders and vice versa, is severely affected during a crisis. Call centers are flooded with requests, and communication is compromised or breaks down on many occasions, which in turn affects the effort to build a common operational picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if traffic is disrupted, there is no other way to observe than via aerial means, in order to perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed.
Keywords: first responders, safety, civil protection, new technologies
Procedia PDF Downloads 142
112 Introduction of Acute Paediatric Services in Primary Care: Evaluating the Impact on GP Education
Authors: Salman Imran, Chris Healey
Abstract:
Traditionally, the medical care of children in England and Wales starts in primary care, with a referral to secondary care paediatricians who may not investigate further. Many primary care doctors do not undergo a paediatric rotation or exposure during training. As a result, many have not acquired the necessary skills to manage children, hence increasing hospital referrals. With the current demand on hospitals in the National Health Service, managing more problems in the community is needed. One way of achieving this is to set up clinics, meetings and huddles in GP surgeries, where the professionals involved (general practitioner, paediatrician, health visitor, community nurse, dietician, school nurse) come together and share information, which can help improve communication and care. The increased awareness and education that paediatricians can impart in this way will help boost the confidence of primary care professionals to be more self-sufficient. This has been tried successfully in other regions, e.g., St. Mary's Hospital in London, but is crucial for a more rural setting like ours. The primary aim of this project was to educate GPs specifically, and all other health professionals involved more generally. Additional benefits would be providing care nearer home, increasing patients' confidence in their local surgery, improving communication, and reducing unnecessary patient flow to already stretched hospital resources. Methods: This was done as a plan-do-study-act (PDSA) cycle. Three clinics were delivered in different practices over six months, and feedback from staff and patients was collected. Designated time for teaching and discussion was used, drawing on some cases from the actual clinics. Both new and follow-up patients were included. Two clinics were conducted by a paediatrician and a nurse, whilst the third involved a paediatrician and a local doctor. The distance from the hospital to the clinics varied from approximately two miles to 22 miles. All equipment used was provided by primary care. Results: A total of 30 patients were seen. All patients found the location convenient, as it was nearer than the hospital. 70-90% clearly understood the reason for the change in venue. 95% agreed on the importance of their local doctor being involved in their care. 20% needed to be seen in the hospital for further investigations. Patients felt this was a more personalised, in-depth, friendly and polite experience. Local physicians felt this was a more relaxed, familiar and local experience for their patients, and they received immediate feedback regarding their own clinical management. 90% felt they gained important learning from the discussion time, and the paediatrician also learned about their understanding, gaps in knowledge, and focus areas. 80% felt this time was valuable for targeted learning. Equipment, information technology, and office space could be improved for the smooth running of future clinics. Conclusion: Acute paediatric outpatient clinics can be successfully established in primary care facilities. Careful patient selection and adequate facilities are important. We have demonstrated a further step in the reduction of patient flow to hospitals and the upskilling of primary care health professionals. This service is expected to become more efficient with experience.
Keywords: clinics, education, paediatricians, primary care
Procedia PDF Downloads 163
111 Gender, Agency, and Health: An Exploratory Study Using an Ethnographic Material for Illustrative Reasons
Authors: S. Gustafsson
Abstract:
The aim of this paper is to explore the connection between gender, agency, and health on personal and social levels over time. The use of gender as an analytical tool in health research has been shown to be useful for exploring thoughts and ideas that are taken for granted and that have relevance for health. The paper highlights the following three issues. There are multiple forms of femininity and masculinity. Agency and social structure are closely related, and are referred to in this paper as 'gender agency'. Gender is illuminated as a product of history but is also treated as a social factor and a producer of history. As a prominent social factor in the process of shaping living conditions, gender is highlighted as being significant for understanding health. Making health explicit as a dynamic and complex concept, and not merely the opposite of disease, requires a broader alliance with feminist theory and a post-Bourdieusian framework. A personal story, included with other ethnographic material about women's networking in rural Sweden, is used as an empirical illustration. Ethnographic material was chosen for its ability to illustrate historical, local, and cultural ways of doing gendered and capitalized health. New concepts characterize ethnography, exemplified in this study by 'processes of transformation'. The semi-structured interviews followed an interview guide drafted with reference to the background theory of gender. The interviews lasted about an hour and were recorded and transcribed verbatim. The transcribed interviews and the author's field notes formed the basis for the writing of this paper. Initially, the participants' interests in weaving, sewing, and various handicrafts became obvious foci for networking activities and seemed at first to shape compliance with patriarchy, which generally does the opposite of promoting health. However, a significant event disrupted the stability of this phenomenon. What was permissible for the women began to crack, and new spaces opened up. By exploiting these new spaces, the participants found opportunities to try out alternatives to emphasized femininity. Over time, they began combining feminized activities with degrees of masculinity, as leadership became part of the activities. In response to this, masculine enactment was gradually transformed and became increasingly gender-neutral. As the tasks became more gender-neutral, the activities assumed a more formal character, and the women stretched the limits of their capacity by enacting gender agency, a process the participants referred to as 'personal growth' and described as health promotion. What was described in terms of 'personal growth' can be interpreted as the effect of a raised status. Participation in women's networking strengthened the participants' structural position. More specifically, it was the gender-neutral position that was rewarded. To clarify the connection between gender, agency, and health on personal and social levels over time, the concept of 'processes of transformation' is used. This concept is suggested as a dynamic equivalent to habitus. Health is thus seen as resulting from situational access to social recognition, prestige, capital assets and, not least, meanings of gender.
Keywords: a cross-gender bodily hexis, gender agency, gender as analytical tool, processes of transformation
Procedia PDF Downloads 158
110 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy
Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan
Abstract:
For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j = 1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j = 1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be engaged if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise, by equating the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 while the system size is strictly less than T = (m+2) (with μ2 = 0), and to μ1 + μ2 (with μ2 > 0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To show that this investigation has a viable application, the results for M/M1+M2/1 queues have been used in the processing of waiting messages at a single computer node and in measuring the power consumption of the node.
Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue
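The threshold rule can be checked by simulation. The sketch below is a small discrete-event simulation of the M/M1,M2/2 queue under the FCFS-m policy, estimating the mean sojourn time: server-1 always takes the head of the line when free, and server-2 is engaged only while more than m customers wait (one plausible reading of the threshold; the rates are illustrative and satisfy (m+1)µ2 > µ1 > mµ2).

```python
import heapq
import random
from collections import deque

random.seed(7)

# M/M1,M2/2 under FCFS with an 'm' policy: server-1 is preferred, and server-2
# is engaged only while more than m customers are waiting. Rates are
# illustrative and satisfy (m+1)*mu2 > mu1 > m*mu2 as required in the abstract.
lam, mu1, mu2, m = 1.0, 0.9, 0.35, 2
T_END = 200_000.0

events = [(random.expovariate(lam), "arr")]
queue = deque()              # arrival times of waiting customers (FIFO)
in1 = in2 = None             # arrival time of the job each server is working on
sojourns = []

def dispatch(t):
    global in1, in2
    if in1 is None and queue:                 # head of the line prefers server-1
        in1 = queue.popleft()
        heapq.heappush(events, (t + random.expovariate(mu1), "d1"))
    if in2 is None and len(queue) > m:        # server-2 only above the threshold m
        in2 = queue.popleft()
        heapq.heappush(events, (t + random.expovariate(mu2), "d2"))

while True:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    if kind == "arr":
        queue.append(t)
        heapq.heappush(events, (t + random.expovariate(lam), "arr"))
    elif kind == "d1":
        sojourns.append(t - in1)
        in1 = None
    else:
        sojourns.append(t - in2)
        in2 = None
    dispatch(t)

print(f"mean sojourn time = {sum(sojourns) / len(sojourns):.3f}")
```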
Procedia PDF Downloads 362
109 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge
Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti
Abstract:
Structural systems are often subjected to degradation processes due to different kinds of phenomena, such as unexpected loadings, ageing of the materials, and fatigue cycles. This is especially true for bridges, whose safety evaluation is crucial for planning maintenance. This paper discusses the experimental evaluation of the stiffness reduction inferred from frequency changes under a uniform damage scenario. For this purpose, a 1:4 scaled bridge has been built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab linked to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies have been identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. After that, a set of loading cycles was applied to the bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies of the bridge were identified before and after the occurrence of the damage corresponding to each load step. The behavior of breathing cracks and its effect on the natural frequencies has been taken into account in the analytical calculations. By using an exponential function derived from a large body of experimental tests in the literature, it was possible to predict the stiffness reduction from the measured frequency variations. During the load tests, crack opening and midspan vertical displacement were also monitored.
Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis
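The paper's own exponential frequency-stiffness calibration is not given in the abstract, so the sketch below uses only the textbook relation for a linear system, f ∝ √(k/m): with mass unchanged, the stiffness ratio is (f_d/f_0)². All frequency values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Minimal sketch (baseline relation, not the authors' exponential law):
# for a linear system f ~ sqrt(k/m), so with mass unchanged the
# stiffness ratio follows k_d / k_0 = (f_d / f_0)**2 mode by mode.

f_undamaged = np.array([12.4, 33.1, 58.7])   # Hz, hypothetical modes
f_damaged   = np.array([11.6, 31.0, 55.9])   # Hz, after load cycles

stiffness_ratio = (f_damaged / f_undamaged) ** 2
loss_percent = 100.0 * (1.0 - stiffness_ratio)
for i, (r, loss) in enumerate(zip(stiffness_ratio, loss_percent), 1):
    print(f"mode {i}: k_d/k_0 = {r:.3f}  ->  stiffness loss ~ {loss:.1f}%")
```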
Procedia PDF Downloads 184
108 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis was conducted on a sample of 26,974 options written on three indexes, the S&P 500, the Nasdaq 100, and the Russell 2000, traded during 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to come closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution captures three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to exhibit overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used nonlinear curve-fitting through the trust-region-reflective algorithm on cross-sections of option prices to estimate the structural parameters of the Kou jump-diffusion model.
Keywords: jump-diffusion process, Kou model, leptokurtic feature, trust-region-reflective algorithm, US index options
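As a sketch of how the Kou model's double-exponential jumps enter a valuation, the following Monte Carlo prices a European call under hypothetical parameters. The paper instead calibrates the model to market option prices via the trust-region-reflective algorithm, which is not reproduced here; this sketch only simulates the risk-neutral dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only, not the paper's estimates)
S0, K, r, T = 100.0, 100.0, 0.05, 1.0   # spot, strike, rate, maturity
sigma, lam = 0.16, 1.0                  # diffusion volatility, jump intensity
p, eta1, eta2 = 0.4, 10.0, 5.0          # up-jump prob., up/down decay (eta1 > 1)

n_paths = 100_000
# Martingale correction: zeta = E[exp(Y)] - 1 for double-exponential Y
zeta = p * eta1 / (eta1 - 1) + (1 - p) * eta2 / (eta2 + 1) - 1

# Compound-Poisson jump part: draw every jump size at once, then
# accumulate the sizes back onto their owning paths.
N = rng.poisson(lam * T, n_paths)
total = int(N.sum())
u = rng.random(total)
sizes = np.where(u < p,
                 rng.exponential(1.0 / eta1, total),    # upward jumps
                 -rng.exponential(1.0 / eta2, total))   # downward jumps
jumps = np.zeros(n_paths)
np.add.at(jumps, np.repeat(np.arange(n_paths), N), sizes)

# Risk-neutral terminal log-price and discounted call payoff
Z = rng.standard_normal(n_paths)
log_ST = (np.log(S0) + (r - 0.5 * sigma**2 - lam * zeta) * T
          + sigma * np.sqrt(T) * Z + jumps)
price = np.exp(-r * T) * np.maximum(np.exp(log_ST) - K, 0.0).mean()
print(f"Monte Carlo Kou call price ~ {price:.3f}")
```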
Procedia PDF Downloads 429
107 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI
Authors: Ananya Ananya, Karthik Rao
Abstract:
Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibia cartilage, femoral bone, and tibia bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization, respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Similarity Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibia cartilage, the reference being the manual segmentation.
Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net
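Since the reported accuracy is expressed as Dice coefficients, the minimal sketch below shows how the per-class DSC = 2|A∩B| / (|A| + |B|) is computed from two label maps. The label maps here are synthetic toy data, not SKI10 segmentations.

```python
import numpy as np

def dice_score(pred, truth, label):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for one class label."""
    a = (pred == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2D label maps with the paper's 5 classes (0=background,
# 1=femoral cartilage, 2=tibia cartilage, 3=femoral bone, 4=tibia bone).
rng = np.random.default_rng(1)
truth = rng.integers(0, 5, size=(64, 64))
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05          # corrupt 5% of the pixels
pred[flip] = rng.integers(0, 5, size=flip.sum())

for lbl, name in enumerate(["background", "femoral cart.", "tibia cart.",
                            "femoral bone", "tibia bone"]):
    print(f"{name}: DSC = {dice_score(pred, truth, lbl):.3f}")
```

In practice the DSC is evaluated per 3D volume (as in the paper's volumetric scores) by applying the same formula to the stacked slice-wise predictions.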
Procedia PDF Downloads 261
106 Spectroscopic Study of Tb³⁺ Doped Calcium Aluminozincate Phosphor for Display and Solid-State Lighting Applications
Authors: Sumandeep Kaur, Allam Srinivasa Rao, Mula Jayasimhadri
Abstract:
In recent years, rare earth (RE) ion-doped inorganic luminescent materials have attracted great attention due to their excellent physical and chemical properties. These materials offer high thermal and chemical stability and exhibit good luminescence properties due to the presence of RE ions. The luminescent properties of these materials are attributed to the intra-configurational f-f transitions of the RE ions. A series of Tb³⁺ doped calcium aluminozincate samples has been synthesized via the sol-gel method. The structural and morphological studies have been carried out by recording X-ray diffraction patterns and an SEM image. The luminescence spectra have been recorded for a comprehensive study of the luminescence properties. The XRD profile reveals a single-phase orthorhombic crystal structure with an average crystallite size of 65 nm, as calculated using the Debye-Scherrer equation. The SEM image exhibits a completely random, irregular morphology of micron-sized particles of the prepared samples. The optimization of luminescence has been carried out by varying the dopant Tb³⁺ concentration within the range from 0.5 to 2.0 mol%. The as-synthesized phosphors exhibit intense emission at 544 nm when pumped at a 478 nm excitation wavelength. The optimized Tb³⁺ concentration has been found to be 1.0 mol% in the present host lattice. The decay curves show bi-exponential fitting for the as-synthesized phosphor. The colorimetric studies show green emission with CIE coordinates (0.334, 0.647) lying in the green region for the optimized Tb³⁺ concentration. This report reveals the potential utility of Tb³⁺ doped calcium aluminozincate phosphors for display and solid-state lighting devices.
Keywords: concentration quenching, phosphor, photoluminescence, XRD
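As an illustration of the crystallite-size estimate mentioned above, the sketch below evaluates the Debye-Scherrer equation D = Kλ/(β cos θ). The peak position and width are hypothetical values, chosen only so the result lands near the reported ~65 nm; the paper's actual diffraction data are not reproduced.

```python
import numpy as np

# Debye-Scherrer estimate D = K * lambda / (beta * cos(theta)).
K = 0.9                      # shape factor (dimensionless)
wavelength = 0.15406         # nm, Cu K-alpha radiation (assumed)
two_theta_deg = 32.0         # peak position in 2-theta (hypothetical)
fwhm_deg = 0.13              # peak FWHM in 2-theta (hypothetical)

beta = np.deg2rad(fwhm_deg)              # FWHM in radians
theta = np.deg2rad(two_theta_deg / 2.0)  # Bragg angle
D = K * wavelength / (beta * np.cos(theta))
print(f"crystallite size D ~ {D:.1f} nm")   # ~64 nm with these inputs
```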
Procedia PDF Downloads 154
105 Valorization Cascade Approach of Fish By-Products towards a Zero-Waste Future: A Review
Authors: Joana Carvalho, Margarida Soares, André Ribeiro, Lucas Nascimento, Nádia Valério, Zlatina Genisheva
Abstract:
Following the exponential growth of the human population, a remarkable increase in the amount of fish waste produced worldwide has occurred. The fish processing industry generates a considerable amount of by-products, which represents a considerable environmental problem. Accordingly, the reuse and valorisation of these by-products is a key process for marine resource preservation. The significant volume of fish waste produced worldwide, along with its environmental impact, underscores the urgent need for the adoption of sustainable practices, and the transformative potential of using fish processing waste to create industrial value is gaining recognition. The substantial amount of waste generated by the fish processing industry presents both environmental challenges and economic inefficiencies. Different added-value products can be recovered by the valorisation industries, while fishing companies can save the costs associated with managing those wastes, with advantages not only in terms of economic income but also in terms of environmental impact. Fish processing by-products have numerous applications; the target portfolio of products includes fish oil, fish protein hydrolysates, bacteriocins, pigments, vitamins, collagen, and calcium-rich powder, targeting food products, additives, supplements, and nutraceuticals. This literature review focuses on the main valorisation routes for fish wastes and the different compounds with high commercial value obtained from fish by-products, together with their possible applications in different fields. Highlighting this potential within sustainable resource management strategies can play an important role in reshaping the fish processing industry, driving it towards a circular economy and, consequently, a more sustainable future.
Keywords: fish process industry, fish wastes, by-products, circular economy, sustainability
Procedia PDF Downloads 17
104 Effect of Aging Time and Mass Concentration on the Rheological Behavior of Vase of Dam
Authors: Hammadi Larbi
Abstract:
Water erosion, the main cause of the siltation of a dam, is a natural phenomenon governed by natural physical factors such as aggressiveness, climate change, topography, lithology, and vegetation cover. Currently, the vase (silt) of certain dams is released downstream of the dikes during desilting by hydraulic means. Vases are characterized by complex rheological behaviors: rheofluidification (shear-thinning), yield stress, plasticity, and thixotropy. In this work, we studied the effect of the aging time of the vase in the dam and of its mass concentration on the flow behavior of a vase from the Fergoug dam, located in the Mascara region. In order to test the reproducibility of the results, two replicates were performed for most of the experiments. The flow behavior of the vase, studied as a function of storage time and mass concentration, is analyzed with the Herschel-Bulkley model. Increasing the aging time of the vase in the dam increases the yield stress and the consistency index of the vase. This phenomenon can be explained by the adsorption of water by the vase and the increase in volume by swelling, which modify the rheological parameters of the vase. Increasing the mass concentration likewise increases the yield stress and the consistency index; this behavior could be explained by interactions between the granules of the vase suspension. On the other hand, increasing the aging time and the mass concentration of the vase in the dam reduces the flow index of the vase. The study also showed an exponential decrease in apparent viscosity with increasing aging time of the vase in the dam. If a vase is allowed to age long enough for the yield stress to approach infinity, its apparent viscosity also tends towards infinity; this can, for example, subsequently pose problems when dredging dams. For good dam management, it can therefore be deduced that dams should be dredged as promptly as possible.
Keywords: vase of dam, aging time, rheological behavior, yield stress, apparent viscosity, thixotropy
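To illustrate the analysis step named in the abstract, the sketch below fits the Herschel-Bulkley model τ = τ₀ + K·γ̇ⁿ to a synthetic flow curve with SciPy. All data and parameter values are hypothetical, not the Fergoug measurements; the point is only to show how the yield stress τ₀, consistency index K, and flow index n are extracted from rheometer data.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) as a function of shear rate (1/s)."""
    return tau0 + K * gamma_dot**n

# Synthetic flow curve: hypothetical "true" parameters plus 3% noise
rng = np.random.default_rng(2)
gamma_dot = np.logspace(-1, 2, 25)                  # shear rate, 1/s
true = herschel_bulkley(gamma_dot, 8.0, 2.5, 0.45)  # Pa, illustrative
tau = true * (1 + 0.03 * rng.standard_normal(gamma_dot.size))

# Nonlinear least-squares fit with non-negative parameter bounds
popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                    p0=[1.0, 1.0, 0.5], bounds=(0, np.inf))
tau0, K, n = popt
print(f"yield stress tau0 = {tau0:.2f} Pa, consistency K = {K:.2f}, "
      f"flow index n = {n:.3f}")
```

Repeating such a fit for each aging time or concentration gives the trends reported in the abstract: τ₀ and K rising, and n falling, with aging and concentration.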
Procedia PDF Downloads 28