Search results for: multipoint optimal minimum entropy deconvolution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5291

551 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, a pattern robustly observed across many contexts and dimensions. The starting point of this research is the observation that the “type” of an agent is not a single exogenous variable. Agents, despite differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interaction of homophily effects in a model where agents have two-dimensional characteristics (e.g., race and a personal hobby such as basketball, which one either likes or dislikes) and where meetings are biased both in opportunity and in favor of same-type friendships. A novel feature of the model is a matching process with biased meeting probabilities on different dimensions, which helps explain how multidimensional networks form without missing layer interdependencies. The main contribution of this study is a welfare-based matching process for agents with multidimensional characteristics. In particular, this research shows that biased meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective of this research is to determine the pattern of homophily in network formation, which sheds light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, this study develops a method to describe agents’ homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by relative group size: society suffers more from social segregation when the two racial groups are of similar size.
The research also has policy implications: cultivating shared characteristics among agents helps diminish social segregation, but only if the minority group is small enough. This research includes both theoretical models and empirical analysis. After presenting the friendship formation model, the author first uses MATLAB to perform iterative calculations, then derives mathematical proofs of the results, and finally shows that the model is consistent with empirical evidence from high school friendships. The anonymized data come from The National Longitudinal Study of Adolescent Health (Add Health).
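The biased two-dimensional matching process can be sketched in a few lines. The following is a hypothetical toy simulation, not the author's MATLAB model: the agent types, bias parameter, and hobby-based acceptance rule are all illustrative assumptions.

```python
import random

def simulate_matching(n_agents=200, race_minority_share=0.3,
                      same_race_meeting_bias=0.8, n_meetings=2000, seed=42):
    """Toy two-dimensional matching: agents carry (race, hobby); meetings
    are biased toward same-race pairs, while ties are accepted only when
    the hobby matches, so homophily appears on both dimensions."""
    rng = random.Random(seed)
    # dimension 1: race (biased meetings); dimension 2: hobby (acceptance rule)
    agents = [(0 if rng.random() < race_minority_share else 1,
               rng.choice([0, 1])) for _ in range(n_agents)]
    edges = []
    for _ in range(n_meetings):
        i = rng.randrange(n_agents)
        # with probability `same_race_meeting_bias`, meet within own race
        if rng.random() < same_race_meeting_bias:
            pool = [j for j in range(n_agents)
                    if j != i and agents[j][0] == agents[i][0]]
        else:
            pool = [j for j in range(n_agents) if j != i]
        j = rng.choice(pool)
        # the tie forms only if the two agents share the hobby
        if agents[i][1] == agents[j][1]:
            edges.append((i, j))
    same_race = sum(1 for i, j in edges if agents[i][0] == agents[j][0])
    return len(edges), same_race / len(edges)
```

With the meeting bias at 0.8, roughly 80% of realized friendships end up same-race even though acceptance depends only on the hobby, illustrating how bias on one dimension induces homophily on the other.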

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 172
550 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains

Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou

Abstract:

With globalisation, the need for better coordination of production and distribution decisions has become increasingly important for industrial gas companies seeking to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from competitors (i.e., third-party suppliers) is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost, comprising production, third-party, and transportation costs. The key production decisions include production and inventory levels and the amount of product sourced from third-party suppliers, while the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of coordinated production, inventory, and routing decisions is a challenging problem, especially for large instances. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model with a simplified routing component; the solution of this first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amounts. Then, we fix these decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method.
Furthermore, a comparison of the solutions from the proposed approach with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) is presented to evaluate its efficiency.
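The two-stage decomposition can be illustrated on a toy instance. The sketch below is not the paper's MILP or column-generation formulation: it replaces the first-stage MILP with exhaustive search over a tiny source-allocation problem and the second stage with a nearest-neighbour route, and all costs, capacities, and customer names are made up.

```python
from itertools import product

def two_stage_toy():
    """Stage 1 picks a cost-minimal source (plant vs third party) per
    customer by exhaustive search, a stand-in for the MILP; stage 2
    routes the plant-served customers with a nearest-neighbour
    heuristic, a stand-in for the detailed routing stage."""
    demand = {"hospital": 4, "factory": 6, "clinic": 3}
    plant_cap, plant_cost, third_party_cost = 10, 2.0, 5.0
    coords = {"plant": (0, 0), "hospital": (3, 4),
              "factory": (6, 8), "clinic": (0, 5)}

    # stage 1: enumerate source choices, respecting plant capacity
    best = None
    for choice in product(["plant", "third"], repeat=len(demand)):
        assign = dict(zip(demand, choice))
        load = sum(demand[c] for c in assign if assign[c] == "plant")
        if load > plant_cap:
            continue  # infeasible: plant capacity exceeded
        cost = sum(demand[c] * (plant_cost if assign[c] == "plant"
                                else third_party_cost) for c in assign)
        if best is None or cost < best[0]:
            best = (cost, assign)
    cost, assign = best

    # stage 2: nearest-neighbour route over plant-served customers only
    dist = lambda a, b: ((coords[a][0] - coords[b][0]) ** 2 +
                         (coords[a][1] - coords[b][1]) ** 2) ** 0.5
    todo = [c for c in assign if assign[c] == "plant"]
    route, here = [], "plant"
    while todo:
        nxt = min(todo, key=lambda c: dist(here, c))
        route.append(nxt)
        todo.remove(nxt)
        here = nxt
    return cost, assign, route
```

On this instance the plant serves the hospital and factory (capacity 10), the clinic is sourced from the third party, and the route visits the nearer customer first, mirroring how fixing stage-1 decisions leaves a pure routing problem for stage 2.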

Keywords: production planning, inventory routing, column generation, mixed-integer linear programming

Procedia PDF Downloads 113
549 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials

Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar

Abstract:

Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition, the microstructure, or both are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure makes it possible to obtain gradients of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved, additionally considering holes, inclusions, and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. An exponential gradation in properties is imparted in the x-direction.
A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities (minor cracks, holes, and inclusions) are added either singly or in combination with each other. On the basis of this study, it is found that minor cracks have the least effect on the domain's failure crack length, soft inclusions have a moderate effect, and holes have the greatest effect. It is also observed that crack growth before failure is greater in each case when hard inclusions are present in place of soft inclusions.
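The Ramberg-Osgood model referenced above relates total strain to stress as ε = σ/E + α(σ/E)(σ/σ₀)ⁿ⁻¹, i.e., an elastic part plus a power-law plastic part. A minimal sketch, with placeholder material constants rather than the paper's FGM values:

```python
def ramberg_osgood_strain(sigma, E=200e3, sigma_0=250.0, alpha=0.3, n=5.0):
    """Total strain under the Ramberg-Osgood model: elastic term
    sigma/E plus plastic term alpha*(sigma/E)*(sigma/sigma_0)**(n-1).
    E and sigma_0 in MPa; the constants are illustrative placeholders."""
    elastic = sigma / E
    plastic = alpha * (sigma / E) * (sigma / sigma_0) ** (n - 1)
    return elastic + plastic
```

Well below the reference stress σ₀ the plastic term is negligible and the response is essentially linear elastic; near and above σ₀ the plastic term dominates, which is what makes the crack-growth analysis elastic-plastic rather than purely LEFM.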

Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)

Procedia PDF Downloads 390
548 Activated Carbon Content Influence in Mineral Barrier Performance

Authors: Raul Guerrero, Sandro Machado, Miriam Carvalho

Abstract:

Soil and aquifer pollution caused by the spilling of liquid hydrocarbons is induced by misguided operational practices and inefficient safety guidelines. According to the Environmental Brazilian Institute (IBAMA), during 2013 alone over 472.13 m³ of diesel oil leaked into the environment nationwide, counting reported cases only. Given this, there is an indisputable need to adopt appropriate environmental safeguards, especially in areas intended for the production, treatment, transportation, and storage of hydrocarbon fluids. According to the Brazilian norm ABNT-NBR 7505-1:2000, compacted soil (mineral) barriers used in structural contingency levees, such as those around storage tanks, are required to present a maximum water permeability coefficient, k, of 1×10⁻⁶ cm/s. However, as discussed by several authors, water cannot be adopted as the reference fluid for determining a site's containment performance against organic fluids, mainly because of the great discrepancy in polarity (dielectric constant) between water and most organic fluids. Previous studies within this same research group proposed an optimal range of values for the soil's index properties for mineral barrier composition focused on organic fluid containment. Unfortunately, in some circumstances it is not possible to find a soil with the required geotechnical characteristics near the containment site, increasing prevention and construction costs as well as environmental risks. For these cases, the use of an organic product or material as an additive to enhance mineral-barrier containment performance may be an attractive geotechnical solution. This paper evaluates the effect of activated carbon (AC) additions to a clayey soil on hydrocarbon fluid permeability.
Variables such as compaction energy, carbon texture, and addition content (0%, 10%, and 20%) were analyzed through laboratory falling-head permeability tests using distilled water and commercial diesel as percolating fluids. The results showed that the AC with the smaller particle size significantly reduced k values against diesel, indicating a direct relationship between the particle-size reduction (surface area increase) of the organic product and organic fluid containment.
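The falling-head test evaluates k from the head drop in a standpipe, k = (aL)/(At)·ln(h₁/h₂). A small helper, assuming consistent cm-and-second units; the symbols follow the standard test procedure, and the example values below are illustrative rather than the paper's measurements:

```python
import math

def falling_head_k(a_cm2, L_cm, A_cm2, t_s, h1_cm, h2_cm):
    """Permeability coefficient k (cm/s) from a falling-head test:
    k = (a*L)/(A*t) * ln(h1/h2), with standpipe area a, specimen
    length L and cross-section A, elapsed time t, head h1 -> h2."""
    return (a_cm2 * L_cm) / (A_cm2 * t_s) * math.log(h1_cm / h2_cm)
```

For example, a 1 cm² standpipe dropping from 100 cm to 50 cm in one hour through a 10 cm specimen of 50 cm² cross-section gives k ≈ 3.85×10⁻⁵ cm/s, well above the 1×10⁻⁶ cm/s limit cited from ABNT-NBR 7505-1:2000.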

Keywords: activated carbon, clayey soils, permeability, surface area

Procedia PDF Downloads 257
547 Syntheses in Polyol Medium of Inorganic Oxides with Various Smart Optical Properties

Authors: Shian Guan, Marie Bourdin, Isabelle Trenque, Younes Messaddeq, Thierry Cardinal, Nicolas Penin, Issam Mjejri, Aline Rougier, Etienne Duguet, Stephane Mornet, Manuel Gaudon

Abstract:

At the interface of the studies performed by three Ph.D. students (Shian Guan, 2017-2020; Marie Bourdin, 2016-2019; Isabelle Trenque, 2012-2015), a single synthesis route, the polyol-mediated process, was used successfully for the preparation of different inorganic oxides. All of these inorganic oxides were elaborated for their potential application as smart optical compounds. This synthesis route has allowed us to develop nanoparticles of zinc oxide, vanadium oxide, and tungsten oxide. It is easy to implement, inexpensive, offers large-scale production potential, and leads to materials of high purity. Obtaining nanometric yet perfectly crystalline particles by this route has notably made it possible to dope these matrix materials with high doping-ion concentrations (high solubility limits). Thus, Al³⁺- or Ga³⁺-doped ZnO powders, with doping rates high in comparison with the literature, exhibit remarkable infrared absorption properties thanks to their high free-carrier density. Note also that, owing to the narrow particle-size distribution of the as-prepared nanometric doped-ZnO powder, an original correlation between crystallite size and unit-cell parameters has been established. Also, depending on the annealing atmosphere used to treat the vanadium precursors, VO₂, V₂O₃, or V₂O₅ oxides with thermochromic or electrochromic properties can be obtained without any impurity, despite the versatility of the oxidation state of vanadium. This is of particular interest for vanadium dioxide, a relatively difficult-to-prepare oxide whose first-order metal-insulator phase transition is widely explored in the literature for its thermochromic behavior (in smart windows with optimal thermal insulation). Finally, the reducing nature of the polyol solvents ensures the production of oxygen-deficient tungsten oxide, conferring on the nano-powders exotic colorimetric properties as well as optimized photochromic and electrochromic behaviors.

Keywords: inorganic oxides, electrochromic, photochromic, thermochromic

Procedia PDF Downloads 221
546 Characteristics of Clinical and Diagnostic Aspects of Benign Diseases of Cervix in Women

Authors: Gurbanova J., Majidova N., Ali-Zade S., Hasanova A., Mikailzade P.

Abstract:

Currently, oncogynecological diseases are widespread, and the problem remains relevant in terms of quantitative growth. It is known that the increase in the number of benign diseases of the cervix leads to the development of precancerous conditions. Benign diseases of the cervix are the most common gynecological problem and are often precursors of malignant neoplasms, especially cervical cancer. According to statistics, benign diseases of the cervix account for 25-45% of all gynecological diseases. Among women's oncogynecological diseases, cervical cancer ranks second worldwide after breast cancer and first in mortality among oncological diseases in economically underdeveloped countries. We performed a comprehensive clinical and laboratory examination of 130 women aged 18 to 73 with benign cervical diseases: 59 (38.5%) women of reproductive age, 39 (30%) premenopausal, and 41 (31.5%) menopausal patients. A detailed anamnesis was collected from all patients; objective and gynecological examinations were performed; laboratory and instrumental examinations (USM, HPV DNA, smear microscopy, and PCR bacteriological examination for sexually transmitted infections) were carried out; and simple and extended colposcopy, liquid-based PAP-smear, and classic PAP-smear examinations were conducted. As a result of the research, the following nosological forms were found in women with benign diseases of the cervix: non-specific vaginitis in 10 (7.7%) cases; ectopia and endocervicitis in 60 (46.2%); cervical ectropion in 7 (5.4%); cervical polyp in 9 (6.9%); cervical leukoplakia in 15 (11.5%); atrophic vaginitis in 7 (5.4%); condyloma in 12 (9.2%); cervical stenosis in 2 (1.5%); and endometriosis of the cervix in 8 (6.2%) cases (p<0.001).
Characteristics of the menstrual cycle among the examined women were as follows: normal cycle in 97 (74.6%) cases; oligomenorrhea in 23 (17.7%); polymenorrhea in 4 (3.1%); and algomenorrhea in 6 (4.6%) cases (p<0.001). Cytological examination showed a specificity of 76.2% for liquid-based cytology versus 70.6% for the traditional PAP test. The overall diagnostic value was calculated to be 86% for liquid-based cytology and 78.5% for conventional PAP tests. Women with benign diseases of the cervix were treated by the diathermocoagulation method and with the "FOTEK EA 141M" device. Notably, 6 months after treatment with the "FOTEK EA 141M" device, no patient had relapsed, whereas recurrence was found in 23.7% of patients after diathermoelectrocoagulation. Thus, it is clear from the above that the study of cervical pathologies, the determination of optimal examinations, and effective treatment methods are among the urgent problems facing obstetrics and gynecology.
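Specificity and overall diagnostic value are confusion-matrix quantities. As an illustration only, the hypothetical counts below are chosen so that the metrics land near the reported 76.2% specificity and 86% diagnostic value for liquid-based cytology; they are not the study's raw data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Specificity, sensitivity, and overall accuracy ("diagnostic
    value") from confusion-matrix counts: true/false positives and
    true/false negatives against the reference diagnosis."""
    specificity = tn / (tn + fp)          # true-negative rate
    sensitivity = tp / (tp + fn)          # true-positive rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return specificity, sensitivity, accuracy
```

With 70 true positives, 5 false positives, 16 true negatives, and 9 false negatives, specificity is 16/21 ≈ 76.2% and accuracy is 86/100 = 86%.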

Keywords: cervical cancer, cytological examination, PAP-smear, non-specific vaginitis

Procedia PDF Downloads 122
545 Beneficial Effects of Whey Protein Concentrate in Venous Thrombosis

Authors: Anna Tokajuk, Agnieszka Zakrzeska, Ewa Chabielska, Halina Car

Abstract:

Whey is a by-product generated mainly in the production of cheese and casein. Powdered whey is used widely in the food industry, as a high-protein food for infants and convalescents, and by athletes, especially bodybuilders, to increase muscle mass during exercise. Whey protein concentrate-80 (WPC-80) is a source of bioactive peptides with beneficial effects on the cardiovascular system. Known health-beneficial properties of whey proteins include antidiabetic, blood-pressure-lowering, cardiovascular-function-improving, antibacterial, and antiviral effects, among others. To study its influence on the development of thrombosis, a venous thrombosis model was performed according to the protocol described by Reyers, with the modification by Chabielska and Gromotowicz. Male Wistar-Crl: WI (Han) rats from the studied groups were supplemented with one of two doses of WPC-80 (0.3 or 0.5 g/kg) for 7, 14, or 21 days, after which a one-hour venous thrombosis model was performed. The control group received 0.9% NaCl solution and was sham-operated. Statistical significance was assessed by the Mann-Whitney test. We observed that thrombus weight was decreased in animals receiving WPC-80, and this decrease was statistically significant in the 14- and 21-day supplemented groups. Blood count parameters did not differ significantly between rats with and without thrombosis induction, whether or not they were fed WPC-80. Moreover, the platelet count (PLT) was within the normal range in each group. The examined coagulation parameters in rats of the control groups were within normal limits. After WPC-80 supplementation there was a tendency toward a prolonged activated partial thromboplastin time (aPTT), but the difference was not significant.
In animals that received WPC-80 at 0.3 g·kg⁻¹ for 21 days, with and without induced thrombosis, prothrombin time (PT) and the international normalized ratio (INR) were somewhat decreased while remaining within the normal range, but the nature and significance of this observation are beyond the framework of the current study. Additionally, fibrinogen and thrombin time (TT) did not differ significantly between groups. Therefore, the exact effect of WPC-80 on the coagulation system remains elusive and requires further thorough research, including into its mechanisms of action. Determining the potential clinical application of WPC-80 requires selection of the optimal dose and duration of supplementation.
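The Mann-Whitney comparison used for the thrombus weights reduces to counting cross-pair orderings between the two groups. A minimal sketch of the U statistic; a complete test also needs the p-value from the U distribution, which is omitted here:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: over all cross pairs (xi, yj),
    count xi < yj as 1 and ties as 0.5, then report the smaller of
    U and n1*n2 - U (the conventional test statistic)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return min(u, len(x) * len(y) - u)
```

Completely separated samples give U = 0 (maximal evidence of a group difference), while heavily overlapping samples push U toward n1·n2/2.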

Keywords: antithrombotic, rats, venous thrombosis, WPC-80

Procedia PDF Downloads 119
544 Removal of Methylene Blue from Aqueous Solution by Adsorption onto Untreated Coffee Grounds

Authors: N. Azouaou, H. Mokaddem, D. Senadjki, K. Kedjit, Z. Sadaoui

Abstract:

Introduction: Water contamination caused by dye industries (food, leather, textile, plastics, cosmetics, paper-making, printing, and dye synthesis) has attracted increasing attention, since most dyes are harmful to human beings and the environment. Untreated coffee grounds were used as a high-efficiency adsorbent for the removal of a cationic dye (methylene blue, MB) from aqueous solution. The adsorbent was characterized using several techniques, including SEM, surface area (BET), FTIR, and pH of zero charge. The effects of contact time, adsorbent dose, initial solution pH, and initial concentration were systematically investigated. Results showed that the adsorption kinetics followed the pseudo-second-order kinetic model. The Langmuir isotherm model was in better agreement with the experimental data than the Freundlich and D-R models. The maximum adsorption capacity was found to be 52.63 mg/g. In addition, a possible adsorption mechanism was proposed based on the experimental results. Experimental: The adsorption experiments were carried out in batch mode at room temperature. A given mass of adsorbent was added to methylene blue (MB) solution and the mixture was agitated for a set time. Samples were collected at regular time intervals, and the concentrations of MB remaining in the supernatant solutions were determined using a UV-vis spectrophotometer. The amount of MB adsorbed per unit mass of coffee grounds (qt) and the dye removal efficiency (R%) were evaluated. Results and Discussion: Some chemical and physical characteristics of the coffee grounds are presented, and the morphology of the adsorbent was also studied. Conclusions: The good capacity of untreated coffee grounds to remove MB from aqueous solution was demonstrated in this study, highlighting its potential for effluent treatment processes.
The kinetic experiments show that adsorption is rapid, with the maximum adsorption capacity (qmax = 52.63 mg/g) achieved in 30 min. The adsorption process is a function of the adsorbent dose, pH, and dye concentration. The optimal parameters found were an adsorbent dose of m = 5 g, pH = 5, and ambient temperature. FTIR spectra showed that the principal functional sites taking part in the sorption process included carboxyl and hydroxyl groups.
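A Langmuir capacity such as qmax = 52.63 mg/g is usually obtained from the linearized isotherm Ce/qe = 1/(qmax·KL) + Ce/qmax: a least-squares line through (Ce, Ce/qe) has slope 1/qmax and intercept 1/(qmax·KL). A sketch with synthetic data rather than the paper's measurements; the KL value used below is an assumed constant:

```python
def fit_langmuir(Ce, qe):
    """Linearized Langmuir fit: regress Ce/qe on Ce; the slope is
    1/qmax and the intercept is 1/(qmax*KL). Pure-Python least
    squares over the equilibrium concentrations Ce and uptakes qe."""
    xs, ys = Ce, [c / q for c, q in zip(Ce, qe)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qmax = 1.0 / slope
    KL = slope / intercept
    return qmax, KL
```

Feeding in equilibrium data generated from qmax = 52.63 mg/g and an assumed KL of 0.2 L/mg recovers both constants, which is how the reported capacity would be read off experimental isotherm points.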

Keywords: adsorption, methylene blue, coffee grounds, kinetic study

Procedia PDF Downloads 233
543 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were split into paragraphs (separated by changes in speaker) and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor (a form of natural language processing in which significant words in a document are selected) was run on each paragraph of every interview, and every proper noun was stored in a data structure corresponding to its interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the documents. This narrowed the data down to about 5 hours' worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. The major findings of the study, and the curation of this methodology, raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity: if the model has too much knowledge, the user risks leaving out important data (too general); if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data is never altered beyond grammatical and spelling checks; instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Second, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
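The early pipeline stages described above (speaker-turn splitting, the 300-character banter filter, and proper-noun collection) can be sketched as follows. The regex speaker-label convention and the capitalized-word heuristic are illustrative stand-ins for the actual transcript format and keyword extractor; the B.A.R.T. summarization stage is not reproduced here.

```python
import re

def preprocess_transcript(raw, min_chars=300):
    """Split a transcript into speaker turns, drop short banter
    (< min_chars), and collect capitalized words as a crude stand-in
    for the proper-noun keyword extractor described in the text."""
    # a turn starts at a "Name:" speaker label at the beginning of a line
    turns = re.split(r"(?m)^(?=[A-Z][a-z]+:)", raw)
    kept = [t.strip() for t in turns if len(t.strip()) >= min_chars]
    proper_nouns = set()
    for t in kept:
        body = t.split(":", 1)[-1]  # strip the speaker label
        proper_nouns.update(re.findall(r"\b[A-Z][a-z]+\b", body))
    return kept, proper_nouns
```

Paragraphs that survive the filter would then be summarized only if they mention one of the collected proper nouns, which is the step that shrinks 60 hours of material to 20.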

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 31
542 Outpatient Pelvic Nerve and Muscle Treatment Reduces Pain and Improves Functionality for Patients with Chronic Pelvic Pain and Erectile Dysfunction

Authors: Allyson Augusta Shrikhande, Alexa Rains, Tayyaba Ahmed, Marjorie Mamsaang, Rakhi Vyas, Janaki Natarajan, Erika Moody, Christian Reutter, Kimberlee Leishear, Yogita Tailor, Sandra Sandhu-Restaino, Lora Liu, Neha James, Rosemarie Filart

Abstract:

Characterized by consistent difficulty getting and keeping an erection firm enough for intercourse, erectile dysfunction may affect up to 15% of adult men. Although awareness and access to treatment have improved in recent years, many patients do not actively seek diagnosis or treatment because of the stigma surrounding the condition, and those who do seek treatment are often dissatisfied with the efficacy of medication. The condition diminishes patients' quality of life by worsening mental health and relationships. The purpose of this study was to test the effectiveness of an outpatient neuromuscular treatment protocol in treating the symptoms of chronic pelvic pain and erectile dysfunction, improving pain and function. 56 patients aged 20-79 presented to an outpatient clinic for treatment of pelvic pain and erectile dysfunction symptoms, which had persisted for an average of 4 years. All patients underwent an external ultrasound-guided hydrodissection technique targeted at the pelvic peripheral nerves, in combination with pelvic floor musculature trigger-point injections. To measure the effects of this treatment, a five-question erectile dysfunction questionnaire was completed by each patient at the first clinic visit and three months after treatment began. Answers were summed for a total score of 5-25, with a higher score indicating better function. The average score before treatment was 14.125 (SD 5.411) (α=0.05; CI 12.708-15.542), which increased by 18% to an average of 16.625 (SD 6.423) (α=0.05; CI 14.943-18.307) after treatment (P=0.0004). Secondary outcome variables included a Visual Analogue Scale (VAS) to measure pelvic pain intensity and the Functional Pelvic Pain Scale (FPPS) to measure function across multiple areas. VAS scores decreased by 51% after three months: the mean VAS score was 5.87 before treatment and 2.89 after. Pelvic pain functionality improved by 34% after three months.
Pretreatment FPPS scores averaged 7.48, decreasing to 4.91 after treatment. These results indicate that this unique treatment was highly effective at relieving pain and increasing function for patients with erectile dysfunction.
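The reported percentage changes follow directly from the means given above:

```python
def pct_change(before, after):
    """Relative change in percent, as used for the reported outcomes."""
    return (after - before) / before * 100

# the three outcomes reported above
ed_score = pct_change(14.125, 16.625)  # questionnaire: higher is better
vas      = pct_change(5.87, 2.89)      # pain intensity: lower is better
fpps     = pct_change(7.48, 4.91)      # pain scale: lower is better
```

Rounding these recovers the 18% questionnaire gain and the 51% and 34% reductions in VAS and FPPS.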

Keywords: chronic pelvic pain, erectile dysfunction, nonsurgical, outpatient, trigger point injections

Procedia PDF Downloads 91
541 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute, using vehicles that take off and land vertically and provide passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it remains a challenge to rely on batteries rather than conventional petroleum-based fuels: batteries are heavy, and their energy density is still well below those of gasoline, diesel, or kerosene. Therefore, despite the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi. The approach and landing procedure was chosen as the subject of a genetic-algorithm optimization, and the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to keep the solution representative, and the results are highly dependent on these constraints. For the tested cases, performance improvements ranged from 5 to 10% when changing the initial airspeed, altitude, flight path angle, and attitude.
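A genetic algorithm of the kind described can be sketched generically: evolve a population of control-variable vectors under selection, crossover, and mutation. The cost function, bounds, and GA parameters below are stand-ins, not the paper's energy model or constraints.

```python
import random

def genetic_minimize(cost, n_vars, bounds, pop_size=30, gens=60, seed=7):
    """Elitist genetic algorithm: keep the best third of each
    generation, fill the rest with uniform-crossover children, and
    mutate one gene per child with bounded Gaussian noise."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 3]                  # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [ai if rng.random() < 0.5 else bi  # uniform crossover
                     for ai, bi in zip(a, b)]
            k = rng.randrange(n_vars)                  # mutation
            child[k] += rng.gauss(0, 0.1 * (hi - lo))
            child[k] = min(hi, max(lo, child[k]))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# stand-in cost: a quadratic "energy" with its optimum at the zero vector;
# the paper's cost is the electric energy integrated over the landing path
best = genetic_minimize(lambda v: sum(x * x for x in v),
                        n_vars=4, bounds=(-5, 5))
```

In the paper's setting, each individual would encode the time history of attitude, rotor RPM, and thrust direction, with constraint violations penalized in the cost.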

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 115
540 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization

Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi

Abstract:

Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. In this context, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using an extremely dilute hydrofluoric acid solution (HF + H₂O). The as-formed rolls had microscale lengths and widths of approximately 1 µm; the number of windings was 3-10, and the thickness of the film forming the rolls was about 10 nm. These rolls can function as capsules and/or pipelines, making them promising as a distinct device material. To date, the number of rolls and the roll length have been controlled by the anodization conditions. Device applications, however, also require control of the roll position and winding state, which has not yet been discussed. Grooves formed on the silicon surface before anodization might be useful for controlling the bubbles. In this study, we investigated the effect of such grooves on the position and shape of the rolls. The surfaces of silicon wafers were anodized; the starting material was p-type (100) single-crystalline silicon with a resistivity of 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen; their average width and depth were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume, and the C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL), and the atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls formed along the groove, indicating that the orientation of the rolls can be controlled by the grooves.
For a lattice-like groove, the rolls formed inside the lattice and along its long sides; in other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls were formed and the winding states were not uniform when the lattice size was too large. On the other hand, no rolls were formed for small lattices. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves patterned by lithography instead of sandpaper and the diamond pen. Furthermore, rolls incorporating nanoparticles will be formed for nanodevices.

Keywords: silicon roll, anodization, fine bubble, microstructure

Procedia PDF Downloads 29
539 Sustainable Solid Waste Management Solutions for Asian Countries Using the Potential in Municipal Solid Waste of Indian Cities

Authors: S. H. Babu Gurucharan, Priyanka Kaushal

Abstract:

The majority of the world's population is expected to live in the Asia-Pacific region by 2050, and thus its cities will generate the maximum waste. India, being the second most populous country in the world, is an ideal case study for identifying a solution for Asian countries. Waste minimisation and utilisation have always been part of Indian culture, but during rapid urbanisation, society lost the art of waste minimisation and utilisation. Presently, waste is not considered a resource, wasting an opportunity to tap resources. The technologies in vogue are not suited for the effective treatment of large quantities of generated solid waste without impacting the environment and the population; if not treated efficiently, waste can become a silent killer. This article highlights the Indian municipal solid waste scenario as a key indicator of Asian waste management and recommends sustainable and effective solutions to treat solid waste. The methods followed during the research were to analyse data on the characteristics of solid waste generated in Indian cities; evaluate current technologies to identify the most suitable one for Indian conditions with minimal environmental impact; interact with the technology providers' technical teams; generate a technical process specific to Indian conditions; and further examine the environmental impact and advantages/disadvantages of the suggested process. The most important finding from the study was the recognition that most of the current municipal waste treatment technologies being employed operate sub-optimally in Indian conditions.
Therefore, using the available data, the study generated heat and mass balances of candidate processes to arrive at the final technical process, broadly divided into waste processing, waste treatment, and power generation, through various permutations and combinations at each stage to ensure that the process is techno-commercially viable in Indian conditions. The environmental impact was then derived from secondary sources, and a comparison of the environmental impact of different technologies was tabulated. The major advantages of the suggested process are the effective use of waste for resource generation, both in terms of maximised power output and conversion to eco-friendly products like biofuels or chemicals using advanced technologies, minimum environmental impact, and the least landfill requirement. The major drawbacks are the capital, operations, and maintenance costs. The existing technologies in use in Indian municipalities have their own limitations, and the shortlisted technology is far superior to the others in vogue. Treatment of municipal solid waste with efficient green power generation is possible through a combination of suitable environment-friendly technologies. A combination of bio-reactors and plasma-based gasification is most suitable for Indian waste and, in turn, for Asian waste conditions.

Keywords: calorific value, gas fermentation, landfill, municipal solid waste, plasma gasification, syngas

Procedia PDF Downloads 184
538 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty

Authors: Ammar Y. Alqahtani

Abstract:

In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers' environmental awareness has forced original equipment manufacturers to be more environmentally conscious. Therefore, manufacturers have considered different ways to deal with EOL products, viz., remanufacturing, reusing, recycling, or disposal. When EOL products are remanufactured, reused, or recycled, manufacturers reduce the rate of depletion of virgin natural resources and their dependency on them, as well as the amount of harmful waste sent to landfills. Disposal of EOL products, however, contributes to the problem and is therefore used only as a last option. The number of EOL products needs to be estimated in order to fulfill the component demand; a disassembly process is then performed to extract individual components and subassemblies. Smart products are built with embedded sensors, implanted during production, and network connectivity to enable the collection and exchange of data. These sensors allow remanufacturers to predict an optimal warranty policy and time period to offer customers who purchase remanufactured components and products. Sensor-provided data can help evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods.
A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model's main objective is to determine the best combination of EOL products to be purchased from every supplier in each period, maximizing the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, a case study involving various simulation conditions is presented and analyzed to illustrate the applicability of the model.
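The purchase-planning idea described above can be sketched for a single period as a linear program (the paper's model is nonlinear and multi-period; this simplified sketch, with entirely hypothetical yields, costs, prices, demand, and supplier capacities, only illustrates choosing the "best combination of EOL products" to maximize profit while satisfying component demand):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-period data: 3 EOL product types, 2 component types.
Y = np.array([[2.0, 1.0],    # good components recovered per unit of product 0
              [1.0, 3.0],    # ... per unit of product 1
              [0.0, 2.0]])   # ... per unit of product 2
cost = np.array([5.0, 7.0, 4.0])    # purchase + disassembly cost per product
price = np.array([4.0, 3.0])        # revenue per component sold
demand = np.array([40.0, 60.0])     # component demand this period
cap = 30.0                          # units offered by each supplier

# Decision vector z = [x0, x1, x2, s0, s1]: x = products purchased,
# s = components sold (at most the demand, at most what is recovered).
c = np.concatenate([cost, -price])        # minimize cost minus revenue
A_ub = np.hstack([-Y.T, np.eye(2)])       # s_j <= sum_i Y[i, j] * x_i
b_ub = np.zeros(2)
bounds = [(0.0, cap)] * 3 + [(0.0, d) for d in demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
profit = -res.fun                         # best achievable profit
purchases = res.x[:3]                     # units to buy from each supplier
```

The actual DTO model would repeat such a decision over multiple periods, with nonlinear terms and uncertainty in yields and demand.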

Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics

Procedia PDF Downloads 138
537 An Interactive User-Oriented Approach to Optimizing Public Space Lighting

Authors: Tamar Trop, Boris Portnov

Abstract:

Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet, knowledge regarding the optimal light characteristics needed for creating a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach, which incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project between the Chinese and Israeli Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectra, and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employed various research methods and data collection tools, developed a location-based mobile application, and used multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study's preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity, implying potential for energy saving in brightly illuminated residential areas.
Study findings might contribute to the design of a smart and adaptive PSL strategy that enhances pedestrians’ perceived safety and comfort while reducing light pollution and energy consumption.
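A minimal sketch of the kind of model that links measured PSL parameters to perceived safety while controlling for locational, temporal, and individual attributes is an ordinary least-squares fit. The data and variable names below are synthetic assumptions, constructed so that, echoing the study's preliminary finding, light intensity carries no effect on the safety score:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic measured PSL parameters and control attributes.
intensity = rng.uniform(2, 30, n)        # illuminance, hypothetical range
glare = rng.uniform(0, 1, n)             # normalized glare metric
age = rng.uniform(18, 80, n)             # individual control
hour = rng.integers(18, 24, n)           # temporal control

# Synthetic "perceived safety" score: driven by glare and age,
# deliberately independent of intensity (coefficient 0.0).
safety = (6.0 - 2.5 * glare - 0.02 * age + 0.0 * intensity
          + rng.normal(0, 0.3, n))

# OLS: regress safety on intensity, glare, age, hour (plus intercept).
X = np.column_stack([np.ones(n), intensity, glare, age, hour])
beta, *_ = np.linalg.lstsq(X, safety, rcond=None)
# beta[1], the estimated intensity effect, should be near zero here.
```

In the real study the response would come from survey data collected via the mobile application, and a richer model (e.g., multilevel regression) would likely be used.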

Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions

Procedia PDF Downloads 135
536 Trajectory Generation Procedure for Unmanned Aerial Vehicles

Authors: Amor Jnifene, Cedric Cocaud

Abstract:

One of the most constraining problems facing the development of autonomous vehicles is the limitation of current technologies. Guidance and navigation controllers need to be faster and more robust, and communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is first constructed by linking all waypoints with straight segments in the order in which they are encountered along the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and their preceding transitional waypoints, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories that take the aircraft dynamics into account. In summary, this work presents a method for on-line 3D trajectory generation (TG) for Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each waypoint. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints and assembles these curves into a complete trajectory.
The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft, such that the result translates into a series of dynamically feasible maneuvers.
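One way to sketch the waypoint-linking and smoothing steps is with chained cubic Bezier segments that honor a prescribed tangent at each waypoint (a hypothetical stand-in for the paper's vector of approach). This is not the paper's TG algorithm, just a minimal illustration of geometric (G1) continuity at the connection points:

```python
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t (array-like)."""
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def build_trajectory(waypoints, tangents, k=0.4):
    """Chain cubic Bezier segments through 3D waypoints with G1 continuity.

    `tangents` are unit direction vectors prescribed at each waypoint;
    `k` scales the control-point offset relative to each chord length.
    Returns a list of (p0, p1, p2, p3) control-point tuples.
    """
    segments = []
    for a, b, ta, tb in zip(waypoints, waypoints[1:], tangents, tangents[1:]):
        chord = np.linalg.norm(b - a)
        # Interior control points lie along the prescribed tangents, so
        # consecutive segments share the same tangent direction at a joint.
        segments.append((a, a + k * chord * ta, b - k * chord * tb, b))
    return segments

# Three waypoints (x, y, altitude in m) and unit approach directions.
wps = np.array([[0.0, 0.0, 100.0], [200.0, 50.0, 120.0], [400.0, 0.0, 150.0]])
tans = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, -1.0, 0.0]])
tans /= np.linalg.norm(tans, axis=1, keepdims=True)
segs = build_trajectory(wps, tans)
```

The paper's method additionally enforces the minimum-curvature constraint from the aircraft performance model, which this sketch omits.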

Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints

Procedia PDF Downloads 409
535 A Retrospective Cohort Study on an Outbreak of Gastroenteritis Linked to a Buffet Lunch Served during a Conference in Accra

Authors: Benjamin Osei Tutu, Sharon Annison

Abstract:

On 21st November 2016, an outbreak of foodborne illness occurred after a buffet lunch served during a stakeholders' consultation meeting held in Accra. An investigation was conducted to characterise the affected people, determine the etiologic food, the source of contamination, and the etiologic agent, and to implement appropriate public health measures to prevent future occurrences. A retrospective cohort study was conducted via telephone interviews, using a structured questionnaire developed from the buffet menu. A case was defined as any person suffering from symptoms of foodborne illness, e.g., diarrhoea and/or abdominal cramps, after eating food served during the stakeholder consultation meeting in Accra on 21st November 2016. The exposure status of all members of the cohort was assessed by taking the food history of each respondent during the telephone interview. The data obtained were analysed using Epi Info 7. An environmental risk assessment was conducted to ascertain the source of the food contamination. Risks of foodborne infection from the foods eaten were determined using attack rates and odds ratios. Data were obtained from 54 people who consumed food served during the stakeholders' meeting. Of these, 44 reported symptoms of food poisoning, giving an overall attack rate of 81.48%. The peak incubation period was seven hours, with minimum and maximum incubation periods of 4 and 17 hours, respectively. The commonly reported symptoms were diarrhoea (97.73%, 43/44), vomiting (84.09%, 37/44), and abdominal cramps (75.00%, 33/44). From the incubation period, duration of illness, and symptoms, toxin-mediated food poisoning was suspected. The environmental risk assessment of the implicated catering facility indicated a lack of time/temperature control, inadequate food safety knowledge among workers, and sanitation issues. A limited number of food samples was received for microbiological analysis.
Multivariate analysis indicated that illness was significantly associated with consumption of the snacks served (OR 14.78, p < 0.001). No stool, blood, or etiologic food samples were available for organism isolation; however, the suspected etiologic agent was Staphylococcus aureus or Clostridium perfringens. The outbreak was probably due to the consumption of an unwholesome snack (tuna sandwich or chicken). The contamination and/or growth of the etiologic agent in the snack may have been due to breakdowns in cleanliness, time/temperature control, and good food handling practices. Training of food handlers in basic food hygiene and safety is recommended.
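The attack-rate and odds-ratio arithmetic used in such a cohort study can be sketched as follows. The overall attack rate uses the abstract's figures (44 ill of 54 interviewed); the 2x2 exposure table for the snack is hypothetical, for illustration only:

```python
def attack_rate(cases, total):
    """Attack rate as a percentage of the population at risk."""
    return 100.0 * cases / total

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a = exposed ill, b = exposed well,
    c = unexposed ill, d = unexposed well."""
    return (a * d) / (b * c)

# From the abstract: 44 of the 54 respondents fell ill.
overall_ar = attack_rate(44, 54)        # overall attack rate, ~81.5%

# Hypothetical 2x2 split for one food item (illustrative numbers only):
# 40 of 45 who ate the snack fell ill vs. 4 of 9 who did not.
or_snack = odds_ratio(40, 5, 4, 5)
```

In practice a tool like Epi Info computes these per food item, together with confidence intervals and p-values, which this sketch omits.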

Keywords: Accra, buffet, conference, C. perfringens, cohort study, food poisoning, gastroenteritis, office workers, Staphylococcus aureus

Procedia PDF Downloads 232
534 Gastro-Protective Actions of Melatonin and Murraya koenigii Leaf Extract Combination in Piroxicam Treated Male Wistar Rats

Authors: Syed Benazir Firdaus, Debosree Ghosh, Aindrila Chattyopadhyay, Kuladip Jana, Debasish Bandyopadhyay

Abstract:

The gastro-toxic effect of piroxicam, a classical non-steroidal anti-inflammatory drug (NSAID), has restricted its use in arthritis and similar diseases. The present study aims to determine whether a combination of melatonin and Murraya koenigii leaf extract can protect against piroxicam-induced ulcerative damage in rats. Rats were divided into four groups of six animals each: a control group administered distilled water orally, a combination-only group, a piroxicam-treated group, and a combination pre-administered piroxicam-treated group. In the combination pre-administered piroxicam-treated group, melatonin at a dose of 20 mg/kg body weight and antioxidant-rich Murraya koenigii leaf extract at a dose of 50 mg/kg body weight were administered successively at a 30-minute interval, one hour before oral administration of piroxicam at a dose of 30 mg/kg body weight. The combination-only group received both drugs without piroxicam, whereas the piroxicam-treated group received only piroxicam at 30 mg/kg body weight without any pre-treatment. Macroscopic examination, along with histopathological study of gastric tissue using haematoxylin-eosin and alcian blue staining, showed protection of the gastric mucosa in the combination pre-administered piroxicam-treated group. Biochemical determination of adherent mucus content and Image J analysis of collagen content in picro-sirius-stained sections of rat gastric tissue also revealed protective effects of the combination against piroxicam-mediated toxicity. The gelatinolytic activity induced by piroxicam was significantly reduced by pre-administration of the drugs, as shown by gelatin zymography of the rat gastric tissue.
The mean ulcer index determined from macroscopic study of the rat stomach fell to a minimum (0 ± 0.00; mean ± SEM, n = 6), indicating the absence of ulcer spots on pre-treatment of rats with the combination. The gastro-friendly prostaglandin PGE2, which is otherwise depleted on piroxicam treatment, was also well protected when the combination was pre-administered prior to piroxicam. The requirement of the individual drugs in low doses in this combinatorial therapeutic approach will possibly minimize the cost of therapy and eliminate the possibility of pro-oxidant side effects from high doses of antioxidants. The beneficial activity of this combination therapy in the rat model raises the possibility that similar protective actions might be observed if it is adopted by patients consuming NSAIDs like piroxicam. However, the introduction of any such therapeutic approach is subject to future studies in humans.

Keywords: gastro-protective action, melatonin, Murraya koenigii leaf extract, piroxicam

Procedia PDF Downloads 308
533 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. 
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
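A minimal sketch of the supervised Random Forest workflow behind such a classification (here with scikit-learn on synthetic per-pixel features rather than the GEE API; the band and terrain values are assumed, not real reflectances from the Beterou catchment) illustrates how overall accuracy and the kappa coefficient are computed:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
classes = ["forest", "savanna", "cropland", "settlement", "water"]

# Synthetic per-pixel features standing in for satellite inputs:
# [red, NIR, NDVI-like index, slope (deg), elevation (m)] class centroids.
centers = np.array([[0.05, 0.45, 0.80, 8.0, 350.0],
                    [0.10, 0.35, 0.55, 4.0, 300.0],
                    [0.15, 0.30, 0.40, 2.0, 280.0],
                    [0.25, 0.25, 0.05, 1.0, 290.0],
                    [0.03, 0.02, -0.3, 0.0, 250.0]])
noise = np.array([0.03, 0.03, 0.08, 1.5, 20.0])
X = np.vstack([c + rng.normal(0.0, noise, (200, 5)) for c in centers])
y = np.repeat(np.arange(5), 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0,
                                      stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
pred = clf.predict(Xte)
oa = accuracy_score(yte, pred)          # overall accuracy
kappa = cohen_kappa_score(yte, pred)    # agreement beyond chance
```

On GEE, the same role is played by `ee.Classifier.smileRandomForest` trained on sampled reference polygons; dropping the slope and elevation columns from `X` mimics the study's comparison of input-feature combinations.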

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 63
532 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and gas turbines for power generation confront several atmospheric contaminants with various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count, and the corresponding surface area in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigating the initial pressure drop response and fractional efficiencies. The pleating densities used were 28, 30, 32, and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. This work has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or compression of the filtration medium. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS.
Throughout the study, an optimal pleat count satisfying both the initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action, not on the total surface area the pleat density provides.
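The trade-off between pleat density and media velocity can be sketched from geometry alone. The panel dimensions and pleat depth below are hypothetical, and the computed area is the nominal one; the study's point is precisely that the effective area falls short of this figure once pleat crowding, panel deflection, and medium compression set in:

```python
def pleat_area_m2(pleats_per_100mm, pleat_depth_mm, width_mm, height_mm):
    """Nominal media area of a pleated panel: each pleat contributes
    two faces of depth `pleat_depth_mm` spanning the panel width."""
    n_pleats = pleats_per_100mm * height_mm / 100.0
    return n_pleats * 2.0 * pleat_depth_mm * width_mm * 1e-6

def media_velocity_ms(flow_m3h, media_area_m2):
    """Mean velocity through the filtration medium, in m/s."""
    return flow_m3h / 3600.0 / media_area_m2

# Illustrative 592 x 592 mm panel with 50 mm deep pleats (assumed sizes),
# at the study's pleat densities and a mid-range flow of 3000 m3/h.
areas = {p: pleat_area_m2(p, 50.0, 592.0, 592.0) for p in (28, 30, 32, 34)}
vels = {p: media_velocity_ms(3000.0, a) for p, a in areas.items()}
```

Nominally, a higher pleat count always lowers the media velocity; the experiments show this gain is partly lost when the extra pleats crowd and distort.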

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 135
531 Euthanasia as a Case of Judicial Entrepreneurship in India: Analyzing the Role of the Supreme Court in the Policy Process of Euthanasia

Authors: Aishwarya Pothula

Abstract:

Euthanasia in India is a politically dormant policy issue in the sense that discussions around it are sporadic (usually following developments in specific cases) and it remains a dominant issue in the public domain only for a fleeting period. In other words, it is a non-political issue that has been unable to get on the policy agenda. This paper studies how the Supreme Court of India (SC) plays a role in euthanasia's policy making. In 2011, the SC independently put a law in place that legalized passive euthanasia through its judgement in the Aruna Shanbaug v. Union of India case. According to this, it is no longer illegal to withhold/withdraw a patient's medical treatment in certain cases. This judgement, therefore, is the empirical focus of this paper. The paper employs two techniques of discourse analysis to study the SC's system of argumentation: text analysis using Gasper's analysis table and frame analysis, complemented by two further discourse techniques, metaphor analysis and lexical analysis. The framework within which the analysis is conducted lies in 1) the judicial process of India, i.e., SC procedures and constitutional rules and provisions, and 2) John W. Kingdon's theory of policy windows and policy entrepreneurs. The results of this paper are three-fold. First, the SC dismisses the petitioner's request for passive euthanasia on inadequate and weak grounds, thereby setting no precedent for the historic law they put in place; in other words, they leave the decision open for the Parliament to act upon. Hence the judgement, contrary to what many have argued, is by no means an instance of judicial activism/overreach. Second, they define euthanasia in a way that resonates with existing broader societal themes.
They combine this with a remarkable use of authoritative and protective tones/stances to settle at an intermediate position that balances the possible opposition to their role in the process and what they (perhaps) perceive to be an optimal solution. Third, they soften up the policy community (including the public) to the idea of passive euthanasia leading it towards a Parliamentarian legislation. They achieve this by shaping prevalent principles, provisions and worldviews through an astute use of the legal instruments at their disposal. This paper refers to this unconventional role of the SC as ‘judicial entrepreneurship’ which is also the first scholarly contribution towards research on euthanasia as a policy issue in India.

Keywords: argumentation analysis, Aruna Ramachandra Shanbaug, discourse analysis, euthanasia, judicial entrepreneurship, policy-making process, supreme court of India

Procedia PDF Downloads 268
530 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast

Authors: Fernando M. Soto, Gaetano Di Mino

Abstract:

The design of an unmodified bituminous mixture and of three mixtures containing rubber aggregate added by a dry process (RUMAC) was evaluated using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen over the total weight of the mix) and three dry-process rubberized mixtures (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave level 3 (high-traffic) mix design for rail lines. The railway trackbed section analyzed comprised a compacted granular layer of 19 cm, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder and rubber percentages on the sub-ballast layer mix design. This work demonstrates that blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. By using the same methodology of volumetric compaction, the densification curves resulting from each mixture were studied. The purpose is to obtain an optimal empirical multiplier of the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. Experimental parameters were obtained by an empirical-analytical method, evaluating the results of the gyratory compaction of an HMA and of the rubber-aggregate blends. An extensive integrated research program has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds.
Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete process for manufacturing, compacting, and curing these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification uniformity and workability comparable to those of conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight), or 3.95% by volumetric substitution, with a binder content of 6%.
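The role of the 'beta' factor can be sketched with the usual semi-log densification model fitted to gyratory compaction data, where density (as % of Gmm) grows linearly with the logarithm of the gyration count. The fitted parameters below are hypothetical, not the paper's values; beta is the multiplier on the gyration count that the rubberized blend needs to match the conventional mixture's density:

```python
def gyrations_to_density(target_pct_gmm, c1, k):
    """Invert the semi-log densification model %Gmm(N) = c1 + k * log10(N),
    returning the gyration count N needed to reach the target density."""
    return 10.0 ** ((target_pct_gmm - c1) / k)

# Hypothetical fitted parameters: density at N = 1 gyration (% of Gmm)
# and densification slope per decade of gyrations.
conv = (89.0, 3.2)   # conventional HMA
rub = (88.6, 3.2)    # dry-process rubberized blend (densifies more slowly)
target = 96.0        # target density, % of Gmm (i.e., 4% air voids)

n_conv = gyrations_to_density(target, *conv)
n_rub = gyrations_to_density(target, *rub)
beta = n_rub / n_conv   # extra-gyration multiplier: the 'beta' factor
```

With these illustrative numbers the rubberized blend needs roughly a third more gyrations than the conventional mix to reach the same density, which is the kind of correction the beta factor encodes.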

Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design

Procedia PDF Downloads 369
529 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming, and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
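The encircled flux itself can be sketched numerically: given a radial near-field intensity profile I(r), EF(R) is the fraction of total launched power enclosed within radius R. The Gaussian-like profile and 50 µm core below are assumptions for illustration, not a measured launch:

```python
import numpy as np

def encircled_flux(r, intensity, radii):
    """Fraction of total power enclosed within each radius in `radii`.

    `r` and `intensity` sample the radial near-field profile I(r); the
    power in an annulus scales as I(r) * r, integrated by trapezoids.
    """
    annular = intensity * r
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (annular[1:] + annular[:-1])
                                    * np.diff(r))))
    return np.interp(radii, r, cum / cum[-1])

# Hypothetical Gaussian-like launch into a 50 um core (25 um radius).
r = np.linspace(0.0, 25.0, 2001)
profile = np.exp(-(r / 12.0) ** 2)
ef = encircled_flux(r, profile, radii=np.array([4.5, 19.0, 25.0]))
```

Compliance testing then compares EF values at prescribed radii against the template limits of the relevant standard; connector shift or eccentricity effectively displaces or reshapes `profile`, which is why those defects perturb the EF statistics.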

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 377
528 Effects of Supplementation of Nano-Particle Zinc Oxide and Mannan-Oligosaccharide (MOS) on Growth, Feed Utilization, Fatty Acid Profile, Intestinal Morphology, and Hematology in Nile tilapia, Oreochromis niloticus (L.) fry

Authors: Tewodros Abate Alemayehu, Abebe Getahun, Akewake Geremew, Dawit Solomon Demeke, John Recha, Dawit Solomon, Gebremedihin Ambaw, Fasil Dawit Moges

Abstract:

The purpose of this study was to examine the effects of supplementation of zinc oxide (ZnO) nanoparticles and mannan-oligosaccharide (MOS) on the growth performance, feed utilization, fatty acid profiles, hematology, and intestinal morphology of Chamo strain Nile tilapia Oreochromis niloticus (L.) fry reared at an optimal temperature (28.62 ± 0.11 °C). Nile tilapia fry (initial weight 1.45 ± 0.01 g) were fed a basal/control diet (Diet-T1), a 6 g kg⁻¹ MOS supplemented diet (Diet-T2), a 4 mg ZnO-NPs supplemented diet (Diet-T3), a 4 mg ZnO-Bulk supplemented diet (Diet-T4), a combination of 6 g kg⁻¹ MOS and 4 mg ZnO-Bulk (Diet-T5), and a combination of 6 g kg⁻¹ MOS and 4 mg ZnO-NPs (Diet-T6). Duplicate aquariums were randomly assigned to each diet, and fish were hand-fed to apparent satiation three times daily (08:00, 12:00, and 16:00) for 12 weeks. Fish fed the MOS, ZnO-NPs, and combined MOS and ZnO-Bulk supplemented diets had higher weight gain, Daily Growth Rate (DGR), and Specific Growth Rate (SGR) than fish fed the basal diet and the other feeding groups, although the effect was not significant. According to the GC analysis, Nile tilapia supplemented with 6 g kg⁻¹ MOS, 4 mg ZnO-NPs, or a combination of ZnO-NPs and MOS showed the highest content of EPA and DHA and higher PUFA/SFA ratios than the other feeding groups. Mean villi length in the proximal and middle portions of the Nile tilapia intestine was affected significantly (p<0.05) by diet. Fish fed Diet-T2 and Diet-T3 had significantly greater villi lengths in the proximal and middle portions of the intestine compared to the other feeding groups. The inclusion of additives significantly improved goblet cell numbers in the proximal, middle, and distal portions of the intestine. Supplementation of additives also improved some hematological parameters compared with the control group. 
In conclusion, dietary supplementation of additives MOS and ZnO-NPs could confer benefits on growth performance, fatty acid profiles, hematology, and intestinal morphology of Chamo strain Nile tilapia.
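The growth and feed-utilization indices named above (weight gain, DGR, SGR) follow standard feeding-trial formulas; SGR, for instance, is 100·(ln W_f − ln W_i)/t. A minimal sketch, using hypothetical weights and feed intake rather than the trial's data:

```python
import math

def growth_indices(w_initial, w_final, days, feed_intake):
    """Standard growth and feed-utilization indices for a feeding trial.
    Weights in g per fish; feed_intake in g per fish over the trial."""
    weight_gain = w_final - w_initial
    dgr = weight_gain / days                       # daily growth rate, g/day
    sgr = 100.0 * (math.log(w_final) - math.log(w_initial)) / days  # %/day
    fcr = feed_intake / weight_gain                # feed conversion ratio
    return {"WG": weight_gain, "DGR": dgr, "SGR": sgr, "FCR": fcr}

# Hypothetical fry growing from 1.45 g to 12.0 g over 84 days (12 weeks)
idx = growth_indices(1.45, 12.0, 84, feed_intake=14.5)
```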

Keywords: chamo strain nile tilapia, fatty acid profile, hematology, intestinal morphology, MOS, ZnO-Bulk, ZnO-NPs

Procedia PDF Downloads 77
527 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding appears to have increased, especially given that the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to obtain a clear picture of flood extent and damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited data available because of the high cost of hydrodynamic models, it is not always feasible to rely on these sources of data to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might cause a disturbance in the generated model, and consequently, the model might not predict the flow simulation accurately. We propose to include a pre-existing stream layer generated by the Province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. 
By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used to generate highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
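The core HAND computation described above — each DTM cell's height above its draining stream cell — can be sketched on a toy grid. Note this is a deliberately simplified illustration: the full HAND index traces D8 flow directions to find the stream cell each cell actually drains to, whereas the sketch below substitutes the nearest (Euclidean) stream cell:

```python
import numpy as np

def hand_simplified(dtm, stream_mask):
    """Simplified HAND: height of each cell above the nearest stream cell.
    (Real HAND follows flow directions; nearest-cell is used here only
    to illustrate the normalization.)"""
    ys, xs = np.nonzero(stream_mask)
    stream_z = dtm[ys, xs]
    ny, nx = dtm.shape
    hand = np.zeros_like(dtm, dtype=float)
    for i in range(ny):
        for j in range(nx):
            d2 = (ys - i) ** 2 + (xs - j) ** 2
            hand[i, j] = max(dtm[i, j] - stream_z[np.argmin(d2)], 0.0)
    return hand

# Toy DTM (elevations in m): a valley with the stream along the middle column
dtm = np.array([[5., 3., 1., 3., 5.],
                [6., 4., 2., 4., 6.],
                [7., 5., 3., 5., 7.]])
stream = np.zeros(dtm.shape, dtype=bool)
stream[:, 2] = True
h = hand_simplified(dtm, stream)
flooded = h <= 2.0          # cells inundated for a 2 m rise above drainage
```

Thresholding the HAND raster at a given water-level rise, as in the last line, is how a rapid floodplain map is derived from the index.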

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 153
526 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of, and controversy over, the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the statistical data used to design automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. 
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross validation was used. For a given input song, the extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
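The train/test pipeline described above can be sketched with scikit-learn: per-genre NMF bases are learned, every spectrum is projected onto the stacked bases, and the resulting weight vector feeds an RBF-kernel SVM. The data here are synthetic stand-ins (the paper uses GTZAN with timbre and modulation features), and the non-negative decomposition at test time is approximated by a clipped least-squares projection:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in "spectra": each genre has a distinct dominant band
n_genres, n_songs, n_bins = 3, 30, 64
X, y = [], []
for g in range(n_genres):
    base = np.full(n_bins, 0.05)
    base[g * 20:(g + 1) * 20] = 1.0 + 0.2 * rng.random(20)  # genre-specific band
    for _ in range(n_songs):
        X.append(base * rng.uniform(0.5, 1.5) + 0.05 * rng.random(n_bins))
        y.append(g)
X, y = np.array(X), np.array(y)

# Training stage: learn one small NMF basis per genre on that genre's spectra
bases = [NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
         .fit(X[y == g]).components_ for g in range(n_genres)]
W = np.vstack(bases)        # (6, 64): stacked basis vectors, one block per genre

# Feature extraction: weights of each spectrum w.r.t. the stacked bases
# (clipped least-squares projection, a simple stand-in for the NNLS step)
feats = np.clip(X @ np.linalg.pinv(W), 0.0, None)

clf = SVC(kernel="rbf").fit(feats, y)   # RBF-kernel SVM, as in the paper
train_acc = clf.score(feats, y)
```

Spectra from a given genre load mostly onto that genre's block of basis vectors, which is why a low-dimensional weight vector (10 values per NMF feature set in the paper) carries genre-discriminative information.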

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
525 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing, like disk diffusion, are time-consuming, and other methods, including the E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution make it possible to measure unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy yields a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a few minutes. The bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. 
Our results, based on 550 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
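The abstract does not specify which multivariate method was used, but a common chemometric pipeline for spectra of this kind is dimensionality reduction followed by a linear classifier with cross-validation. The sketch below is one such pipeline (PCA then LDA) on synthetic stand-in spectra in which a single absorbance band separates the classes; it illustrates the workflow, not the authors' exact analysis:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in for FTIR absorbance spectra of sensitive vs resistant
# isolates: a small shift in one band region distinguishes the classes
n_per_class, n_wavenumbers = 100, 200
sensitive = rng.normal(1.0, 0.1, (n_per_class, n_wavenumbers))
resistant = rng.normal(1.0, 0.1, (n_per_class, n_wavenumbers))
resistant[:, 80:100] += 0.15        # hypothetical resistance-associated band
X = np.vstack([sensitive, resistant])
y = np.array([0] * n_per_class + [1] * n_per_class)

# PCA for dimensionality reduction, then LDA: a standard chemometric pipeline
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
```

Reporting cross-validated rather than resubstitution accuracy, as here, is what makes a headline figure like "higher than 85%" meaningful.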

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility

Procedia PDF Downloads 266
524 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of, or changes to, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in Wisetex, with varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared against the Wisetex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. 
Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in fibre crimp of the binder. In general, the simulations compared reasonably with the experimental results; however, deviation is evident due to assumptions present within the modelled results.
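The link between compressed preform thickness and fibre volume fraction invoked above follows from a standard areal-density relation, Vf = n·Aw / (ρf·t): compressing a preform of fixed areal weight to a smaller cavity thickness raises Vf proportionally. A minimal sketch, using hypothetical fabric values rather than the study's preforms:

```python
def fibre_volume_fraction(n_layers, areal_density, fibre_density, thickness):
    """Fibre volume fraction of a compressed preform.
    areal_density: fabric areal weight per layer (g/m^2)
    fibre_density: fibre density (g/cm^3)
    thickness: compressed cavity thickness (mm)"""
    # convert areal density to g/mm^2 and fibre density to g/mm^3
    return n_layers * (areal_density * 1e-6) / (fibre_density * 1e-3 * thickness)

# Hypothetical single-layer 3000 g/m^2 3D woven carbon preform (1.78 g/cm^3)
vf_at = {t: fibre_volume_fraction(1, 3000.0, 1.78, t) for t in (3.5, 3.0, 2.8)}
```

This inverse dependence on thickness is why the RTM cavity setting, not the as-woven state, dictates the final composite's Vf.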

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 136
523 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. Despite the different ways of handling used tires, the most common is to deposit them in a landfill, creating stocks of tires. These stocks can pose a fire hazard and provide habitat for rodents, mosquitoes, and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable, and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, the thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed, and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and by its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. 
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that devulcanization occurred successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
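Crosslink density from equilibrium swelling in toluene, as used above, is conventionally obtained with the Flory-Rehner equation, ν = −[ln(1−Vr) + Vr + χVr²] / [V₁(Vr^{1/3} − Vr/2)]. The sketch below assumes typical literature values for the rubber-toluene interaction parameter and the solvent molar volume, and the volume fractions are illustrative rather than this study's measurements:

```python
import math

def flory_rehner(vr, chi=0.39, v_solvent=106.3):
    """Crosslink density (mol/cm^3) from equilibrium swelling, via the
    Flory-Rehner equation. vr: rubber volume fraction in the swollen gel;
    chi: polymer-solvent interaction parameter; v_solvent: molar volume of
    toluene (cm^3/mol). chi and v_solvent are typical literature values,
    assumed here for illustration."""
    num = -(math.log(1.0 - vr) + vr + chi * vr * vr)
    den = v_solvent * (vr ** (1.0 / 3.0) - vr / 2.0)
    return num / den

# Greater swelling (lower vr) after devulcanization -> lower crosslink density
nu_before = flory_rehner(0.30)
nu_after = flory_rehner(0.20)
```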

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 390
522 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage

Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan

Abstract:

The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves, and energy-intensive industrial nitrogen fixation are threats to their future availability. Recovering P and N from domestic sewage water offers a solution. Recovered P and N can be applied to agricultural land, replacing virgin P and N. Thus, recovery from sewage water offers a solution befitting a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial for optimizing nutrient flows and minimizing losses. The Material Circularity Indicator (MCI) is a useful method to quantify the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, MCI has not been used in the context of nutrient recovery. Moreover, MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process in which flows and stocks change with time. The rate of recycling of nutrients in nature can depend on numerous factors, such as prevailing soil conditions, local hydrology, and the presence of animals. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed with the help of flow equations and transfer coefficients that incorporate the nutrient recovery step along with agricultural application, volatilization and leaching processes, plant uptake, and subsequent animal and human uptake. The model will then be used to calculate the proportions of linear and restorative flows (those coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. 
Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater. In this case study, located in Italy, biochar or zeolite is to be used for the recovery of P and N from domestic sewage through adsorption and thereafter used as a slow-release fertilizer in agriculture. Using this model, information on the efficiency of nutrient recovery and application can be generated, helping to optimize both and thereby reduce the dependence of the food system on the virgin extraction of P and N.
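The substance-flow idea described above — yearly stocks and flows linked by transfer coefficients, from which a restorative-flow fraction is read off — can be sketched as a toy discrete-time model. All coefficients below are illustrative assumptions, not the Italian case-study data:

```python
def simulate_p_flows(years, p_in_sewage=100.0, recovery_eff=0.5,
                     leach_frac=0.2, uptake_frac=0.6):
    """Toy dynamic substance-flow model for P (mass units per year).
    Each year: recovered P (adsorbed on biochar/zeolite) is applied to the
    soil stock; the stock loses a leaching fraction, and a fraction of the
    remainder is taken up by plants and returns to the sewage via food."""
    soil_stock, history = 0.0, []
    recycled_in_sewage = 0.0
    for _ in range(years):
        recovered = recovery_eff * p_in_sewage   # recovery step at the plant
        soil_stock += recovered                  # slow-release fertilizer applied
        leached = leach_frac * soil_stock        # loss to the environment
        uptake = uptake_frac * (soil_stock - leached)
        soil_stock -= leached + uptake
        recycled_in_sewage = uptake              # returns via human consumption
        history.append({"soil": soil_stock, "recycled": recycled_in_sewage})
    restorative_fraction = recycled_in_sewage / p_in_sewage
    return history, restorative_fraction

hist, frac = simulate_p_flows(20)
```

Running the loop to (near) steady state is what a time-static indicator like MCI cannot do: the restorative fraction here emerges from the interplay of the recovery, leaching, and uptake coefficients over successive years.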

Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water

Procedia PDF Downloads 198