Search results for: speed limit in london
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4529

629 2D-Numerical Modelling of Local Scour around a Circular Pier in Steady Current

Authors: Mohamed Rajab Peer Mohamed, Thiruvenkatasamy Kannabiran

Abstract:

In the present investigation, the scour around a circular pier subjected to a steady current was studied numerically using the two-dimensional MIKE21 Flow Model (FM) and Sand Transport (ST) Module developed by the Danish Hydraulic Institute (DHI), Denmark. An unstructured flexible mesh was generated for a rectangular flume 10 m wide, 1 m deep, and 30 m long. A grain size of d50 = 0.16 mm, a sediment gradation of 1.16, a pier diameter D = 30 mm, and a depth-averaged current velocity U = 0.449 m/s were considered in the model. The scour depth obtained from this model was validated, and the results show good agreement with flume experimental results. In order to estimate the scour depth, several simulations were made for three cases, viz., Case I: change in the sediment transport description in the numerical model, i.e., i) the Engelund-Hansen model, ii) the Engelund-Fredsøe model, and iii) the Van Rijn model; Case II: change in current velocity for a constant pile diameter D = 0.03 m; and Case III: change in pier diameter for a constant depth-averaged current speed U = 0.449 m/s. In Case I, the results indicate that the scour depth S/D is of the order of 1.73 for the Engelund-Hansen model, 0.64 for the Engelund-Fredsøe model, and 0.46 for the Van Rijn model; the scour depth estimates using the Engelund-Hansen method compare well with the experimental results. In Case II, the simulations show that the scour depth increases with increasing current velocity. In Case III, the results indicate that the scour depth increases with increasing pier diameter and attains a steady value when the Froude number > 2.71. All the results of the numerical simulations closely match the reported experimental values. Hence, this MIKE21 FM Sand Transport model can be used as a suitable tool to estimate the scour depth for field applications. Moreover, where the maximum scour depth must be predicted to provide suitable scour protection, the Engelund-Hansen method can be adopted to estimate the scour depth in the steady current region.
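As a rough illustration of the quantities involved, the sketch below computes a pier Froude number and converts the reported normalized scour depths S/D into absolute values for the stated pier diameter. The definition Fr = U / sqrt(gD) is an assumption made for illustration; it is not stated in the abstract.

```python
import math

# Assumed illustrative definitions; parameter values taken from the abstract.
g = 9.81   # gravitational acceleration, m/s^2
U = 0.449  # depth-averaged current velocity, m/s
D = 0.03   # pier diameter, m

# Pier Froude number (assumed definition Fr = U / sqrt(g * D))
Fr = U / math.sqrt(g * D)

# Reported normalized scour depths S/D for the three transport models (Case I)
s_over_d = {"Engelund-Hansen": 1.73, "Engelund-Fredsoe": 0.64, "Van Rijn": 0.46}

print(f"Pier Froude number: {Fr:.2f}")
for model, ratio in s_over_d.items():
    print(f"{model}: S/D = {ratio:.2f} -> scour depth S = {ratio * D * 1000:.1f} mm")
```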

Keywords: circular pier, MIKE21, numerical model, scour, sediment transport

Procedia PDF Downloads 303
628 High-Rises and Urban Design: The Reasons for Unsuccessful Placemaking with Residential High-Rises in England

Authors: E. Kalcheva, A. Taki, Y. Hadi

Abstract:

High-rises and placemaking are an understudied combination that is receiving increasing interest with the proliferation of this typology in many British cities. The reason for studying three major cities in England, London, Birmingham, and Manchester, is to learn from the latest advances in urban design in well-developed and prominent urban environments. The analysis of several high-rise sites reveals the weaknesses in the urban design of contemporary British cities and presents an opportunity to learn from the implemented examples. Therefore, the purpose of this research is to analyze design approaches towards creating a sustainable and varied urban environment when high-rises are involved. The research questions raised by the study are: what is the quality of the high-rises and their surroundings; what facilities and features are deployed in the research area; what is the role of the high-rise buildings in the placemaking process; and what urban design principles are applicable in this context. The methodology utilizes structured observation of the researched areas, using questions developed by the author to evaluate the outdoor qualities of the high-rise surroundings. In this context, the paper argues that the quality of the public realm around the high-rises is quite low, missing basic but vital elements such as plazas, public art, and seating, along with landscaping and pocket parks. There is a lack of coherence, the rhythm of the streets is often disrupted, and even though the high-rises are very aesthetically appealing, they fail to create a sense of place on their own. The implications of the study are that future planning can take into consideration the critique in this article and provide more opportunities for urban design interventions around high-rise buildings in British cities.

Keywords: high-rises, placemaking, urban design, townscape

Procedia PDF Downloads 315
627 Blood Flow Estimator of the Left Ventricular Assist Device Based in Look-Up-Table: In vitro Tests

Authors: Tarcisio F. Leao, Bruno Utiyama, Jeison Fonseca, Eduardo Bock, Aron Andrade

Abstract:

This work presents a blood flow estimator based on a Look-Up-Table (LUT) for control of a Left Ventricular Assist Device (LVAD). This device has been used as a bridge to transplantation or as destination therapy to treat patients with heart failure (HF). The destination therapy application requires a high-performance LVAD; thus, stable control is important to keep an adequate interaction between the heart and the device. LVAD control provides an adequate cardiac output while sustaining appropriate blood flow and perfusion pressure, also described as physiologic control. Because of thrombus formation and reduced system reliability, sensors are not desirable for measuring these variables (blood flow and pressure). To achieve this, control systems have been researched to estimate blood flow. The LVAD used in the study is composed of a centrifugal blood pump, a controller, and a power supply. The technique uses pump and actuator (motor) parameters of the LVAD, such as speed and electric current. The estimator relates electromechanical torque (motor or actuator) and hydraulic power (blood pump) via the LUT. An in vitro mock loop was used to evaluate deviations between estimated and actual blood flow. A solution of glycerin (50%) and water was used to simulate the viscosity of blood with a hematocrit of 45%. Tests were carried out with hematocrit variations of 25%, 45%, and 58%, corresponding to 40%, 50%, and 60% of glycerin in the water solution, respectively. A test with bovine blood (42% hematocrit) was also carried out. The mock loop is composed of a reservoir, tubes, pressure and flow sensors, and the fluid (or blood), in addition to the LVAD. The LUT-based estimator is patented in Brazil under number BR1020160068363. The mean deviation is 0.23 ± 0.07 L/min for the estimated mean flow. The largest mean deviation was 0.5 L/min considering the hematocrit variation. This estimator achieved deviations adequate for physiologic control implementation. Future work will evaluate the flow estimation performance in the control system of the LVAD.
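A minimal sketch of how a LUT-based flow estimator of this kind might be structured is shown below. The torque constant, the table entries, and the interpolation over hydraulic power are hypothetical placeholders for illustration, not values from the study.

```python
import numpy as np

def estimate_flow(motor_speed_rpm: float, motor_current_a: float) -> float:
    """Estimate pump flow (L/min) from motor speed and current via a look-up table.

    The torque constant and the table values below are hypothetical; a real
    estimator would be calibrated against mock-loop measurements.
    """
    K_T = 0.01  # assumed torque constant, N*m per ampere
    omega = motor_speed_rpm * 2 * np.pi / 60.0   # angular speed, rad/s
    torque = K_T * motor_current_a               # electromechanical torque, N*m
    hydraulic_power = torque * omega             # W (neglecting losses)

    # Hypothetical LUT: hydraulic power (W) -> flow (L/min) at a fixed fluid viscosity
    power_axis = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
    flow_axis = np.array([1.0, 2.0, 3.5, 5.0, 6.0])
    return float(np.interp(hydraulic_power, power_axis, flow_axis))

# Example usage with an illustrative operating point
print(f"Estimated flow: {estimate_flow(2500, 0.8):.2f} L/min")
```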

Keywords: blood pump, flow estimator, left ventricular assist device, look-up-table

Procedia PDF Downloads 175
626 'Infection in the Sentence': The Castration of a Black Woman's Dream of Authorship as Manifested in Buchi Emecheta's Second Class Citizen

Authors: Aseel Hatif Jassam, Hadeel Hatif Jassam

Abstract:

The paper discusses the phallocentric discourse that is challenged by women in general, and by women of color in particular, in spite of the simultaneity of oppression due to race, class, and gender in the diaspora. Therefore, the paper gives a brief account of women's experience in the light of postcolonial feminist theory. The paper also casts light on the theories of Luce Irigaray and Hélène Cixous, two feminist theorists who support and advise women to have their own discourse to challenge the infectious patriarchal sentence advocated by Sigmund Freud and Harold Bloom's model of literary history. Black women authors like Buchi Emecheta, as well as her alter ego Adah, a Nigerian-born girl and the protagonist of her semi-autobiographical novel Second Class Citizen, suffer from this phallocentric and oppressive sentence and from displacement as they migrate from Nigeria, a former British colony where they feel marginalized, to North London with the hope of realizing their dreams. Yet, in the British diaspora, they experience culture shock and continue to suffer from further marginalization due to class and race, and are insulted and, ironically, interiorized by their patriarchal husbands, who try to put an end to their dreams of authorship. With the phallocentric belief that women are not capable of self-representation in the background of their mindsets, the violent Sylvester Onwordi and Francis Obi, the husbands of Emecheta and Adah respectively, have practiced oppression on them by burning their authoritative voice, represented by the novels they write, while the women struggle with their economically atrocious living experience in the British diaspora.

Keywords: authorship, British diaspora, discourse, phallocentric, patriarchy

Procedia PDF Downloads 167
625 Extraction and Quantification of Triclosan in Wastewater Samples Using Molecularly Imprinted Membrane Adsorbent

Authors: Siyabonga Aubrey Mhlongo, Linda Lunga Sibali, Phumlane Selby Mdluli, Peter Papoh Ndibewu, Kholofelo Clifford Malematja

Abstract:

This paper reports on the successful extraction and quantification of an antibacterial and antifungal agent present in some consumer products (Triclosan: C₁₂H₇Cl₃O₂), generally found in wastewater or effluents, using a molecularly imprinted membrane adsorbent (MIMs), followed by quantification and removal on a high-performance liquid chromatography (HPLC) system. Triclosan is an antibacterial and antifungal agent present in consumer products such as toothpaste, soaps, detergents, toys, and surgical cleaning treatments. The MIMs were fabricated using polyvinylidene fluoride (PVDF) polymer with selective micro-composite particles known as molecularly imprinted polymers (MIPs) via a phase inversion by immersion precipitation technique. This resulted in improved hydrophilicity and mechanical behaviour of the membranes. Wastewater samples were collected from the central effluent treatment plant of the Umbogintwini Industrial Complex (UIC), on the south coast of Durban, KwaZulu-Natal, South Africa, and pre-treated before analysis. Experimental parameters such as sample size, contact time, and stirring speed were optimised. The resultant MIMs had an adsorption efficiency of 97% for TCS, compared with the NIMs and the bare membrane, which had 92% and 88%, respectively. The analytical method utilized in this study had a limit of detection (LoD) and a limit of quantification (LoQ) of 0.22 and 0.71 µg L-1 in wastewater effluent, respectively. The percentage recovery for the effluent samples was 68%. The detection of TCS was monitored for 10 consecutive days; the maximum TCS concentration detected in the treated wastewater was 55.0 μg/L on day 9 of the monitored days, while the lowest detected was 6.0 μg/L. As the concentrations of the analyte found in effluent water samples were not very diverse, this study suggests that MIMs could be a strong potential adsorbent for the development and continuous progress of membrane technology and environmental sciences, lending its capability to desalination.
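For context, removal efficiency and percentage recovery of this kind are typically computed as simple ratios; the sketch below illustrates those calculations with hypothetical concentration values (the abstract reports only the resulting percentages).

```python
def removal_efficiency(c_initial: float, c_final: float) -> float:
    """Percentage of analyte removed from solution (assumed standard definition)."""
    return (c_initial - c_final) / c_initial * 100.0

def percent_recovery(measured: float, spiked: float) -> float:
    """Recovery of a spiked amount, as commonly used to validate an analytical method."""
    return measured / spiked * 100.0

# Hypothetical illustration: 100 ug/L triclosan reduced to 3 ug/L by the MIMs
print(f"Removal efficiency: {removal_efficiency(100.0, 3.0):.0f}%")  # 97%
# Hypothetical illustration: 34 ug/L measured from a 50 ug/L spike
print(f"Recovery: {percent_recovery(34.0, 50.0):.0f}%")              # 68%
```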

Keywords: molecularly imprinted membrane, triclosan, phase inversion, wastewater

Procedia PDF Downloads 114
624 Challenges of Carbon Trading Schemes in Africa

Authors: Bengan Simbarashe Manwere

Abstract:

The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share of and participation in the carbon market to limited access to electricity. Approximately 800 million people spread across 47 African countries generate as much power as Spain, with a population of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate the continent's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. By these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at the rate of R120/tCO2e, the tax intentionally compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the period 1 June 2019 to 31 December 2022, during which the tax was meant to escalate at CPI + 2% for Scope 1 emitters. However, in the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies to limit agents' carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the associated carbon trading regulations were only finalised in December 2020. This points to a delay in the establishment of a carbon trading scheme. Relatedly, emitters in South Africa are not able to benefit from the 95% reduction in the effective carbon tax rate, from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and constructed from the beginning as new tailor-made systems in countries such as France, Australia, and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the Johannesburg Stock Exchange and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. Therefore, the carbon trading scheme will not be constructed from scratch. At the same time, carbon will be treated as a commodity in order to align with the existing institutional and infrastructural capacity. This explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important as South Africa is an innovator in carbon trading, and the successful trading of carbon credits on the JSE will lead to imitation, by early adopters first, followed by the middle majority thereafter.
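As a worked illustration of the figures quoted above, the sketch below computes the effective tax rate when the maximum 95% allowance applies, and the first-phase escalation under an assumed CPI value (the CPI figure is hypothetical).

```python
# Worked example of the carbon tax figures cited in the abstract.
headline_rate = 120.0   # R per tCO2e, Carbon Tax Act 15 of 2019
max_allowance = 0.95    # up to 95% allowance for fugitive and process emissions

effective_rate = headline_rate * (1 - max_allowance)
print(f"Effective rate with full allowances: R{effective_rate:.0f}/tCO2e")  # R6/tCO2e

# First-phase escalation at CPI + 2%; the CPI value below is hypothetical.
assumed_cpi = 0.045
escalated_rate = headline_rate * (1 + assumed_cpi + 0.02)
print(f"Rate after one year of CPI + 2% escalation (CPI assumed 4.5%): R{escalated_rate:.2f}/tCO2e")
```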

Keywords: carbon trading scheme (CTS), Johannesburg stock exchange (JSE), carbon tax act 15 of 2019, South Africa

Procedia PDF Downloads 52
623 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matches from ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a stricter selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimations when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events. This makes the estimator sensitive to data heterogeneity, heterogeneity here meaning different capture rates between individuals. In those examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
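To make the idea of capture-recapture population estimation concrete, the sketch below implements the classic two-sample Lincoln-Petersen estimator with hypothetical numbers. This is only a simplified illustration of the general principle; it is not the Capwire or BayesN algorithm compared in the study.

```python
def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Classic two-sample capture-recapture estimate of population size.

    N ≈ (n1 * n2) / m2, where n1 individuals are identified in the first
    sampling session, n2 in the second, and m2 appear in both.
    """
    if recaptured == 0:
        raise ValueError("At least one recapture is required for this estimator.")
    return marked_first * caught_second / recaptured

# Hypothetical non-invasive survey: 30 genotypes in session 1, 25 in session 2, 10 matches
print(f"Estimated population size: {lincoln_petersen(30, 25, 10):.0f}")  # ~75 individuals
```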

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 133
622 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of the allergic consumer. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels; among these, egg is included. Egg can be present as an ingredient or as contamination in raw and cooked products. The main egg allergen proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of the Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d'Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning. The laboratory is ISO 17025 accredited, and since 2019 it has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN® FAST Ei/Egg kit (R-Biopharm Italia srl): the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. Results: The results obtained through this study showed that egg proteins were found in 2% (n = 28) of food matrices, including meats and meat products (n = 16), fish and fish products (n = 4), bakery and pastry products (n = 4), pasta (n = 2), preparations for soups and broths (n = 1), and ready-to-eat products (n = 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Allergy to egg is one of the most common food allergies in the pediatric context. The percentage of positivity obtained from this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 126
621 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, complete CT to ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT to ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within the limit of a 20% difference from the base value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not differ by more than 2 mm vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT image used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
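A minimal sketch of how such tolerance checks could be encoded is shown below; the measured values are hypothetical, and the limits are the ones quoted in the abstract.

```python
def check_ct_qa(measured: dict, baseline: dict) -> dict:
    """Evaluate CT simulator QA results against the tolerances quoted in the abstract.

    `measured` and `baseline` hold CT number (HU), uniformity (HU), noise and table
    deviation values; the structure and example numbers are hypothetical.
    """
    return {
        # CT number accuracy: within +/- 5 HU of the commissioning value
        "ct_number": abs(measured["ct_number"] - baseline["ct_number"]) <= 5.0,
        # Field uniformity: within +/- 10 HU in the selected ROIs
        "uniformity": abs(measured["uniformity"]) <= 10.0,
        # Image noise: within 20% of the baseline value
        "noise": abs(measured["noise"] - baseline["noise"]) / baseline["noise"] <= 0.20,
        # Table stability: vertical deviation no more than 2 mm
        "table_deviation": measured["table_deviation_mm"] <= 2.0,
    }

baseline = {"ct_number": 0.0, "noise": 5.0}
measured = {"ct_number": 3.2, "uniformity": 6.5, "noise": 5.8, "table_deviation_mm": 1.1}
for test, passed in check_ct_qa(measured, baseline).items():
    print(f"{test}: {'PASS' if passed else 'FAIL'}")
```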

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 521
620 Intermittent Effect of Coupled Thermal and Acoustic Sources on Combustion: A Spatial Perspective

Authors: Pallavi Gajjar, Vinayak Malhotra

Abstract:

Rockets have played a predominant role in spacecraft propulsion. The quintessential aspect of the combustion-related requirements of a rocket engine is the minimization of the surrounding risks/hazards. Over time, it has become imperative to understand the variation of combustion rate in the presence of external energy source(s). Rocket propulsion represents a special domain of chemical propulsion assisted by high-speed flows in the presence of acoustic and thermal source(s). Jet noise leads to a significant loss of resources, and every year a huge amount of financial aid is spent to prevent it. External heat source(s) induce a high possibility of fire risk/hazards, which can sufficiently endanger the operation of a space vehicle. Appreciable work has been done with justifiable simplification and emphasis on the linear variation of external energy source(s), which yields good physical insight but does not cater to accurate predictions. The present work experimentally attempts to understand the correlation between inter-energy conversions and the non-linear placement of external energy source(s). The work is motivated by the need for better fire safety and enhanced combustion. The specific objectives of the work are: a) to interpret the related energy transfer for combustion in the presence of alternate external energy source(s), viz., thermal and acoustic; b) to fundamentally understand the role of key controlling parameters, viz., separation distance, the number of source(s), selected configurations and their non-linear variation, so as to resemble real-life cases. An experimental setup was prepared using incense sticks as the potential fuel and paraffin wax candles as the external energy source(s). The acoustics were generated using a frequency generator, and the source(s) were placed at selected locations. Non-equidistant parametric experimentation was carried out, and the effects on regression rate changes were noted. The results are expected to be very helpful in offering a new perspective on futuristic rocket designs and safety.

Keywords: combustion, acoustic energy, external energy sources, regression rate

Procedia PDF Downloads 137
619 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments

Authors: Md Azizul Islam

Abstract:

A seam is the line joining two separate fabric layers for functional or aesthetic purposes. Different kinds of seams are used for assembling the different areas or parts of a garment to increase serviceability. To empirically support the importance of seam efficiency for the serviceability of garments, this study focuses on choosing the right type of seam for particular sewing parts of the garment, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single jersey knitted finished fabrics of four different GSM (grams per square meter) values were used to make the test garment, a T-shirt. Three distinct types of seam (superimposed, lapped, and flat) were applied to the side seams of the T-shirts and sewn with a lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make 12 (3 x 4) T-shirts. For experimental purposes, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (stitches per inch: 8-9), needle size (16 in the Singer system), stitch length (31 cm), and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was done on a universal tensile tester to measure seam strength and fabric strength. The produced T-shirts were given to 12 soccer players who wore the shirts for 20 soccer matches (each match of 90 minutes duration). Serviceability of the shirts was measured by visual inspection on a 5-point scale based on the seam conditions. The study found that T-shirts produced with the lapped seam show better serviceability, and T-shirts made with flat seams score lowest in serviceability. From the calculated seam efficiency (seam strength/fabric strength), it was obvious that the performance (in terms of strength) of the lapped and bound seams is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam. So it can be predicted that to get a garment of high serviceability, lapped seams could be used instead of superimposed or other types of seam. In addition, less stressed garments can be assembled with other seams such as superimposed or flat seams.
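As a worked illustration of the seam efficiency ratio defined above, the sketch below compares three hypothetical grab-test strength measurements; the abstract does not report the actual strength values, so only the ranking mirrors the study.

```python
def seam_efficiency(seam_strength_n: float, fabric_strength_n: float) -> float:
    """Seam efficiency (%) as the ratio of seam strength to fabric strength."""
    return seam_strength_n / fabric_strength_n * 100.0

# Hypothetical grab-test results (N) for one fabric GSM
fabric_strength = 320.0
seams = {"lapped": 290.0, "superimposed": 245.0, "flat": 180.0}
for seam, strength in seams.items():
    print(f"{seam}: seam efficiency = {seam_efficiency(strength, fabric_strength):.1f}%")
```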

Keywords: seam, seam efficiency, serviceability, T-shirt

Procedia PDF Downloads 191
618 Simulation and Design of an Aerospace Mission Powered by “Candy” Type Fuel Engines

Authors: N. Hernández Huertas, F. Rojas Mora

Abstract:

Sounding rockets are aerospace vehicles that were developed in the mid-20th century, and since then numerous investigations have been carried out with the aim of innovating in this type of technology. However, the costs associated with the production of this type of technology are usually quite high, and therefore the challenge that exists today is to be able to reduce them. In this way, the main objective of this document is to present the design process of a Colombian aerospace mission capable of reaching the thermosphere using low-cost “Candy” type solid fuel engines. This mission is the latest development of the Uniandes Aerospace Project (PUA for its Spanish acronym), an undergraduate and postgraduate research group at Universidad de los Andes (Bogotá, Colombia) dedicated to venturing into this type of technology. The investigations that have been carried out on Candy-type solid fuel, which is a compound of potassium nitrate and sorbitol, have allowed the production of engines powerful enough to reach space, which represents a unique technological advance in Latin America and an important development in experimental rocketry. Following the iterative engineering design methodology, it was possible to design a 2-stage sounding rocket with one solid fuel engine in each stage, which was then simulated in RockSim V9.0 software and reached an apogee of approximately 150 km above sea level. Similarly, a speed of Mach 5 was obtained, and a finite element analysis showed that the rocket is strong enough to withstand such speeds. Under these premises, it was demonstrated that it is possible to build a high-power aerospace mission at low cost using Candy-type solid fuel engines. For this reason, the feasibility of carrying out similar missions clearly depends on the ability to replicate the engines in the best way since, as mentioned above, the design of the rocket is adequate to reach supersonic speeds and reach space. Consequently, with a team of at least 3 members, the mission can be completed in less than 3 months. Therefore, in publishing this project, it is intended to be a reference for future research in this field and to benefit the industry.

Keywords: aerospace missions, Candy type solid propellant engines, design of solid rockets, experimental rocketry, low costs missions

Procedia PDF Downloads 99
617 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts

Authors: Marco Sewald

Abstract:

Due to the current enforcement of extraterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents and other non-resident aliens on their worldwide income, based on local U.S. tax laws. To enforce these policies, the U.S. has implemented ‘saving clauses’ in all tax treaties and has introduced several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediaries Agreements (QI) and Intergovernmental Agreements (IGA), requiring Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules is creating additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers. FFIs in Europe react with a growing denial of specific financial services to this population. The number of U.S. citizens renouncing citizenship has increased dramatically in recent years. A case study is chosen as an appropriate methodology and research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, where the boundaries between phenomenon and context are not clearly evident, and in which multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether the policies are in accordance with desirable moral, political, and economic aims, or whether they may serve other causes. The research critically evaluates the financial and non-financial consequences and develops sufficient strategies. It further discusses these strategies to avoid the undesired consequences of extraterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover, (2) pay U.S. double/surcharge taxes and tax preparation fees and accept the imposed product limitations, and (3) renounce U.S. citizenship and pay possible exit taxes, tax preparation fees and the requested $2,350 fee to renounce. While the first strategy is unlawful and therefore unsuitable, the second strategy is only suitable if the U.S. citizen residing abroad is planning to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit exposure to U.S. double and surcharge taxation and the limitations on financial products. The results are believed to add a perspective to the current academic discourse regarding U.S. citizenship-based taxation, currently dominated by U.S. scholars, while providing sufficient strategies for the affected population at the same time.

Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship

Procedia PDF Downloads 194
616 Fire Safe Medical Oxygen Delivery for Aerospace Environments

Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl

Abstract:

Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion. This may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, the complex is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb or remove the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685 °C/sec, the complex desorbs ~90% of the O2 in 110 sec. These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will create continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 times that in air and desorb over 90% of the O2 at 100°C. Over 12 cycles of thermogravimetric measurement, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by 0.5 kg of complex can potentially remove 9 L/min of O2 (~90% desorbed at 100°C) from exhaled air continuously. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.

Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption

Procedia PDF Downloads 96
615 Expression of PGC-1 Alpha Isoforms in Response to Eccentric and Concentric Resistance Training in Healthy Subjects

Authors: Pejman Taghibeikzadehbadr

Abstract:

Background and Aim: PGC-1 alpha is a transcription factor that was first detected in brown adipose tissue. Since its discovery, PGC-1 alpha has been known to facilitate beneficial adaptations such as mitochondrial biogenesis and increased angiogenesis in skeletal muscle following aerobic exercise. Therefore, the purpose of this study was to investigate the expression of PGC-1 alpha isoforms in response to eccentric and concentric resistance training in healthy subjects. Materials and Methods: Ten healthy men were randomly divided into two groups (5 subjects in the eccentric group and 5 in the concentric group). The isokinetic contraction protocols included eccentric and concentric knee extension at maximum power and an angular velocity of 60 degrees per second. The torques assigned to each subject were chosen to match the workload in both protocols, with a rotational speed of 60 degrees per second. Contractions consisted of a maximum of 12 sets of 10 repetitions for the right leg, with a rest time of 30 seconds between sets. At the beginning and end of the study, a biopsy of the vastus lateralis (lateral broad) muscle was performed. Biopsies were taken in both the distal and proximal directions of the lateral thigh. To evaluate the expression of the PGC1α-1 and PGC1α-4 genes, tissue analysis was performed in each group using the Real-Time PCR technique. Data were analyzed using the dependent t-test and analysis of covariance. SPSS 21 and Excel 2013 software were used for data analysis. Results: The results showed that intra-group changes of PGC1α-1 after one session of activity were not significant in the eccentric (p = 0.168) and concentric (p = 0.959) groups, and inter-group changes showed no difference between the two groups (p = 0.681). Intra-group changes of PGC1α-4 after one session of activity were significant in the eccentric group (p = 0.012) and the concentric group (p = 0.02), while inter-group changes showed no difference between the two groups (p = 0.362). Conclusion: It seems that the lack of significant changes in the variables of interest reflects an exercise stimulus insufficient to drive increases in PGC1α-1 and PGC1α-4. With regard to the acute response examined here, it seems that the adaptation debate has produced differing results that still need to be addressed.

Keywords: eccentric contraction, concentric contraction, PGC1α-1, PGC1α-4, human subject

Procedia PDF Downloads 70
614 Emphasis on Difference: Ethnic and National Cultural Heritage Identities and Issues in East Asia Focusing on Korea Cases

Authors: Hyuk-Jin Lee

Abstract:

Even though 23 years of the 21st century have passed, nation-state and nationality-centered cultural identities are still the sentiments and ideologies that dominate the world. Nevertheless, as seen in many cases in Europe, a new perspective is needed to recognize mutual exchanges and influences and to view them as natural cultural exchanges between countries. The situation in East Asia is completely different from Europe. This is presumed to stem from a long tradition of holding an ethnocentric concept of the state for at least hundreds of years, quite different from Europe, where the concept of a nation-state was established relatively recently. In other words, unlike Europe, where active exchanges took place, the problem stems from the unique characteristics of East Asia, which has a strong tradition of finding its identity in 'difference'. Thus, it is not hard to find cultural studies or news from the three East Asian countries emphasizing differences among one another. This applies to all cultural areas, including traditional architecture. For example, in the field of Korean traditional architecture, buildings showing influences from neighboring countries tend to be ignored, even if they are traditional Korean architecture. In addition, in the case of Korea, there seems to be one more harmful cultural aftereffect caused by the 36 years of Japanese colonial rule in the early 20th century: the obsessive filtering concept of 'it must be different from Japan'. In other words, the implicit ideological coercion that the definition of 'Korean cultural heritage' should not be influenced by exchanges with Japan may be found throughout Korean studies. The architectural and cultural record of the vast period from the Three Kingdoms era to the beginning of Joseon, a period in which exchanges of cultural influence with neighboring countries were relatively strong compared to the late Joseon Dynasty, also reflects the 'distorted filtering' caused by constructing a reactive identity against the Japanese colonial period. It is important to look at cultural heritage and traditions as they are, inductively rather than deductively. If not, we may often ignore or limit our own precious cultural heritage. Conversely, if Baekje, the ancient Korean kingdom, helped Japan in construction and its craftsmen played a big role in building ancient temples, it would be a healthier perspective to view this as cultural exchange rather than proudly adopting a cultural owner's perspective, because such a point of view allows a proper reconstruction of ancient and medieval East Asian culture (strictly speaking, the character common to East Asia at the time). In particular, this study examines this topic by giving specific examples from each field of Korean cultural studies. In the search for cultural identity, minimizing the excessive emphasis placed on originality and difference would be more helpful for healthy relations between countries and for collaborative research on sensitive interpretations of historical facts, as well as within cultural circles.

Keywords: cultural heritage identity, cultural ideology, East Asia, Korea

Procedia PDF Downloads 68
613 Design Forms Urban Space

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Thoughtful and sequential design strategies will shape the future of human beings' lifestyles. Design as a product, whether a small piece of furniture on a sidewalk or a multi-story structure at the urban scale, is important in creating a sense of quality for the citizens of a city. Technology, alongside the economy, has played a major role in improving the design process and increasing clients' awareness of the character of the design product they require. Architects, along with other design professionals, have benefited from improvements in aesthetics and technology in the building industry. Accordingly, people's expectations about the quality of habitable space have risen. However, the question is whether the quality of the architectural design product has increased at the same speed as technology and clients' expectations. Is it behind or ahead of technological and economic improvements? This study will work on developing a model of planning for New York City, from the past to the present to the future. The role of thoughtful thinking at the design stage, regardless of where or when it is applied, may have a positive or negative result. However, considering design objectives based on the needs of human beings may help in developing a successful design plan. Technology, economy, culture, and people's support may be other parameters in designing a good product. ‘Design Forms Urban Space’ is going to be carried out in an analytical, qualitative, and quantitative framework, studying cases from all over the world and their achievements compared to New York City's development. Technology, organic design, materiality, urban forms, city politics, and sustainability will be discussed through different cases at the international scale. From design professionals' interest in doing high-quality work for a particular answer, to the importance of being a follower, from the ‘Zero-Carbon City’ in the Persian Gulf to the ‘Polluted City’ in China, and from ‘urban-scale furniture’ in cities to ‘seasonal installations’ of a megacity, all will be studied with references and a detailed look at the analysis of each case in order to propose the most resourceful, practical, and realistic solutions to the questions of ‘A Good Design in a City’, ‘New City Planning and Social Activities’, and ‘New Strategic Architecture for Better Cities’.

Keywords: design quality, urban scale, active city, city installations, architecture for better cities

Procedia PDF Downloads 336
612 A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers

Authors: Jennifer E Simpson

Abstract:

Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open up a world of possibilities and promise characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience of growing up using the Internet, social media and digital devices to draw on. This presentation is concerned with the findings of a small qualitative study about the use of digital devices and the Internet by care-experienced young people to stay in touch with their families, and the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions to move foster carers from such restrictive approaches to those that are grounded in co-use and are interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers and allocated social workers were recruited. All respondents undertook a semi-structured interview, with the young people detailing which social media apps and other devices they used to contact their families via an Ecomap. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using a Framework analytic method to explore the various attitudes, as well as the complementary and contradictory perspectives, of the young people, their foster carers and allocated social workers. Findings: The majority of foster carers made use of parental mediation strategies that erred towards setting rules and regulations (restrictive), ad-hoc checking of a young person’s behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). The majority of social workers also had a strong preference for restrictive approaches. It was noted that minimal use was made by foster carers of parental mediation strategies that involved talking about content (active/interpretive) or sharing Internet activities (co-use). Conclusions and implications: Trepidation on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted more towards restriction, with little use made of approaches such as co-use and interpretation. This lack of balance calls for solutions that are grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change.

Keywords: parental mediation strategies, risk, children in state care, online safety

Procedia PDF Downloads 64
611 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting the weather and atmospheric conditions. There are many tools to prepare and analyze these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in world forecasting centers to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is the main issue due to its complex topography. Also, there are different types of climate in these areas. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction / National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis. The temporal resolution of ERA5 is hourly, and that of NCEP/NCAR is six-hourly. Some atmospheric parameters, such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature, were selected and analyzed. Different types of precipitation (rain and snow) were selected. The results showed that NCEP/NCAR has more ability to demonstrate the intensity of the atmospheric systems. ERA5 is suitable for extracting parameter values at specific points. Also, ERA5 is appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over the CS. The sea surface temperature of the NCEP/NCAR product has low resolution near the coast. However, both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS, although, due to the time lag, they are not suitable for forecast centers. The application of these two datasets is for research and for verification of meteorological models. Finally, ERA5 has a better resolution with respect to the NCEP/NCAR reanalysis data, but the NCEP/NCAR data are available from 1948 and are appropriate for long-term research.
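A minimal sketch of the kind of point extraction described above is shown below, using the xarray library; the file names, variable names, and coordinates are hypothetical placeholders for locally downloaded ERA5 and NCEP/NCAR files.

```python
import xarray as xr

# Hypothetical local files containing downloaded reanalysis fields
era5 = xr.open_dataset("era5_mslp_hourly.nc")    # hourly ERA5 mean sea level pressure
ncep = xr.open_dataset("ncep_mslp_6hourly.nc")   # six-hourly NCEP/NCAR counterpart

# Extract a time series at a point on the southern Caspian coast (~37.3N, 49.6E)
point_era5 = era5["msl"].sel(latitude=37.3, longitude=49.6, method="nearest")
point_ncep = ncep["slp"].sel(lat=37.3, lon=49.6, method="nearest")

# Resample ERA5 to the 6-hourly NCEP/NCAR time step for a like-for-like comparison
point_era5_6h = point_era5.resample(time="6h").mean()
print(point_era5_6h.to_series().head())
print(point_ncep.to_series().head())
```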

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 109
610 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment

Authors: Pranjal Srivastava, Piyali Sengupta

Abstract:

The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations on marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. In this study, the marine drilling riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the marine drilling riser is determined by Morison's equation. By considering the equilibrium of forces on the marine drilling riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the finite difference method are employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results of the proposed analytical approach, the critical design parameters, namely peak displacements, upper and lower flex joint rotations, and von Mises stresses of marine drilling risers, are determined. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed, and platform drift on the critical design parameters of the marine drilling riser. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading. The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
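For reference, Morison's equation gives the hydrodynamic force per unit length on a slender cylinder as the sum of a drag and an inertia term; the sketch below evaluates it with hypothetical coefficients and flow values (none of the numbers are from the study).

```python
import math

def morison_force_per_length(u: float, du_dt: float, D: float,
                             rho: float = 1025.0, Cd: float = 1.0, Cm: float = 2.0) -> float:
    """Hydrodynamic force per unit length (N/m) on a cylinder from Morison's equation.

    f = 0.5 * rho * Cd * D * u * |u| + rho * Cm * (pi * D^2 / 4) * du/dt
    The drag and inertia coefficients here are illustrative assumptions.
    """
    drag = 0.5 * rho * Cd * D * u * abs(u)
    inertia = rho * Cm * (math.pi * D**2 / 4.0) * du_dt
    return drag + inertia

# Hypothetical example: 0.5 m diameter riser, 1.2 m/s current, 0.3 m/s^2 fluid acceleration
print(f"Force per unit length: {morison_force_per_length(1.2, 0.3, 0.5):.1f} N/m")
```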

Keywords: drilling riser, marine, analytical model, fragility

Procedia PDF Downloads 135
609 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India

Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha

Abstract:

Landslides are one of the major natural disasters in South Asian countries. By applying 2D Electrical Resistivity Tomographic Imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Pre-Cambrian time deformed the rocks up to 45 m depth. Landslides are dominant in the southern and eastern parts of the plateau, whose drainage basins are comparatively smaller than the northern ones; their low drainage density and coarse texture permit more infiltration of rainwater, whereas the northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D Electrical Resistivity Tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the Factor of Safety (FS), the infinite slope model of Brunsden and Prior is used. FS can be expressed as the ratio of resisting forces to disturbing forces. If FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes leading to slope failure are determined through a geophysical Factor of Safety, which depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wider areas; this approach to retrieving soil properties overcomes the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability through the ERT technique, without altering the soil structure, is non-invasive and low cost. In landslide-prone areas, an automated Electrical Resistivity Tomographic Imaging system with electrode networks should be installed permanently to monitor the hydraulic precursors of landslide movement.
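As a reference for the Factor of Safety calculation, the sketch below evaluates the standard infinite slope expression with hypothetical soil parameters; the actual values and the exact formulation used by Brunsden and Prior are not given in the abstract, so this is only an illustrative form.

```python
import math

def infinite_slope_fs(c_eff: float, phi_deg: float, gamma: float, z: float,
                      beta_deg: float, pore_pressure: float) -> float:
    """Factor of Safety for an infinite slope (illustrative standard form).

    FS = [c' + (gamma * z * cos^2(beta) - u) * tan(phi')] / [gamma * z * sin(beta) * cos(beta)]
    with c' in kPa, gamma in kN/m^3, z in m, and pore pressure u in kPa.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - pore_pressure) * math.tan(phi)
    disturbing = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / disturbing

# Hypothetical slope: c' = 5 kPa, phi' = 28 deg, gamma = 19 kN/m^3, depth 3 m, slope 30 deg
print(f"Dry slope FS: {infinite_slope_fs(5, 28, 19, 3, 30, 0):.2f}")
print(f"Wet slope FS: {infinite_slope_fs(5, 28, 19, 3, 30, 20):.2f}")  # higher pore pressure lowers FS
```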

Keywords: 2D ERT, landslide, safety factor, slope stability

Procedia PDF Downloads 302
608 Empirical Modeling and Optimization of Laser Welding of AISI 304 Stainless Steel

Authors: Nikhil Kumar, Asish Bandyopadhyay

Abstract:

Laser welding is a capable technology for fabricating automobile, microelectronics, marine and aerospace parts. In the present work, a mathematical and statistical approach is adopted to study the laser welding of AISI 304 stainless steel. A robotically controlled 500 W pulsed Nd:YAG laser source with a 1064 nm wavelength has been used for welding, and butt joints are made. The effects of the welding parameters, namely laser power, scanning speed and pulse width, on the seam width and depth of penetration have been investigated using empirical models developed by response surface methodology (RSM). Weld quality is directly correlated with weld geometry. Twenty sets of experiments have been conducted as per the central composite design (CCD) matrix. A second-order mathematical model has been developed for predicting the desired responses. The ANOVA results indicate that laser power has the most significant effect on the responses. Microstructural analysis as well as hardness testing of selected weld specimens has been carried out to understand the metallurgical and mechanical behaviour of the weld. The average micro-hardness of the weld is observed to be higher than that of the base metal; the higher hardness of the weld results from grain refinement and δ-ferrite formation in the weld structure. The results suggest that lower line energy generally produces a finer grain structure and better mechanical properties than higher line energy. The combined effects of the input parameters on the responses have been analyzed with the help of the developed 3-D response surfaces and contour plots. Finally, multi-objective optimization has been conducted to produce a weld joint with complete penetration, minimum seam width and an acceptable welding profile. Confirmatory tests have been conducted at the optimum parametric conditions to validate the applied optimization technique.
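
As a sketch of the modeling step, the snippet below fits a full second-order (quadratic) response surface to a coded design matrix using ordinary least squares. The design matrix and responses here are randomly generated placeholders; the actual model in the study is fitted to the twenty CCD experimental runs.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Placeholder data: 20 runs, 3 coded factors (laser power P, scan speed v, pulse width w)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))
y = 1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2] \
    + rng.normal(0, 0.05, 20)                        # synthetic "seam width" response

quad = PolynomialFeatures(degree=2, include_bias=True)   # linear, interaction, square terms
model = LinearRegression(fit_intercept=False).fit(quad.fit_transform(X), y)

coeffs = dict(zip(quad.get_feature_names_out(["P", "v", "w"]), model.coef_.round(3)))
print(coeffs)   # second-order polynomial coefficients of the response surface
```

In practice the fitted coefficients are screened with ANOVA, exactly as reported above, before the model is used for prediction and optimization.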

Keywords: ANOVA, laser welding, modeling and optimization, response surface methodology

Procedia PDF Downloads 289
607 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the Roche assay. This was not due to a lack of specificity of the Aptima assay, because the assay gave 99.83% specificity when testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was performed in both assays. The Aptima assay was more sensitive than the Roche assay. The good sensitivity, specificity and agreement with other commercial assays make the Aptima HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections.
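
The two headline statistics in this comparison, the regression of one assay's results on the other's and the specificity on known-negative specimens, can be reproduced with a few lines of Python. The paired values below are simulated placeholders (generated to match the reported slope and intercept), not the study's clinical data.

```python
import numpy as np

# Simulated paired log10 viral loads for 234 specimens (placeholder data only)
rng = np.random.default_rng(1)
roche = rng.uniform(1.5, 6.5, 234)
aptima = 1.04 * roche - 0.097 + rng.normal(0, 0.15, 234)

slope, intercept = np.polyfit(roche, aptima, 1)      # method-comparison regression
print(f"slope = {slope:.2f}, intercept = {intercept:.3f}")

# Specificity = true negatives / all known negatives (600 HIV-1 negative donors)
negatives, false_positives = 600, 1                  # one false positive assumed
print(f"specificity = {(negatives - false_positives) / negatives:.2%}")
```

With one false positive out of 600 negatives the specificity works out to 99.83%, matching the figure quoted above.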

Keywords: HIV viral load, Aptima, Roche, Panther system

Procedia PDF Downloads 360
606 Monitoring of Indoor Air Quality in Museums

Authors: Olympia Nisiforou

Abstract:

The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviour. Even when exhibits are housed in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentrations of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly between the various sampling locations. Seasonal fluctuations of ammonia were observed, with higher concentrations in summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable gas chromatography-mass spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, the ventilation system and outdoor air) of the above organic compounds, the concentrations of some of them were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. A high presence of particulate matter (PM), fungi and bacteria was found in the museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations of several pollutants on a bi-daily basis and presents several innovative features, including prompt alerting when the average concentration of a monitored pollutant exceeds the limit value defined by the user.
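
A minimal sketch of the alerting logic described for the low-cost monitor is given below: hourly readings are averaged over 12-hour ("bi-daily") windows and an alert is raised whenever a window average exceeds a user-defined limit. The window length, limit value and readings are all assumptions made for illustration.

```python
import numpy as np

def bidaily_alert(hourly_conc, limit, label="TVOC"):
    """Average hourly readings in 12-hour blocks and flag exceedances."""
    hourly_conc = np.asarray(hourly_conc, dtype=float)
    n = (len(hourly_conc) // 12) * 12            # use only complete 12-h windows
    windows = hourly_conc[:n].reshape(-1, 12).mean(axis=1)
    for i, avg in enumerate(windows):
        if avg > limit:
            print(f"ALERT: {label} 12-h average {avg:.1f} exceeds limit {limit:.1f} (window {i})")
    return windows

# Illustrative hourly concentrations with a visitor-driven peak
readings = [35] * 20 + [120] * 6 + [40] * 22
bidaily_alert(readings, limit=60.0)
```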

Keywords: exhibitions, indoor air quality, VOCs, pollution

Procedia PDF Downloads 115
605 Formula Student Car: Design, Analysis and Lap Time Simulation

Authors: Rachit Ahuja, Ayush Chugh

Abstract:

Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are strongly influenced by the various aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor in a high-performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is balancing the aerodynamic forces and streamlining the flow of air across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms, with constraints on vehicle dimensions, engine capacity, etc., so one of the fields with large scope for improvement is the aerodynamics of the vehicle. In this project work, an attempt has been made to design a Formula Student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations, and simultaneously calculate its lap time. Initially, a CAD model of the car is created in SOLIDWORKS as per the given dimensions, and a steady-state external air-flow simulation is performed on the baseline model, without any add-on devices, to evaluate and analyze the air-flow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of different add-on devices used in racing applications, such as the front wing, diffuser, shark fin and T-wing, is made, and geometric models of these devices are created. These add-on devices are assembled with the baseline model, and steady-state CFD simulations are run on the modified car to evaluate their aerodynamic effects. Later, lap time simulations of the Formula Student car with and without the add-on devices are compared with the help of MATLAB. Aerodynamic performance measures such as lift, drag and their coefficients are evaluated for different configurations and designs of the add-on devices at different vehicle speeds. The parametric CFD simulations of the car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best possible configuration of the add-on devices is obtained from these simulations, and their use shows an improvement in the performance of the car, which can be compared through the various lap time simulations.
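
To make the post-processing step concrete, the short sketch below converts CFD-reported forces into drag and lift coefficients by normalizing with the dynamic pressure and a reference frontal area. The force values, speed and frontal area are illustrative assumptions, not results from this study.

```python
def aero_coefficients(drag_force, lift_force, speed, frontal_area, rho=1.225):
    """Normalize CFD forces [N] into drag and lift coefficients."""
    q_ref = 0.5 * rho * speed**2 * frontal_area   # dynamic pressure x reference area
    return drag_force / q_ref, lift_force / q_ref

# Assumed forces at 20 m/s: baseline car versus car fitted with add-on devices
for name, f_drag, f_lift in [("baseline", 310.0, -40.0), ("with add-ons", 290.0, -350.0)]:
    cd, cl = aero_coefficients(f_drag, f_lift, speed=20.0, frontal_area=1.1)
    print(f"{name}: Cd = {cd:.2f}, Cl = {cl:.2f}")
```

The same coefficients feed the lap time simulation, where downforce raises the cornering speed limit and drag caps the straight-line acceleration.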

Keywords: aerodynamic performance, front wing, lap time simulation, t-wing

Procedia PDF Downloads 189
604 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature of electronic markets for second-hand products, where potential customers would like more information about a product in order to close the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial-intelligence-based systems that help customers ask more informative questions in an easier way. In this article, we introduce a method for the question/answer system that we have developed for the top-ranked electronic market in Iran, Divar. When it comes to second-hand products, incomplete product information at the time of purchase will result in a loss to the buyer. One way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. Reducing the time needed to start a conversation and reach its desired outcome was another of our main goals, and A/B test results show that it was achieved. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. The aim of such systems is to help users gather knowledge about a product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian; for each product category, we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. In order to deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors. To reach that speed, in many subtasks faster and simpler models are preferred over deep neural models. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyer and seller messages in conversations are chosen directly from our model's output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.
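
The production system described above combines intent detection and semi-supervised learning; the snippet below is only a toy retrieval-style stand-in that ranks a handful of canned question suggestions by TF-IDF similarity to what the buyer has typed. The candidate questions and the input message are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative candidate pool; the real system learns from ~2M Persian messages
candidates = [
    "Is the price negotiable?",
    "Is the item still available?",
    "Can you share more photos?",
    "What is the reason for selling?",
]

def suggest(partial_message, candidates, top_k=2):
    """Rank candidate question suggestions by similarity to the typed text."""
    vec = TfidfVectorizer().fit(candidates + [partial_message])
    sims = cosine_similarity(vec.transform([partial_message]),
                             vec.transform(candidates))[0]
    ranked = sorted(zip(candidates, sims), key=lambda pair: -pair[1])
    return [text for text, _ in ranked[:top_k]]

print(suggest("is it still available", candidates))
```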

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 179
603 Cu₂(ZnSn)(S)₄ Electrodeposition from a Single Bath for Photovoltaic Applications

Authors: Mahfouz Saeed

Abstract:

Cu₂(ZnSn)(S)₄ (CTZS) offers potential advantages over CuInGaSe₂ (CIGS) as a solar thin film due to its higher band gap. Preparing such photovoltaic materials by electrochemical techniques is particularly attractive because of the lower processing cost and high throughput of these techniques. Several recent publications report CTZS electroplating; however, the electrochemical process still faces serious challenges, such as achieving a sulfur atomic ratio of about 50% of the total alloy. In this work we introduce an improved electrolyte composition that enables the direct electrodeposition of CTZS from a single bath. The electrolyte is significantly more dilute than common baths described in the literature. The bath composition we introduce is: 0.0032 M CuSO₄, 0.0021 M ZnSO₄, 0.0303 M SnCl₂, 0.0038 M Na₂S₂O₃, and 0.3 mM Na₂S₂O₃. pHydrion buffer is used to hold the electrolyte at pH = 2, and 0.7 M LiCl is added as the supporting electrolyte. The electrochemical process was carried out at room temperature on a rotating disk electrode, which provides quantitative characterization of the flow. A comprehensive study of the electrochemical behavior at different electrode rotation rates is provided, and the effects of agitation on the atomic composition of the deposit and its adhesion to the molybdenum back contact are discussed. Post-treatment annealing was conducted under a sulfur atmosphere, with no need for metal additions from the gas phase during annealing. The potential that produced the desired atomic ratio of CTZS was -0.82 V/NHE. A smooth deposit with uniform composition across the sample surface and depth was obtained at a rotation speed of 500 rpm. The final sulfur atomic ratio was adjusted to 50.2% in order to achieve the desired atomic ratio. The final composition was investigated using energy-dispersive X-ray spectroscopy (EDS), and XRD was used to analyze the CTZS crystallography and thickness. Complete and functional CTZS PV devices were fabricated by depositing all the required layers in the correct order and with the desired optical properties. Acknowledgments: the authors thank Case Western Reserve University for the technical help and for the use of their instruments.
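
A small helper of the kind below can be used to turn EDS atomic counts into atomic-percent compositions and to check them against the ideal Cu₂ZnSnS₄ stoichiometry. The "measured" numbers other than the 50.2% sulfur figure quoted above are assumptions for illustration.

```python
def atomic_fractions(counts):
    """Normalize elemental counts to atomic-percent composition."""
    total = sum(counts.values())
    return {el: 100.0 * n / total for el, n in counts.items()}

ideal = atomic_fractions({"Cu": 2, "Zn": 1, "Sn": 1, "S": 4})                 # Cu2ZnSnS4
measured = atomic_fractions({"Cu": 24.1, "Zn": 12.4, "Sn": 13.3, "S": 50.2})  # assumed values

for name, comp in [("ideal", ideal), ("measured", measured)]:
    ratio = comp["Cu"] / (comp["Zn"] + comp["Sn"])                            # Cu/(Zn+Sn)
    print(name, {el: round(v, 1) for el, v in comp.items()}, "Cu/(Zn+Sn) =", round(ratio, 2))
```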

Keywords: photovoltaic, CTZS, thin film, electrochemical

Procedia PDF Downloads 231
602 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of the optical data, which allows for detailed sensitivity studies and thus provides comparably high quality in the derived data products. The algorithm allows us to derive the particle effective radius and the volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors are most often hugely amplified during the solution process unless an appropriate regularization method is used. Even with a regularization method the task remains difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
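
The core idea of truncated singular value decomposition can be shown in a few lines of numpy: the noisy right-hand side is projected onto the leading singular vectors only, so that the small singular values that would otherwise amplify the measurement error are discarded. The kernel, noise level and truncation index below are toy values for illustration; they are not the lidar kernel or the hybrid triple-parameter scheme of the actual software.

```python
import numpy as np

def tsvd_solve(K, g, k):
    """Truncated-SVD solution of the ill-posed system K f = g,
    keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeffs = (U.T @ g)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Toy smoothing kernel and a synthetic "size distribution" to retrieve
n = 60
x = np.linspace(0.0, 1.0, n)
K = np.exp(-80.0 * (x[:, None] - x[None, :])**2)                   # severely ill-conditioned
f_true = np.exp(-((x - 0.5) / 0.1)**2)
g = K @ f_true + 0.01 * np.random.default_rng(2).normal(size=n)    # 1% noise

f_pinv = np.linalg.pinv(K) @ g     # unregularized pseudo-inverse amplifies the noise
f_tsvd = tsvd_solve(K, g, k=12)    # the truncation index k acts as the regularization parameter
print(np.linalg.norm(f_pinv - f_true), np.linalg.norm(f_tsvd - f_true))
```

Choosing the truncation index (or, in the iterative Padé variant, the number of iterations) is exactly the parameter-selection problem discussed above.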

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 334
601 The Desire for Significance & Memorability in Popular Culture: A Cognitive Psychological Study of Contemporary Literature, Art, and Media

Authors: Israel B. Bitton

Abstract:

“Memory” is associated with various phenomena, from physical to mental, personal to collective, and historical to cultural. As part of a broader exploration of memory studies in philosophy and science (slated for academic publication in October 2021), this study employs analytical methods of cognitive psychology and philosophy of memory to theorize that A) the primary human will (drive) is a will to significance, in that every human action and expression can be rooted in a most primal desire to be cosmically significant (however that is individually perceived); and B) the will to significance manifests as a will to memorability, an innate desire to be remembered by others after death. In support of these broad claims, a review of various popular culture “touchpoints” (historic and contemporary records spanning literature, film and television, traditional news media, and social media) is presented to demonstrate how this theory is repeatedly and commonly expressed, and has been for a long time, by many popular public figures as well as everyday people. Though the theory was developed before COVID, the crisis only increased its relevance: so many people were forced to die alone, leaving them and their loved ones to face even greater existential angst than what ordinarily accompanies death, since the usual expectations for one’s “final moments” were shattered. To underscore this issue of, and response to, what can be considered a sociocultural “memory gap,” this study concludes with a summary of several projects launched by journalists at the height of the pandemic to document the memorable human stories behind COVID’s tragic, warp-speed death toll. Analyzed through the lens of Viktor E. Frankl’s psychoanalytical perspective on “existential meaning,” these accounts show how countless individuals were robbed of the last wills and testaments to their self-significance and memorability typically afforded to the dying and the aggrieved. The resulting insight ought to inform how government and public health officials determine what is truly “non-essential” to human health, physical and mental, at times of crisis.

Keywords: cognitive psychology, covid, neuroscience, philosophy of memory

Procedia PDF Downloads 179
600 Designing of Induction Motor Efficiency Monitoring System

Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei

Abstract:

Energy is one of the most important and high-priority issues in the world. Energy demand is rapidly increasing with the growing population and industry, and the usable energy sources in the world will be insufficient to meet this need. Therefore, the efficient and economical use of energy sources is becoming more important. According to a survey of electricity-consuming machines, electrical machines consume about 40% of the total electrical energy used by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry and have very large application areas in industry and urban systems, such as water pumping and distribution systems and the steel and paper industries. Monitoring and control of the motors have an important effect on the operating performance of the motor, driver selection, and the replacement strategy management of electrical machines. A sensorless system for monitoring and calculating the efficiency of induction motors is studied in this work. The IEEE equivalent circuit is used in the design. The terminal current and voltage of the induction motor are used to measure its efficiency. The motor nameplate information and the measured current and voltage are used to calculate the losses of the induction motor accurately and, from them, its input and output power. The efficiency of the induction motor is monitored online in the proposed method without disconnecting the motor from the driver and without adding any additional connection at the motor terminal box. The proposed monitoring system measures the efficiency accurately, including all losses, without using a torque meter or speed sensor. The monitoring system uses an embedded architecture and does not need to be connected to a computer to measure and log data. Conclusions regarding the efficiency, accuracy, and technical and economic benefits of the proposed method are presented. Experimental verification has been obtained on a three-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring and motor replacement strategy.
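
The loss-segregation idea behind the efficiency calculation can be sketched in a few lines: input power is obtained from the measured terminal voltage, current and power factor, the individual losses estimated from the equivalent circuit and nameplate data are subtracted, and the ratio gives the efficiency. The loss figures below are assumed illustrative values for a motor of roughly 1.1 kW, not measurements from the study.

```python
import math

def efficiency_from_losses(v_ll, i_line, pf,
                           p_stator_cu, p_rotor_cu, p_core, p_fw, p_stray):
    """Efficiency = (input power - segregated losses) / input power."""
    p_in = math.sqrt(3) * v_ll * i_line * pf          # three-phase input power [W]
    p_losses = p_stator_cu + p_rotor_cu + p_core + p_fw + p_stray
    return (p_in - p_losses) / p_in

# Assumed terminal measurements and loss breakdown (illustrative only)
eta = efficiency_from_losses(v_ll=400.0, i_line=2.3, pf=0.82,
                             p_stator_cu=85.0, p_rotor_cu=55.0,
                             p_core=40.0, p_fw=15.0, p_stray=13.0)
print(f"estimated efficiency: {eta:.1%}")
```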

Keywords: induction motor, efficiency, power losses, monitoring, embedded design

Procedia PDF Downloads 338