Search results for: reference values
2532 Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
Authors: Jaturong Som-ard
Abstract:
The heavy rainfall from 3rd to 22nd January 2017 swamped much of Ranot district in southern Thailand. The resulting flood caused substantial economic and social losses across the district. The major objective of this study is to detect the flooding extent using Sentinel-1A data and to identify the number of damaged buildings within it. The data were collected in two stages: pre-flood and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding based on intensity values were performed on the data to classify thematic maps. The maps were used to delineate the flooding extent through change detection, together with building footprints digitized and collected in the JOSM desktop editor. The numbers of damaged buildings were counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq.km, occurring mostly in the Ban Khao, Ranot, Takhria, and Phang Yang sub-districts, respectively. The Ban Khao sub-district was affected more than the others because it lies at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were highest in the Khlong Daen (726 features), Tha Bon (645 features), and Ranot (604 features) sub-districts, respectively. The final flood extent map should be very useful for the planning, prevention and management of flood-prone areas, and the building damage map can support quick response, recovery and mitigation in the affected areas by the organizations concerned. Keywords: flooding extent, Sentinel-1A data, JOSM desktop, damaged buildings
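The change-detection step described above can be illustrated with a minimal sketch: threshold the pre-flood and during-flood backscatter images and flag pixels that switch to low backscatter (open water), then count digitized buildings falling inside the mask. The threshold value, pixel size, and the coordinate-transform callable below are illustrative assumptions, not values from the study.

```python
import numpy as np

def flood_extent(pre_db, during_db, water_threshold_db=-15.0, pixel_area_km2=1e-4):
    """Boolean flood mask and flooded area from two calibrated Sentinel-1
    intensity images (dB). Threshold and 10 m pixel size are assumed."""
    water_pre = pre_db < water_threshold_db        # permanent water before the event
    water_during = during_db < water_threshold_db  # water during the event
    flooded = water_during & ~water_pre            # newly inundated pixels only
    return flooded, flooded.sum() * pixel_area_km2

def count_damaged_buildings(building_xy, flooded, to_rowcol):
    """Count digitized building points inside the flood mask.
    `to_rowcol` maps (x, y) map coordinates to (row, col) raster indices."""
    count = 0
    for x, y in building_xy:
        row, col = to_rowcol(x, y)
        if 0 <= row < flooded.shape[0] and 0 <= col < flooded.shape[1] and flooded[row, col]:
            count += 1
    return count
```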
Procedia PDF Downloads 192
2531 Effect of Architecture and Operating Conditions of Vehicle on Bulb Lifetime in Automotive
Authors: Hatice Özbek, Caner Çil, Ahmet Rodoplu
Abstract:
Automotive lighting is a leading function in the configuration of vehicle architecture. Headlights and taillights, among the external lighting functions, are structures that determine the stylistic character of the vehicle. At the same time, the fact that lighting functions are related to many other functions brings design difficulties. Customers expect maximum quality from the vehicle. In these circumstances, it is necessary to produce designs that keep the performance of bulbs, which have limited working lives, at the highest level. In this study, the factors that influence the working lives of filament lamps were examined, and bulb explosions that could otherwise occur sooner than anticipated were prevented while the vehicle was still in the design phase by determining the relations with electrical, dynamic and static variables. The filaments of the bulbs used in the front lighting of the vehicle, in particular, deform in a shorter time due to the high voltage requirement. In addition, rear lighting lamps vibrate as the tailgate opens and closes, exposing the filaments to high stress. In this study, the findings that cause bulb explosions were evaluated. Among the most important findings are: 1. the structure of the cables routed to the lighting functions of the vehicle and the effect of the voltage values; 2. the effect of vibration on the bulb throughout the life of the vehicle; 3. the effect of the loads transferred to the bulb while the vehicle doors are opened and closed. At the end of the study, maximum bulb lifetime was established through optimum changes made in the vehicle architecture based on the findings obtained. Keywords: vehicle architecture, automotive lighting functions, filament lamps, bulb lifetime
Procedia PDF Downloads 153
2530 Evaluation of the Benefit of Anti-Endomysial IgA and Anti-Tissue Transglutaminase IgA Antibodies for the Diagnosis of Coeliac Disease in a University Hospital, 2010-2016
Authors: Recep Keşli, Onur Türkyılmaz, Hayriye Tokay, Kasım Demir
Abstract:
Objective: Coeliac disease (CD) is a primary small intestine disorder caused by hypersensitivity to gluten, which is present in cereal crops, and is characterized by inflammation of the small intestine mucosa. The goal of this study was to determine and to compare the sensitivity and specificity values of anti-endomysial IgA (EMA IgA) (IFA) and anti-tissue transglutaminase IgA (anti-tTG IgA) (ELISA) antibodies in the diagnosis of patients suspected of having CD. Methods: One thousand two hundred and seventy-three patients who presented to the gastroenterology and pediatric disease polyclinics of Afyon Kocatepe University ANS Research and Practice Hospital were included in the study between 23.09.2010 and 30.05.2016. Sera samples were investigated for EMA positivity by the immunofluorescence method (Euroimmun, Luebeck, Germany). Quantitative anti-tTG IgA values were determined by EIA (Orgentec, Mainz, Germany) on a fully automated ELISA device (Alisei, Seac, Firenze, Italy). Results: Out of 1273 patients, 160 were diagnosed with coeliac disease according to the ESPGHAN 2012 diagnosis criteria. Of the 160 CD patients, 120 were female and 40 were male. The EMA specificity and sensitivity were calculated as 98% and 80%, respectively. The specificity and sensitivity of anti-tTG IgA were determined as 99% and 96%, respectively. Conclusion: The specificity of EMA for CD was excellent because all EMA-positive patients (n = 144) were diagnosed with CD. The presence of human anti-tTG IgA was found to be a reliable marker for the diagnosis and follow-up of CD. Diagnosis of CD should be established on both the clinical and serologic profiles together. Keywords: anti-endomysial antibody, anti-tTG IgA, coeliac disease, immunofluorescence assay (IFA)
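As a reminder of how the reported figures are derived, the sketch below computes sensitivity and specificity from paired serology results and reference (ESPGHAN) diagnoses; the example values are illustrative, not the study data.

```python
def sensitivity_specificity(test_positive, disease_present):
    """Sensitivity and specificity from two parallel boolean lists:
    the serologic test result and the reference diagnosis."""
    tp = sum(t and d for t, d in zip(test_positive, disease_present))
    fn = sum((not t) and d for t, d in zip(test_positive, disease_present))
    tn = sum((not t) and (not d) for t, d in zip(test_positive, disease_present))
    fp = sum(t and (not d) for t, d in zip(test_positive, disease_present))
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    return sensitivity, specificity

# Illustrative usage with made-up values (not the study data)
sens, spec = sensitivity_specificity([True, True, False, False], [True, True, True, False])
```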
Procedia PDF Downloads 254
2529 Factors Associated with Weight Loss Maintenance after an Intervention Program
Authors: Filipa Cortez, Vanessa Pereira
Abstract:
Introduction: The main challenge of obesity treatment is long-term weight loss maintenance. The 3 phases method is a weight loss program that combines a low carb and moderately high-protein diet, food supplements and a weekly one-to-one consultation with a certified nutritionist. Sustained weight control is the ultimate goal of phase 3. Success criterion was the minimum loss of 10% of initial weight and its maintenance after 12 months. Objective: The aim of this study was to identify factors associated with successful weight loss maintenance after 12 months at the end of 3 phases method. Methods: The study included 199 subjects that achieved their weight loss goal (phase 3). Weight and body mass index (BMI) were obtained at the baseline and every week until the end of the program. Therapeutic adherence was measured weekly on a Likert scale from 1 to 5. Subjects were considered in compliance with nutritional recommendation and supplementation when their classification was ≥ 4. After 12 months of the method, the current weight and number of previous weight-loss attempts were collected by telephone interview. The statistical significance was assumed at p-values < 0.05. Statistical analyses were performed using SPSS TM software v.21. Results: 65.3% of subjects met the success criterion. The factors which displayed a significant weight loss maintenance prediction were: greater initial percentage weight loss (OR=1.44) during the weight loss intervention and a higher number of consultations in phase 3 (OR=1.10). Conclusion: These findings suggest that the percentage weight loss during the weight loss intervention and the number of consultations in phase 3 may facilitate maintenance of weight loss after the 3 phases method.Keywords: obesity, weight maintenance, low-carbohydrate diet, dietary supplements
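Odds ratios of the kind reported above are commonly read off a fitted logistic regression as the exponentiated coefficients; a minimal sketch is shown below, with column names assumed for illustration (the abstract does not state the exact model used).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def weight_maintenance_odds_ratios(df):
    """Logistic regression of 12-month success (>=10% loss maintained) on the two
    predictors named in the abstract; returns odds ratios with 95% CIs.
    Column names are illustrative assumptions."""
    X = sm.add_constant(df[["initial_pct_weight_loss", "n_phase3_consultations"]])
    model = sm.Logit(df["maintained_at_12_months"], X).fit(disp=0)
    odds_ratios = np.exp(model.params)          # OR = exp(beta)
    ci = np.exp(model.conf_int())                # 95% CI on the OR scale
    return pd.concat([odds_ratios.rename("OR"),
                      ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
```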
Procedia PDF Downloads 150
2528 Effect of Varying Scaffold Architecture and Porosity of Calcium Alkali Orthophosphate Based-Scaffolds for Bone Tissue Engineering
Authors: D. Adel, F. Giacomini, R. Gildenhaar, G. Berger, C. Gomes, U. Linow, M. Hardt, B. Peleskae, J. Günster, A. Houshmand, M. Stiller, A. Rack, K. Ghaffar, A. Gamal, M. El Mofty, C. Knabe
Abstract:
The goal of this study was to develop 3D scaffolds from a silica-containing calcium alkali orthophosphate utilizing two different fabrication processes: first, a replica technique, namely the Schwartzwalder-Somers method (SSM), and second, 3D printing, i.e. rapid prototyping (RP). First, the mechanical and physical properties of the scaffolds (porosity, compressive strength, and solubility) were assessed and, second, their potential to facilitate homogenous colonization with osteogenic cells and extracellular bone matrix formation throughout the porous scaffold architecture. To this end, murine and rat calvaria osteoblastic cells were dynamically seeded on both scaffold types under perfusion at a concentration of 3 million cells. The amount of cells and extracellular matrix as well as osteogenic marker expression were evaluated using hard tissue histology, immunohistochemistry, and histomorphometric analysis. Total porosities of the scaffolds were 86.9% and 50% for SSM and RP, respectively, and compressive strength values were 0.46 ± 0.2 MPa for SSM and 6.6 ± 0.8 MPa for RP. Regarding cellular behavior, RP scaffolds displayed a higher cell and matrix percentage of 24.45%. Immunoscoring yielded strong osteocalcin expression of cells and matrix in RP scaffolds and a moderate expression in SSM scaffolds. 3D printed RP scaffolds displayed superior mechanical and biological properties compared to SSM scaffolds and represent excellent candidates for bone tissue engineering. Keywords: calcium alkali orthophosphate, extracellular matrix mineralization, osteoblast differentiation, rapid prototyping, scaffold
Procedia PDF Downloads 329
2527 A Study on Architectural Characteristics of Traditional Iranian Ordinary Houses in Mashhad, Iran
Authors: Rana Daneshvar Salehi
Abstract:
In many Iranian cities, including Mashhad, the capital of Razavi Khorasan Province, ordinary examples of small-scale domestic architecture are not considered heritage, even though the principles of house formation are respected in all traditional Iranian houses, from modest to grand ones. During the past decade, Mashhad has lost its identity and has become a modern city. Its designation as the Capital of Islamic Culture in 2017 by ISESCO, and the consequent push for new development and transfiguration, led to the demolition of a large number of modest traditional dwellings. For this reason, the present paper aims to introduce three undiscovered houses with historical and monumental values located in the oldest neighborhoods of Mashhad, which have been neglected in the cultural heritage field. The preliminary phase of this approach is a measured survey to identify the significant characteristics of the selected dwellings and understand the challenges, focusing on building form, orientation, room function, space proportion and details of ornamental elements. A comparison between the case studies and the wealthy domestic buildings shows that a house belonging to inhabitants with an average income can display the same accurate, regular, harmonic and proportionate design found in the great mansions. It reveals that an ordinary traditional house can be regarded as a valuable construction, not only for its historical characteristics but also for its aesthetic and architectural features, which could help avoid further destruction in the future. Keywords: traditional ordinary house, architectural characteristic, proportion, heritage
Procedia PDF Downloads 146
2526 In silico Designing and Insight into Antimalarial Potential of Chalcone-Quinolinylpyrazole Hybrids by Preclinical Study in Mice
Authors: Deepika Saini, Sandeep Jain, Ajay Kumar
Abstract:
The quinoline scaffold is one of the most widely studied in the discovery of derivatives bearing various heterocyclic moieties due to its potential antimalarial activity. In the present study, a chalcone series of quinoline derivatives clubbed with pyrazole was synthesized to evaluate their antimalarial properties by an in vitro schizont maturation inhibition assay against both the chloroquine-sensitive (3D7) and chloroquine-resistant (RKL9) strains of Plasmodium falciparum. Further, the top five compounds were taken forward to an in vivo preclinical study of antimalarial potential against P. berghei in Swiss albino mice. To understand the mechanism of the synthesized analogues, they were screened computationally by molecular docking techniques. Compounds were docked into the active site of a protein receptor, Plasmodium falciparum cysteine protease falcipain-2. The compounds were successfully synthesized, and structural confirmation was performed by FTIR, 1H-NMR, mass spectrometry and elemental analysis. The in vitro study suggested that compounds 5b, 5g, 5l, 5s and 5u possessed the best antimalarial activity, and these were further tested by in vivo screening. Compound 5u (CH₃ on both rings) showed EC₅₀ values of 0.313 and 0.801 µg/ml against the CQ-S and CQ-R strains of P. falciparum, respectively, and 78.01% suppression of parasitemia. The molecular docking studies of the compounds helped in understanding the mechanism of action against falcipain-2. The present study reveals the binding signatures of the synthesized ligands within the active site of the protein and explains the results from the in vitro study in terms of their EC₅₀ values and percentage parasitemia. Keywords: antimalarial activity, chalcone, docking, quinoline
Procedia PDF Downloads 409
2525 Miracle Fruit Application in Sour Beverages: Effect of Different Concentrations on the Temporal Sensory Profile and Overall Liking
Authors: Jéssica F. Rodrigues, Amanda C. Andrade, Sabrina C. Bastos, Sandra B. Coelho, Ana Carla M. Pinheiro
Abstract:
Currently, there is a great demand for natural sweeteners due to the harmful effects of high sugar and artificial sweetener consumption on health. Miracle fruit, which is known for its unique ability to modify sour taste into sweet taste, has been shown to be a good alternative sweetener. However, it has a high production cost, making it important to optimize the lowest effective content. Thus, the aim of this study was to assess the effect of different miracle fruit contents on the temporal sensory profile (Time-Intensity, TI, and Temporal Dominance of Sensations, TDS) and overall liking of lemonade, in order to determine the best content to be used as a natural sweetener in sour beverages. TI and TDS results showed that concentrations of 150 mg, 300 mg and 600 mg of miracle fruit were effective in reducing the acidity and promoting the perception of sweetness in lemonade. Furthermore, the concentrations of 300 mg and 600 mg produced similar profiles. Through the acceptance test, the concentration of 300 mg miracle fruit was shown to be an efficient substitute for sucrose and sucralose in lemonade, since they had similar hedonic values between 'I liked it slightly' and 'I liked it moderately'. Therefore, 300 mg of miracle fruit is an adequate content to be used as a natural sweetener of lemonade. The results of this work will help the food industry in the efficient application of a new natural sweetener, the miracle fruit extract, in sour beverages, reducing costs and providing a product that meets consumer desires. Keywords: acceptance, natural sweetener, temporal dominance of sensations, time-intensity
Procedia PDF Downloads 249
2524 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key elements of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock and have interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Today, most commercial tools provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not pay close attention to differentiating thermal zones or to modifying and adapting the predefined profiles, and the design results are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in interpreting calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and the occupants therefore cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is shown. This parametric study underlines the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes. Keywords: energy simulation, modelling calibration, occupant behavior, university building
Procedia PDF Downloads 141
2523 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Besides, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while the stochastic component is investigated through econometric tools. The calibration of the model's parameters is performed using data from the Italian market over a 6-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values. Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression
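As a rough illustration of the deterministic-plus-stochastic decomposition described above, the sketch below fits a trend plus Fourier terms for the daily and weekly periodicities by least squares and then simulates the residual component. The number of harmonics and the AR(1) residual model are simplifying assumptions, not the authors' exact ARMA-GARCH specification.

```python
import numpy as np

def fourier_design(t_hours, periods=(24.0, 168.0), n_harmonics=2):
    """Design matrix: constant, linear trend, and sine/cosine pairs per period."""
    cols = [np.ones_like(t_hours), t_hours]
    for period in periods:                      # daily (24 h) and weekly (168 h) cycles
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t_hours / period))
            cols.append(np.cos(2 * np.pi * k * t_hours / period))
    return np.column_stack(cols)

def fit_and_simulate(load, n_sim_hours):
    """Fit the deterministic component by least squares and simulate future load
    with an AR(1) residual (a stand-in for the paper's ARMA-GARCH component)."""
    t = np.arange(len(load), dtype=float)
    X = fourier_design(t)
    beta, *_ = np.linalg.lstsq(X, load, rcond=None)
    resid = load - X @ beta
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]       # AR(1) coefficient
    sigma = np.std(resid[1:] - phi * resid[:-1])          # innovation std
    t_new = np.arange(len(load), len(load) + n_sim_hours, dtype=float)
    sim = fourier_design(t_new) @ beta
    eps = resid[-1]
    for i in range(n_sim_hours):                           # one Monte Carlo path
        eps = phi * eps + np.random.normal(0.0, sigma)
        sim[i] += eps
    return sim
```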
Procedia PDF Downloads 395
2522 Patterns, Triggers, and Predictors of Relapses among Children with Steroid Sensitive Idiopathic Nephrotic Syndrome at the University of Abuja Teaching Hospital, Gwagwalada, Abuja, Nigeria
Authors: Emmanuel Ademola Anigilaje, Ibraheem Ishola
Abstract:
Background: Childhood steroid-sensitive idiopathic nephrotic syndrome (SSINS) is plagued with relapses that contribute to its morbidity and the cost of treatment. Materials and Methods: This is a retrospective review of relapses among children with SSINS at the University of Abuja Teaching Hospital from January 2016 to July 2020. Triggers related to relapse incidents were noted. Chi-square test was deployed for predictors (factors at the first clinical presentations that associate with subsequent relapses) of relapses. Predictors with p-values of less than 0.05 were considered significant and 95% confidence intervals (CI) and odd ratio (OR) were described. Results: Sixty SSINS comprising 52 males (86.7%), aged 23 months to 18 years, with a mean age of 7.04±4.16 years were studied. Thirty-eight (63.3%) subjects had 126 relapses including infrequent relapses in 30 (78.9%) and frequent relapses in 8 (21.1%). The commonest triggers were acute upper respiratory tract infections (68, 53.9%) and urinary tract infections (UTIs) in 25 (19.8%) relapses. In 4 (3.2%) relapses, no trigger was identified. The time-to-first relapse ranged 14 days to 365 days with a median time of 60 days. The significant predictors were hypertension (OR=3.4, 95% CI; 1.04-11.09, p=0.038), UTIs (OR=9.9, 95% CI; 1.16-80.71, p= 0.014), malaria fever (OR=8.0, 95% CI; 2.45-26.38, p˂0.001), micro-haematuria (OR=4.9, 95% CI; 11.58-15.16, p=0.004), elevated serum creatinine (OR=12.3, 95%CI; 1.48-101.20, p=0.005) and hypercholesterolaemia (OR=4.1, 95%CI; 1.35-12.63, p=0.011). Conclusion: While the pathogenesis of relapses remains unknown, it is prudent to consider relapse-specific preventive strategies against triggers and predictors of relapses in our setting.Keywords: Patterns, triggers, predictors, steroid-sensitive idiopathic nephrotic syndrome, relapses, Nigeria
Procedia PDF Downloads 158
2521 Performance of Bored Pile on Alluvial Deposit
Authors: K. Raja Rajan, D. Nagarajan
Abstract:
Bored cast in-situ piles are a popular choice among consultants and contractors due to the ability to adjust the pile length if any variation is found in the actual geological strata. Bangladesh's geological strata are dominated by silt. Design is normally based on field tests such as Standard Penetration Test N-values. Initially, pile capacity was estimated through a static formula using the correlation between N-value and angle of internal friction. An initial pile load test was conducted in order to validate the geotechnical parameters assumed in design. The initial pile load test was conducted on a 1.5 m diameter bored cast in-situ pile, using the kentledge method to load the pile to 2.5 times its working load. Initially, the safe working load of the pile was estimated as 570 T, so the test load was fixed at 1425 T. The maximum load applied was 777 T, at which the settlement reached around 155 mm, which is more than 10% of the pile diameter. The pile load test results were not satisfactory and compelled an increase of the pile length by approximately 20% of its total length. Due to the unpredictable geotechnical parameters, the length of each pile was increased, which had a major impact on the project cost as well as on the project schedule. Extra boreholes were planned, along with laboratory test results, in order to redefine the assumed geotechnical parameters. This article presents the detailed geotechnical design assumptions made in the design stage and the results of the pile load test that led to redefining the assumed geotechnical properties. Keywords: end bearing, pile load test, settlement, shaft friction
Procedia PDF Downloads 265
2520 Polyphytopharmaca Improving Asthma Control Test Value, Biomarker (Eosinophils and Malondialdehyde): Quasi Experimental Test in Patients with Asthma
Authors: Andri Andri, Susanthy Djajalaksana, Iin Noor Chozin
Abstract:
Background: Despite advances in asthma therapies, a proportion of patients with asthma continue to have difficulty gaining adequate asthma control. Complex immunological mechanisms and oxidative stress affect this condition, including the role of malondialdehyde (MDA) as a marker of inflammation. This research aimed to determine the effect of polyphytopharmaca administration on the asthma control test (ACT) value, blood eosinophil levels and the serum inflammation marker MDA in patients with asthma. Method: A quasi-experimental study was conducted on 15 stable, not fully controlled asthma patients at the outpatient pulmonary clinic of the Public Hospital of Dr. Saiful Anwar Malang. Assessments of ACT values, eosinophil levels, and serum MDA levels were carried out before and after administration of polyphytopharmaca, which contained a combination of Nigella sativa extract 100 mg, Kleinhovia hospita 100 mg, Curcuma xanthorrhiza 75 mg, and Ophiocephalus striatus 100 mg, given as two capsules three times daily for 12 weeks. The ACT value was determined by the researcher by asking the patient directly, blood eosinophil levels were calculated from blood cell type counts, and serum MDA levels were detected by the qPCR method. Result: There was a significant improvement in the ACT value (18.07 ± 2.57 to 22.06 ± 1.83, p = 0.001) (from 60% uncontrolled to 93.3% controlled), a significant decrease in blood eosinophil levels (653.15 ± 276.15 pg/mL to 460.66 ± 202.04 pg/mL, p = 0.038), and a decrease in serum MDA levels (109.64 ± 53.77 ng/ml to 78.68 ± 64.92 ng/ml, p = 0.156). Conclusion: Administration of polyphytopharmaca can increase the ACT value, decrease blood eosinophil levels and reduce serum MDA in stable asthma patients who are not fully controlled. Keywords: asthma control test, eosinophils levels, malondialdehyde, polyphytopharmaca
Procedia PDF Downloads 120
2519 Phytochemical Evaluation and In-Vitro Antibacterial Activity of Ethanolic Extracts of Moroccan Lavandula x Intermedia Leaves and Flowers
Authors: Jamila Fliou, Federica Spinola, Ouassima Riffi, Asmaa Zriouel, Ali Amechrouq, Luca Nalbone, Alessandro Giuffrida, Filippo Giarratana
Abstract:
This study performed a preliminary evaluation of the phytochemical composition and in vitro antibacterial activity of ethanolic extracts of Lavandula x intermedia leaves and flowers collected in the Fez-Meknes region of Morocco. Phytochemical analyses comprised qualitative colourimetric determinations of alkaloids, anthraquinones, and terpenes and quantitative analysis of total polyphenols, flavonoids, and condensed tannins by UV spectrophotometer. Antibacterial activity was evaluated by determining minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) values against different ATCC bacterial strains. The phytochemical analysis showed a high amount of total polyphenols, flavonoids, and tannins in the leaf extract and a higher amount of terpenes based on colourimetric reaction than the flower extract. A positive colourimetric reaction for alkaloids and anthraquinones was detected for both extracts. The antibacterial activity of leaves and flower extract was not different against Gram-positive and Gram-negative strains (p<0.05). The results of the present study suggest the possible use of ethanolic extracts of L. x intermedia collected in the Fez-Meknes region of Morocco as a natural agent against bacterial pathogens.Keywords: antimicrobial activity, Lavandula spp., lavender, lavandin, UV spectrophotometric analysis
Procedia PDF Downloads 68
2518 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone
Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay
Abstract:
Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate, with a variety of particle size ranges, and approximately 10 wt.% unsaturated polyester resin (UPR). In this study, the objective is to investigate the influence of particle size distribution on the mechanical strength and physical properties of engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63-200 µm and 100-300 µm, was used individually and in mixtures with different mixing ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder finer than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared using vibration-compression under vacuum. The study reports that both the impact strength and the flexural strength of the samples increased as the mixing ratio of the 63-200 µm particle size range increased. On the other hand, the values of water absorption rate, apparent density and abrasion resistance were not affected by the particle size distribution owing to vacuum compaction. It was found that increasing the mixing ratio of the 63-200 µm particle size range caused higher porosity, which led to an increase in the amount of binder paste needed. It was also observed that homogeneity in the slabs improved with the 63-200 µm particle size range. Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties
Procedia PDF Downloads 147
2517 A Study on the Effect of Mg and Ag Additions and Age Hardening Treatment on the Properties of As-Cast Al-Cu-Mg-Ag Alloys
Authors: Ahmed. S. Alasmari, M. S. Soliman, Magdy M. El-Rayes
Abstract:
This study focuses on the effect of the addition of magnesium (Mg) and silver (Ag) on the mechanical properties of aluminum-based alloys. The alloying elements were added at different levels using a 2² factorial design of experiments, with Mg and Ag as the two factors, each at two concentration levels. The superior mechanical properties of the produced Al-Cu-Mg-Ag alloys after aging result from a unique type of precipitate known as the Ω-phase. The formed precipitate enhances the tensile strength and thermal stability. This paper further investigates the microstructure and mechanical properties of as-cast Al–Cu–Mg–Ag alloys after a complete homogenization treatment at 520 °C for 8 hours followed by isothermal age hardening at 190 °C for different periods of time. The homogenization at 520 °C for 8 hours was selected based on a homogenization study at various temperatures and times. The alloys' microstructures were studied using optical microscopy (OM). In addition, the fracture surfaces were investigated using a scanning electron microscope (SEM). Study of the microstructure of the aged Al-Cu-Mg-Ag alloys reveals that the grains are equiaxed, with an average grain size of about 50 µm. A detailed fractography study of the fractured surfaces of the aged alloys showed a mixed fracture, whereby the random fracture regions suggested crack propagation along the grain boundaries while the dimples indicated that the fracture was ductile. The present results show that alloy 5 has the highest hardness values and the best mechanical behavior. Keywords: precipitation hardening, aluminum alloys, aging, design of experiments, analysis of variance, heat treatments
Procedia PDF Downloads 157
2516 The Imminent Other in Anna Deavere Smith’s Performance
Authors: Joy Shihyi Huang
Abstract:
This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson coined as a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s The Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which the African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notion of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary of self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizes the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance
Procedia PDF Downloads 155
2515 Consequential Effects of Coal Utilization on Urban Water Supply Sources – a Study of Ajali River in Enugu State Nigeria
Authors: Enebe Christian Chukwudi
Abstract:
Water bodies around the world, notably underground water, rivers, streams, and seas, face degradation of their water quality as a result of activities associated with coal utilization, including coal mining, coal processing, coal burning, waste storage and thermal pollution from coal plants, which tend to contaminate these water bodies. This contamination results from heavy metals, the presence of sulphate and iron, dissolved solids, mercury and other toxins contained in coal ash, sludge, and coal waste. These wastes sometimes find their way into sources of urban water supply and contaminate them. A major problem encountered in the supply of potable water to the Enugu municipality is the contamination of the Ajali River, the source of water supply to the municipality, by coal waste. Hydrogeochemical analysis of Ajali water samples indicates high sulphate and iron content, high total dissolved solids (TDS), low pH (acidic values) and significant hardness, in addition to the presence of heavy metals, mercury, and other toxins. This points to the following remedial measures: I. proper disposal of mine wastes at designated, suitably prepared disposal sites; II. proper water treatment; and III. reduction of coal-related contaminants by taking advantage of clean coal technology. Keywords: effects, coal, utilization, water quality, sources, waste, contamination, treatment
Procedia PDF Downloads 424
2514 Electroactive Fluorene-Based Polymer Films Obtained by Electropolymerization
Authors: Mariana-Dana Damaceanu
Abstract:
Electrochemical oxidation is one of the most convenient ways to obtain conjugated polymer films as polypyrrole, polyaniline, polythiophene or polycarbazole. The research in the field has been mainly directed to the study of electrical conduction properties of the materials obtained by electropolymerization, often the main reason being their use as electroconducting electrodes, and very little attention has been paid to the morphological and optical quality of the films electrodeposited on flat surfaces. Electropolymerization of the monomer solution was scarcely used in the past to manufacture polymer-based light-emitting diodes (PLED), most probably due to the difficulty of obtaining defectless polymer films with good mechanical and optical properties, or conductive polymers with well controlled molecular weights. Here we report our attempts in using electrochemical deposition as appropriate method for preparing ultrathin films of fluorene-based polymers for PLED applications. The properties of these films were evaluated in terms of structural morphology, optical properties, and electrochemical conduction. Thus, electropolymerization of 4,4'-(9-fluorenylidene)-dianiline was performed in dichloromethane solution, at a concentration of 10-2 M, using 0.1 M tetrabutylammonium tetrafluoroborate as electrolyte salt. The potential was scanned between 0 and 1.3 V on the one hand, and 0 - 2 V on the other hand, when polymer films with different structures and properties were obtained. Indium tin oxide-coated glass substrate of different size was used as working electrode, platinum wire as counter electrode and calomel electrode as reference. For each potential range 100 cycles were recorded at a scan rate of 100 mV/s. The film obtained in the potential range from 0 to 1.3 V, namely poly(FDA-NH), is visible to the naked eye, being light brown, transparent and fluorescent, and displays an amorphous morphology. Instead, the electrogrowth poly(FDA) film in the potential range of 0 - 2 V is yellowish-brown and opaque, presenting a self-assembled structure in aggregates of irregular shape and size. The polymers structure was identified by FTIR spectroscopy, which shows the presence of broad bands specific to a polymer, the band centered at approx. 3443 cm-1 being ascribed to the secondary amine. The two polymer films display two absorption maxima, at 434-436 nm assigned to π-π* transitions of polymers, and another at 832 and 880 nm assigned to polaron transitions. The fluorescence spectra indicated the presence of emission bands in the blue domain, with two peaks at 422 and 488 nm for poly (FDA-NH), and four narrow peaks at 422, 447, 460 and 484 nm for poly(FDA), peaks originating from fluorene-containing segments of varying degrees of conjugation. Poly(FDA-NH) exhibited two oxidation peaks in the anodic region and the HOMO energy value of 5.41 eV, whereas poly(FDA) showed only one oxidation peak and the HOMO level localized at 5.29 eV. The electrochemical data are discussed in close correlation with the proposed chemical structure of the electrogrowth films. Further research will be carried out to study their use and performance in light-emitting devices.Keywords: electrogrowth polymer films, fluorene, morphology, optical properties
Procedia PDF Downloads 345
2513 Aircraft Components, Manufacturing and Design: Opportunities, Bottlenecks, and Challenges
Authors: Ionel Botef
Abstract:
Aerospace products operate in very aggressive environments characterized by high temperature, high pressure, large stresses on individual components, the presence of oxidizing and corroding atmosphere, as well as internally created or externally ingested particulate materials that induce erosion and impact damage. Consequently, during operation, the materials of individual components degrade. In addition, the impact of maintenance costs for both civil and military aircraft was estimated at least two to three times greater than initial purchase values, and this trend is expected to increase. As a result, for viable product realisation and maintenance, a spectrum of issues regarding novel processing technologies, innovation of new materials, performance, costs, and environmental impact must constantly be addressed. One of these technologies, namely the cold-gas dynamic-spray process has enabled a broad range of coatings and applications, including many that have not been previously possible or commercially practical, hence its potential for new aerospace applications. Therefore, the purpose of this paper is to summarise the state of the art of this technology alongside its theoretical and experimental studies, and explore how the cold-gas dynamic-spray process could be integrated within a framework that finally could lead to more efficient aircraft maintenance. Based on the paper's qualitative findings supported by authorities, evidence, and logic essentially it is argued that the cold-gas dynamic-spray manufacturing process should not be viewed in isolation, but should be viewed as a component of a broad framework that finally leads to more efficient aerospace operations.Keywords: aerospace, aging aircraft, cold spray, materials
Procedia PDF Downloads 121
2512 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column
Authors: G. Rajapakse, S. Jayasinghe, A. Fleming
Abstract:
This paper aims to experimentally validate the control strategy used for electrical power converters in grid integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC’s unidirectional air turbine-generator output power results in discrete large power pulses. Therefore, the system requires power conditioning prior to integrating to the grid. This is achieved by using a back to back power converter with an energy storage system. A Li-Ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters used in the arrangement are controlled using finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller is to regulate the speed of the turbine at a set rotational speed to uphold the air turbine at a desirable speed range under varying wave conditions. The inverter controller is to maintain the output power to the grid adhering to grid codes. The dc-dc bidirectional converter controller is to set the dc-link voltage at its reference value. The software modeling of the OWC system and FCS-MPC is carried out in the MATLAB/Simulink software using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at AMC Electronic laboratory. The designed FCS-MPC for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instrument’s TIVA C Series EK-TM4C123GXL Launchpad Evaluation Boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive 2kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). The power module consists of a 3-phase inverter bridge with 600V insulated gate bipolar transistors. Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2kW permanent magnet synchronous generator is served as the turbine-generator. This lab-scale setup is used to obtain experimental results. The validation of the FCS-MPC is done by comparing these experimental results to the results obtained by MATLAB/Simulink software results in similar scenarios. The results show that under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-Ion battery energy storage.Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter
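The finite control set idea named above can be sketched generically: at each sampling step, every admissible converter switching state is tried in a one-step prediction of the controlled current, and the state with the lowest cost is applied. The two-level three-phase voltage vectors, the simple RL load prediction and the cost function below are textbook simplifications for illustration, not the authors' converter or turbine-generator models.

```python
import numpy as np

# Eight switching states of a two-level three-phase converter (phase legs a, b, c)
SWITCH_STATES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def state_to_alpha_beta_voltage(state, v_dc):
    """Clarke transform of the phase voltages produced by one switching state."""
    a, b, c = state
    v_alpha = v_dc * (2 * a - b - c) / 3.0
    v_beta = v_dc * (b - c) / np.sqrt(3.0)
    return v_alpha, v_beta

def fcs_mpc_step(i_meas, i_ref, v_dc, R=0.5, L=10e-3, Ts=50e-6):
    """One FCS-MPC iteration for an RL load.
    i_meas, i_ref: measured and reference currents as 2-element alpha-beta arrays.
    Predict the next-step current for every switching state and return the state
    minimising the squared current-tracking error."""
    best_state, best_cost = None, np.inf
    for state in SWITCH_STATES:
        v = np.array(state_to_alpha_beta_voltage(state, v_dc))
        i_pred = i_meas + (Ts / L) * (v - R * i_meas)   # forward-Euler prediction
        cost = np.sum((i_ref - i_pred) ** 2)
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state
```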
Procedia PDF Downloads 113
2511 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using a model trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will be the next Olympics?". The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games held in Tokyo, this is absolutely correct. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs. Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
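The exact scoring rule is not spelled out in the abstract; as a minimal sketch of the idea, the function below treats a prediction as correct if any of the top-n answers matches the reference answer that is valid at the query timestamp. The time-stamped reference format and the text normalisation are assumptions made for illustration.

```python
from datetime import datetime

def normalise(text):
    """Lowercase and strip punctuation for a lenient exact match."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def time_aware_correct(top_n_answers, timestamped_gold, query_time):
    """timestamped_gold: list of (valid_from, answer) sorted by date, e.g.
    [(datetime(2016, 1, 1), "Tokyo"), (datetime(2021, 8, 9), "Paris, 2024")].
    The gold answer in force is the last one whose date <= query_time."""
    current_gold = None
    for valid_from, answer in timestamped_gold:
        if valid_from <= query_time:
            current_gold = answer
    if current_gold is None:
        return False
    return any(normalise(pred) == normalise(current_gold) for pred in top_n_answers)

def accuracy(predictions, golds, query_time=None):
    """predictions: list of top-n answer lists; golds: list of timestamped gold lists."""
    query_time = query_time or datetime.now()
    hits = sum(time_aware_correct(p, g, query_time) for p, g in zip(predictions, golds))
    return hits / len(predictions)
```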
Procedia PDF Downloads 101
2510 Simulation of a Three-Link, Six-Muscle Musculoskeletal Arm Activated by Hill Muscle Model
Authors: Nafiseh Ebrahimi, Amir Jafari
Abstract:
The study of humanoid characters is of great interest to researchers in the fields of robotics and biomechanics. One might want to know the forces and torques required to move a limb from an initial position to a desired destination position. Inverse dynamics is a helpful method to compute the forces and torques for an articulated limb; it enables us to know the joint torques required to rotate a link between two positions. Our goal in this study was to control a human-like articulated manipulator for the specific task of path tracking. For this purpose, the human arm was modeled as a three-link planar manipulator activated by the Hill muscle model. Applying a proportional controller, the values of the forces and torques applied to the joints were calculated by inverse dynamics, and then the joint and muscle force trajectories were computed and presented. To be more precise, the kinematics of the muscle-joint space was formulated first, by which we defined the relationship between the muscle lengths and the geometry of the links and joints. Second, the kinematics of the links was introduced to calculate the position of the end-effector in terms of the geometry. Then we considered the modeling of the Hill muscle dynamics and, after calculating the joint torques, finally applied them to the dynamics of the three-link manipulator obtained from the inverse dynamics to calculate the joint states and to find and control the location of the manipulator's end-effector. The results show that the human arm model was successfully controlled to follow the designated elliptical path precisely. Keywords: arm manipulator, Hill muscle model, six-muscle model, three-link model
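The muscle-to-torque step mentioned above can be illustrated with a simplified, generic Hill-type force computation: an active force-length term scaled by a force-velocity term and activation, plus a passive parallel-elastic term, multiplied by the muscle's moment arm to give its joint torque contribution. The curve shapes and parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def hill_muscle_force(activation, lm, vm, f_max=1000.0, l_opt=0.1, v_max=1.0):
    """Simplified Hill-type muscle force (N).
    activation: neural activation in [0, 1]
    lm: current fibre length (m); vm: shortening velocity (m/s, positive = shortening)
    f_max, l_opt, v_max: assumed max isometric force, optimal length, max velocity."""
    # Active force-length relation: Gaussian around the optimal fibre length
    f_l = np.exp(-((lm / l_opt - 1.0) ** 2) / 0.45)
    # Force-velocity relation: hyperbolic drop when shortening, mild boost when lengthening
    f_v = (1.0 - vm / v_max) / (1.0 + 4.0 * vm / v_max) if vm >= 0 else 1.3
    # Passive parallel-elastic force, engaged beyond the optimal length
    f_pe = 0.0 if lm <= l_opt else 2.0 * ((lm - l_opt) / l_opt) ** 2
    return f_max * (activation * f_l * max(f_v, 0.0) + f_pe)

# Joint torque contribution of one muscle: force times its moment arm about the joint
torque = hill_muscle_force(0.5, 0.11, 0.05) * 0.03  # 0.03 m moment arm (assumed)
```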
Procedia PDF Downloads 142
2509 Variations in Wood Traits across Major Gymnosperm and Angiosperm Tree Species and the Driving Factors in China
Authors: Meixia Zhang, Chengjun Ji, Wenxuan Han
Abstract:
Many wood traits are important functional attributes for tree species, connected with resource competition among species, community dynamics, and ecosystem functions. Large variations in these traits exist among taxonomic categories, but variation in these traits between gymnosperms and angiosperms is still poorly documented. This paper explores the systematic differences in 12 traits between the two tree categories and the potential effects of environmental factors and life form. Based on a database of wood traits for major gymnosperm and angiosperm tree species across China, the values of 12 wood traits and their driving factors in gymnosperms vs. angiosperms were compared. The results are summarized below: i) Means of wood traits were all significantly lower in gymnosperms than in angiosperms. ii) Air-dried density (ADD) and tangential shrinkage coefficient (TSC) reflect the basic information of wood traits for gymnosperms, while ADD and radial shrinkage coefficient (RSC) represent those for angiosperms, providing higher explanation power when used as the evaluation index of wood traits. iii) For both gymnosperm and angiosperm species, life form exhibits the largest explanation rate for large-scale spatial patterns of ADD, TSC (RSC), climatic factors the next, and edaphic factors have the least effect, suggesting that life form is the dominant factor controlling spatial patterns of wood traits. Variations in the magnitude and key traits between gymnosperms and angiosperms and the same dominant factors might indicate the evolutionary divergence and convergence in key functional traits among woody plants.Keywords: allometry, functional traits, phylogeny, shrinkage coefficient, wood density
Procedia PDF Downloads 276
2508 Study of the Tribological Behavior of a Pin on Disc Type of Contact
Authors: S. Djebali, S. Larbi, A. Bilek
Abstract:
The present work aims at contributing to the study of the complex phenomenon of wear in pin-on-disc contact under dry sliding friction between two material couples (bronze/steel, and unsaturated polyester, virgin and filled with graphite powder, against steel). The work consists of determining the coefficient of friction, studying the influence of the tribological parameters on this coefficient, and determining the mass loss and the wear rate of the pin. The study also highlights the influence of the addition of graphite powder on the tribological properties of the polymer constituting the pin. The experiments are carried out on a pin-on-disc tribometer that we designed and manufactured. Tests are conducted according to the standards DIN 50321 and DIN EN 50324. The discs are made of annealed XC48 steel and of quenched and tempered XC48 steel. The main results are described hereafter. Increasing the normal load and the sliding speed increases the friction coefficient, whereas increasing the graphite percentage and the hardness of the disc surface contributes to its reduction. The mass loss also increases with the normal load. The influence of the normal load on the friction coefficient is more significant than that of the sliding speed, and the effect of the sliding speed decreases at high speed values. Increasing the amount of graphite powder leads to a decrease of the coefficient of friction, the mass loss and the wear rate. The addition of graphite to the UP resin is beneficial; it plays the role of a solid lubricant. Keywords: bronze, friction coefficient, graphite, mass loss, polyester, steel, wear rate
Procedia PDF Downloads 345
2507 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy performance over time. The analysis utilizes field failure data and employs the weibull distribution to determine the reliability and in turn to understand the improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error which is the root mean square (RMS) error of differences between ground control point coordinates observed on the product and the map and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine if the data exhibits an infant stage or if it has transitioned into the operational phase. The shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge as it is crucial to eliminate residual infant mortality or wear-out from the model, as it can significantly increase the total failure rate. To address this, an approach utilizing the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of different phases in the lifetime and the time required for stabilization. This approach also helps in understanding if the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data and whether the thresholds for the infant period and wear-out phase are accurately estimated by validating the data in individual phases with Weibull distribution curve fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regards to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regards to location accuracy, contributing to enhanced accuracy in satellite-based applications.Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
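As a rough sketch of the two statistical steps mentioned above, the code below computes the Laplace trend statistic for a series of failure times (to check for residual infant mortality or wear-out trends) and fits a two-parameter Weibull distribution to times-to-failure. The use of scipy's `weibull_min.fit` with the location fixed at zero, and the data values, are illustrative assumptions, not the mission data.

```python
import numpy as np
from scipy.stats import weibull_min

def laplace_trend_statistic(failure_times, observation_end):
    """Laplace (centroid) test for trend in a point process of failures observed
    on (0, T]. U is approximately N(0,1) under a homogeneous Poisson process;
    U << 0 suggests reliability growth (end of infant mortality), U >> 0 wear-out."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    return (t.mean() - observation_end / 2.0) / (observation_end * np.sqrt(1.0 / (12.0 * n)))

def fit_weibull(times_to_failure):
    """Two-parameter Weibull fit (location fixed at 0); returns (shape, scale).
    shape < 1: infant mortality; ~1: random failures; > 1: wear-out."""
    shape, _, scale = weibull_min.fit(times_to_failure, floc=0.0)
    return shape, scale

# Illustrative data (months until the location-accuracy limit is exceeded)
ttf = [14.0, 19.5, 23.0, 31.0, 35.5, 42.0]
print(laplace_trend_statistic(np.cumsum(ttf), observation_end=200.0))
print(fit_weibull(ttf))
```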
Procedia PDF Downloads 65
2506 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can suffer from observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures; gland regions of the WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation or normal tissue, corresponding to tumor with clear gland structure, unclear gland structure, no gland structure, and non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the receiver operating characteristic (ROC) reached 0.981 and accuracy reached 0.932 on the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the low-, medium-, high-differentiation and normal tissue groups on the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the "Maxstat" method, which is almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were shown to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated from the above models. The differentiation grade was shown to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the fields of therapy decision making and personalized treatment. Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
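The mapping from gland-forming percentage to WHO grade cited above is easy to express directly; the sketch below applies those thresholds to a predicted gland-forming fraction and also returns a normalized 0-1 grade as mentioned in the abstract. How the fraction is aggregated from per-pixel model outputs, and the direction of the 0-1 normalization, are assumptions here.

```python
def gland_forming_fraction(gland_mask, tumor_mask):
    """Fraction of tumor pixels predicted to lie inside gland structures.
    Both inputs are boolean arrays (e.g. numpy) from the two models."""
    tumor_pixels = tumor_mask.sum()
    return (gland_mask & tumor_mask).sum() / tumor_pixels if tumor_pixels else 0.0

def who_grade(fraction):
    """WHO grade from the gland-forming percentage (>95, 50-95, 5-50, <5)."""
    pct = 100.0 * fraction
    if pct > 95:
        return "well-differentiated"
    if pct >= 50:
        return "moderately-differentiated"
    if pct >= 5:
        return "poorly-differentiated"
    return "undifferentiated"

def continuous_grade(fraction):
    """Normalized differentiation grade on a 0-1 scale (1 = least differentiated)."""
    return 1.0 - max(0.0, min(1.0, fraction))
```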
Procedia PDF Downloads 1342505 Interpreting Form Based Code in Historic Residential Corridor
Authors: Diljan C. K.
Abstract:
Every location on the planet has a history and culture that give it its own identity and character, making it distinct from others. In today's urbanised world, it is fashionable to remould this original character and impression in a contemporary style. The new character and impression of places show a complete detachment from their roots. The heritage and cultural values of the place are replaced by new impressions, and as a result, places eventually lose their identity and character and cannot be sustained. In this situation, form-based coding acts as a tool in the urban design process, helping to arrive at solutions that strongly bind individuals to their neighbourhood and that are closely related to culture through the physical spaces they are associated with. Form-based code was developed by pioneers of New Urbanism in 1987 in the United States of America. Since then, it has been used in various projects inside and outside the USA at varied scales, from the design of a single building to the design of a whole community. This research makes an effort to interpret form-based code in historic corridors to establish the association of physical form and space with the public realm, so as to uphold the context and culture. Many historic corridors are undergoing a tremendous transformation in their physical form that disregards their culture and context, and as a result they risk losing their identity in form and function. In the case of Valiyashala in Trivandrum, whose physical form is transforming in a way that threatens the loss of its identity, form-based code would be a suitable tool to strengthen its historical value. The study concludes by analysing the existing code (KMBR) applicable to Valiyashala against form-based code to identify the requirements of a form-based code for Valiyashala.Keywords: form based code, urban conservation, heritage, historic corridor
Procedia PDF Downloads 1092504 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine the effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show with all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit between the logit link function and the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, the prediction accuracy of the logit or probit link function is not sensitive to the distributional assumption on daily milk intake from bottles in toddlers not yet weaned off the bottle.Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
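A two-part analysis of this kind can be approximated in Python with statsmodels by fitting the two components separately: a mixed-effects logistic model for whether any bottle milk was consumed at a given assessment, and a linear mixed model on the log of intake among the positive observations. The sketch below is a simplified two-stage approximation of the joint logit-log-normal model discussed above, with hypothetical column names (ml, group, month, id) and a hypothetical data file.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per toddler ('id') per assessment ('month'),
# with daily milk intake from bottles in ml ('ml') and intervention arm ('group').
df = pd.read_csv("bottle_weaning.csv")
df["any_intake"] = (df["ml"] > 0).astype(int)

# Part 1 (logit part): mixed-effects logistic regression for the probability of any
# bottle milk intake, with a random intercept per toddler (variational Bayes fit).
binary_part = BinomialBayesMixedGLM.from_formula(
    "any_intake ~ group + month", {"toddler": "0 + C(id)"}, df).fit_vb()

# Part 2 (log-normal part): linear mixed model on log intake among positive observations.
positive = df[df["ml"] > 0].copy()
positive["log_ml"] = np.log(positive["ml"])
continuous_part = smf.mixedlm("log_ml ~ group + month", positive,
                              groups=positive["id"]).fit()

print(binary_part.summary())
print(continuous_part.summary())
```

A joint fit of the two parts with shared or correlated random effects, as in the models compared in the abstract, would typically require specialized software (for example SAS NLMIXED or a Bayesian sampler); the two-stage sketch is only meant to make the structure of the model concrete.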
Procedia PDF Downloads 3492503 Rapid Separation of Biomolecules and Neutral Analytes with a Cationic Stationary Phase by Capillary Electrochromatography
Authors: A. Aslihan Gokaltun, Ali Tuncel
Abstract:
The unique properties of capillary electrochromatography (CEC), such as high performance, high selectivity, and low consumption of both reagents and analytes, make this technique an attractive one for the separation of biomolecules, including nucleosides and nucleotides, peptides, proteins, and carbohydrates. Monoliths have become a well-established separation medium for CEC in a format that can be compared to a single large 'particle' that does not include interparticular voids. Convective flow through the pores of the monolith significantly accelerates the rate of mass transfer and enables a substantial increase in the speed of the separation. In this work, we propose a new approach for the preparation of a cationic monolithic stationary phase for capillary electrochromatography. Instead of utilizing a charge-bearing monomer during polymerization, the desired charge-bearing group is generated on the capillary monolith after polymerization by using the reactive moiety of the monolithic support via a simple one-pot reaction. The optimized monolithic column compensates for the disadvantages of frequently used reversed phases, which are not well suited to the separation of polar solutes. Rapid separation and high column efficiencies are achieved for the separation of neutral analytes, nucleic acid bases, and nucleosides in reversed-phase mode. The capillary monolith showed satisfactory hydrodynamic permeability and mechanical stability, with relative standard deviation (RSD) values below 2%. This promising new reactive support, which offers 'ligand selection flexibility' due to its reactive functionality, represents a new family of separation media for CEC.Keywords: biomolecules, capillary electrochromatography, cationic monolith, neutral analytes
Procedia PDF Downloads 212