Search results for: isolation forest method
14240 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT
Authors: R. R. Ramsheeja, R. Sreeraj
Abstract:
A wide range of medical imaging modalities is available nowadays for viewing internal structures of the human body, such as the liver, brain, and kidney. Computed tomography (CT) is one of the most significant of these modalities. This paper studies the use of automatic computer-aided techniques on CT liver images to calculate the volume of a liver tumor. A segmentation method is proposed for detecting the tumor from the CT scan: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region-of-interest (ROI) based method helps to differentiate the features and has a significant impact on classification performance. Owing to the characteristics of liver tumor lesions, feature selection presents inherent difficulties. For better performance, a novel system is introduced in which multiple-ROI-based feature selection and classification are performed. Obtaining relevant features is important for the better generalization performance of the Support Vector Machine (SVM) classifier. The proposed system improves classification performance while significantly reducing the number of features used. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful for saving human lives.
Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification
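The segmentation step described above (denoising followed by adaptive thresholding) can be illustrated with a minimal sketch. This is not the authors' implementation: the tiny 4×4 "slice", the 3×3 block size, and the local-mean rule are assumptions chosen purely for demonstration.

```python
def adaptive_threshold(img, block=3, c=0.0):
    """Mean adaptive threshold: a pixel is foreground when it exceeds the
    local-neighbourhood mean minus a constant c. A real pipeline would
    first denoise the image, e.g. with a Gaussian filter."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the (clipped) block x block neighbourhood
            vals = [img[x][y]
                    for x in range(max(0, i - r), min(h, i + r + 1))
                    for y in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            out[i][j] = 1 if img[i][j] > mean - c else 0
    return out

# Tiny synthetic "CT slice": a bright 2x2 lesion on a darker background
img = [[10, 10, 10, 10],
       [10, 80, 80, 10],
       [10, 80, 80, 10],
       [10, 10, 10, 10]]
mask = adaptive_threshold(img)
```

The resulting binary mask isolates the bright region; in the paper's pipeline, features of such ROIs would then feed the SVM classifier.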
Procedia PDF Downloads 509
14239 Inversion of Gravity Data for Density Reconstruction
Authors: Arka Roy, Chandra Prakash Dubey
Abstract:
Inverse problems are generally used to recover hidden information from externally available data. Here, the vertical component of the gravity field is used to calculate the underlying density structure. Ill-posedness is the main obstacle for any inverse problem; linear regularization based on the Tikhonov formulation is used, with an appropriate choice of SVD and GSVD components. For handling real data, the noise level must be kept low relative to the signal to obtain a reliable solution. In our study, 2D and 3D synthetic models on rectangular grids are used for the gravity-field calculation and the corresponding inversion for density reconstruction. A fine grid is also considered in order to capture irregular structures. Keeping the algebraic ambiguity in mind, the number of observation points should exceed the number of model parameters. The Picard plot is presented here for choosing the appropriate, controlling eigenvalues for a regularized solution. Another important tool is the depth resolution plot (DRP), which is generally used to study how the inversion is influenced by the regularization and discretization. Our study further involves the inversion of real gravity data from the Vredefort Dome, South Africa. Applying our method to these data yields a density structure in good agreement with the known formations in that region, providing additional support for our method.
Keywords: depth resolution plot, gravity inversion, Picard plot, SVD, Tikhonov formulation
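The Tikhonov-regularized solution via the SVD, whose filter factors are exactly what a Picard plot inspects, can be sketched as follows. The 2×2 matrix and noise level are toy assumptions, not the authors' gravity kernel.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution x = sum_i f_i (u_i^T b / s_i) v_i,
    with filter factors f_i = s_i^2 / (s_i^2 + lam^2) damping the
    small-singular-value components that amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)       # filter factors in (0, 1]
    coef = f * (U.T @ b) / s         # filtered SVD coefficients
    return Vt.T @ coef

# Ill-conditioned toy "sensitivity" matrix and noisy data
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
b = A @ x_true + 1e-4                # small additive perturbation

x_reg = tikhonov_svd(A, b, lam=1e-2)
```

With lam = 0 this reduces to the unregularized least-squares solution; increasing lam shrinks the solution norm, which is the trade-off the Picard plot helps to choose.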
Procedia PDF Downloads 212
14238 Comparison of the Dose Reached to the Rectum and Bladder in Two Treatment Methods by Tandem and Ovoid and Tandem and Ring in the High Dose Rate Brachytherapy of Cervical Cancer
Authors: Akbar Haghzadeh Saraskanroud, Amir Hossein Yahyavi Zanjani, Niloofar Kargar, Hanieh Ahrabi
Abstract:
Cervical cancer refers to an unusual growth of cells in the cervix, the lower part of the uterus that connects to the vagina. Various risk factors, such as human papillomavirus (HPV), a weakened immune system, smoking or breathing in secondhand smoke, reproductive factors, and obesity, play important roles in causing most cervical cancers. When cervical cancer occurs, surgery is often the first treatment option. Other treatments may include chemotherapy and targeted therapy medicines, and radiation therapy with high-energy photon beams may also be used; sometimes a combined treatment of radiation with low-dose chemotherapy is applied. Intracavitary brachytherapy is an integral part of radiotherapy for locally advanced gynecologic malignancies such as cervical cancer, and different applicator combinations are available for this purpose, two of which are tandem and ovoid (T-Ovoid) and tandem and ring (T-Ring). This study evaluated the dose differences between these two methods in the organs at risk: the rectum, sigmoid, and bladder. The treatment plans were simulated with the Oncentra treatment planning system using tandems, ovoids, and rings of different sizes. CT images of 23 patients treated with the HDR-BT Elekta Flexitron system were used for this study. Contouring of the HR-CTV, rectum, and bladder was performed for all patients; the doses received by 0.1 and 0.2 cc volumes of the organs at risk were then obtained and compared for the two methods, T-Ovoid and T-Ring. From the dose measurements at points A and B and in the volumes specified by the ICRU, comparing tandem and ring to tandem and ovoid, the total dose to the rectum was lower by about 11% and to the bladder by about 7%, while HR-CTV coverage was about 7% better.
Figure 1 shows the decrease in rectum dose in the T-Ring method compared to T-Ovoid, Figure 2 indicates the decrease in bladder dose in the T-Ring method compared to T-Ovoid, and Figure 3 illustrates the HR-CTV coverage in the T-Ring method compared to T-Ovoid.
Keywords: cervical cancer, brachytherapy, rectum, tandem and ovoid, tandem and ring
Procedia PDF Downloads 44
14237 Recommended Practice for Experimental Evaluation of the Seepage Sensitivity Damage of Coalbed Methane Reservoirs
Authors: Hao Liu, Lihui Zheng, Chinedu J. Okere, Chao Wang, Xiangchun Wang, Peng Zhang
Abstract:
The coalbed methane (CBM) extraction industry (an unconventional energy source) has yet to promulgate an established standard code of practice for the experimental evaluation of sensitivity damage of coal samples. The experimental procedures in previous research have mainly followed the industry standard for conventional oil and gas reservoirs (CIS). However, the existing evaluation method ignores certain critical differences between CBM reservoirs and conventional reservoirs, which can result in an inaccurate evaluation of sensitivity damage and, eventually, poor decisions regarding formation damage prevention measures. In this study, we propose improved experimental guidelines for evaluating seepage sensitivity damage of CBM reservoirs that address the shortcomings of the existing methods. The proposed method was established via a theoretical analysis of the main drawbacks of the existing methods and validated through comparative experiments. The results show that the proposed evaluation technique provides reliable experimental results that better reflect actual reservoir conditions and can correctly guide future development of CBM reservoirs. This study pioneers research on the optimization of experimental parameters for efficient exploration and development of CBM reservoirs.
Keywords: coalbed methane, formation damage, permeability, unconventional energy source
Procedia PDF Downloads 128
14236 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization
Authors: Zhiyan Meng, Dan Liu, Jintao Meng
Abstract:
Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large parameter counts are not suitable for low-computation environments like mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model that reduces the number of parameters through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high on downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIoT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to methods with ten times as many parameters.
Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model
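A Squeeze-and-Excitation block of the kind named above can be sketched in a few lines of NumPy: squeeze by global average pooling, excite through a bottleneck of two fully connected layers, then rescale each channel. The channel count, reduction ratio, and random weights here are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def se_block(x, W1, W2):
    """Squeeze-and-Excitation over a (channels, length) feature map.
    W1: (C/r, C) reduction weights, W2: (C, C/r) expansion weights."""
    z = x.mean(axis=1)                       # squeeze: global average pool -> (C,)
    h = np.maximum(W1 @ z, 0.0)              # excitation: FC + ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))      # FC + sigmoid -> per-channel gate in (0, 1)
    return x * s[:, None]                    # rescale each channel by its gate

rng = np.random.default_rng(0)
C, L, r = 8, 16, 4                           # assumed channels, sequence length, reduction
x = rng.normal(size=(C, L))
W1 = rng.normal(size=(C // r, C)) * 0.1
W2 = rng.normal(size=(C, C // r)) * 0.1
y = se_block(x, W1, W2)
```

The bottleneck (C²/r weights twice, instead of C² once) is what keeps the recalibration cheap, which fits the paper's goal of a small parameter budget.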
Procedia PDF Downloads 30
14235 A Fast Method for Graphene-Supported Pd-Co Nanostructures as Catalyst toward Ethanol Oxidation in Alkaline Media
Authors: Amir Shafiee Kisomi, Mehrdad Mofidi
Abstract:
Nowadays, fuel cells have been widely studied as a promising alternative power source owing to their safety, high energy density, low operating temperatures, renewability, and low emission of environmental pollutants. Core-shell nanoparticles are broadly described as a combination of a shell (outer-layer material) and a core (inner material), and their characteristics depend greatly on the dimensions and composition of the core and shell; changing the constituent materials or the core-to-shell ratio can create distinctive properties. In this study, a fast technique for the fabrication of a Pd-Co/G/GCE modified electrode is offered. A thermal decomposition reaction of cobalt(II) formate salt over the surface of a graphene/glassy carbon electrode (G/GCE) is utilized for the synthesis of Co nanoparticles. The Pd-Co nanoparticles decorating the graphene are created by the following method: (1) thermal decomposition of cobalt(II) formate salt and (2) galvanic replacement of Co by Pd²⁺. The physical and electrochemical performance of the as-prepared Pd-Co/G electrocatalyst is studied by field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDS), cyclic voltammetry (CV), and chronoamperometry (CA). Galvanic replacement is utilized as a facile and spontaneous approach for the growth of Pd nanostructures. The Pd-Co/G is used as an anode catalyst for ethanol oxidation in alkaline media. The Pd-Co/G not only delivered a much higher current density (262.3 mA cm⁻²) than the Pd/C catalyst (32.1 mA cm⁻²), but also demonstrated a negative shift of the onset oxidation potential (-0.480 vs. -0.460 mV) in the forward sweep.
Moreover, the novel Pd-Co/G electrocatalyst exhibits a large electrochemically active surface area (ECSA), lower apparent activation energy (Ea), and higher durability and poisoning tolerance compared to the Pd/C catalyst. The paper demonstrates that the catalytic activity and stability of the Pd-Co/G electrocatalyst are higher than those of the Pd/C electrocatalyst toward ethanol oxidation in alkaline media.
Keywords: thermal decomposition, nanostructures, galvanic replacement, electrocatalyst, ethanol oxidation, alkaline media
Procedia PDF Downloads 153
14234 Effects of Packaging Method, Storage Temperature and Storage Time on the Quality Properties of Cold-Dried Beef Slices
Authors: Elif Aykın Dinçer, Mustafa Erbaş
Abstract:
The effects of packaging method (modified atmosphere packaging (MAP) and aerobic packaging (AP)), storage temperature (4 and 25°C) and storage time (0, 15, 30, 45, 60, 75 and 90 days) on the chemical, microbiological and sensory properties of cold-dried beef slices were investigated. Beef slices were dried at 10°C and 3 m/s after pasteurization with hot steam, and then packaged in order to determine the effect of the different storage conditions. As the storage temperature and time increased, the amount of CO2 decreased in the MAP-packed samples, while in the AP-packed samples the amount of O2 decreased and the amount of CO2 increased. The water activity value of the stored beef slices decreased from 0.91 to 0.88 during 90 days of storage. The pH, TBARS and NPN-M values of the stored beef slices were higher in the AP-packed samples; over the 90 days of storage, pH increased from 5.68 to 5.93, TBARS from 25.25 to 60.11 μmol MDA/kg, and NPN-M from 4.37 to 6.66 g/100 g. The microbiological quality of the MAP-packed samples was higher, with mean counts of TAMB, TPB, Micrococcus/Staphylococcus, LAB and yeast-mold of 4.10, 3.28, 3.46, 2.99 and 3.14 log cfu/g, respectively. Sensory evaluation showed that the quality of samples packed with MAP and stored at low temperature was higher; the shelf life was 90 days at 4°C and 75 days at 25°C for the MAP treatment, and 60 days at 4°C and 45 days at 25°C for the AP treatment.
Keywords: cold drying, dried meat, packaging, storage
Procedia PDF Downloads 150
14233 [Keynote Talk]: Morphological Analysis of Continuous Graphene Oxide Fibers Incorporated with Carbon Nanotube and MnCl₂
Authors: Nuray Ucar, Pelin Altay, Ilkay Ozsev Yuksek
Abstract:
Graphene oxide fibers have recently received increasing attention due to their excellent properties, such as high specific surface area, high mechanical strength, good thermal properties, and high electrical conductivity. They have shown notable potential in various applications, including batteries, sensors, filtration and separation, and wearable electronics. Carbon nanotubes (CNTs) have unique structural, mechanical, and electrical properties and can be used together with graphene oxide fibers in several application areas, such as lithium-ion batteries and wearable electronics. Metal salts, which can be converted into metal ions and metal oxides, can also be used in several application areas, such as batteries, natural gas purification, filtration, and absorption. This study investigates the effects of CNTs and a metal salt (MnCl₂) on the morphological structure of graphene oxide fibers. The graphene oxide dispersion was manufactured by the modified Hummers method, and continuous graphene oxide fibers were produced by wet spinning. The CNTs and MnCl₂ were incorporated into the coagulation baths during the wet spinning process. The produced composite continuous fibers were analyzed with SEM, SEM-EDS, and AFM microscopies, and as-spun fiber counts were measured.
Keywords: continuous graphene oxide fiber, Hummers' method, CNT, MnCl₂
Procedia PDF Downloads 176
14232 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis
Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan
Abstract:
Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder that is characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For the discovery of phages, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to next-generation sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and by the ability of the cocktail to completely lyse liquid bacterial culture under different growth conditions (e.g., temperature, bacterial stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus strains (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages of different taxa, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS confirmed the absence of toxic elements in the phages' genomes. The host range of the individual phages, as measured by EOP, ranged from 41% (48/118) to 79% (93/118). Host range studies in liquid culture revealed that a subset of the phages can infect a broad range of S. aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages yielded a broad-host-range cocktail that infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus
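The EOP-based host-range criterion described in the Methods (a strain counts as within the host range when EOP exceeds 0.1) can be sketched as follows. The strain names and titers below are hypothetical placeholders, not the study's data.

```python
def efficiency_of_plating(titer_test, titer_reference):
    """EOP: phage titer (pfu/mL) on a test strain divided by the titer
    on the reference propagation host."""
    return titer_test / titer_reference

def host_range(eop_by_strain, threshold=0.1):
    """Fraction of strains whose EOP meets the cutoff, plus the strain list."""
    susceptible = [s for s, eop in eop_by_strain.items() if eop >= threshold]
    return len(susceptible) / len(eop_by_strain), susceptible

# Hypothetical titers for one phage on four strains (reference titer 1e9 pfu/mL)
eops = {s: efficiency_of_plating(t, 1e9)
        for s, t in {"SA01": 8e8, "SA02": 5e7, "SA03": 2e9, "SA04": 1e6}.items()}
coverage, hits = host_range(eops)
```

Combining such per-phage susceptibility sets over a candidate cocktail (a union over phages) is how a broad-host-range coverage figure like 92% (109/118) would be tallied.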
Procedia PDF Downloads 153
14231 Influence of P-Y Curves on Buckling Capacity of Pile Foundation
Authors: Praveen Huded, Suresh Dash
Abstract:
Pile foundations are among the most preferred deep foundation systems for high-rise or heavily loaded structures. In many instances, the failure of pile-founded structures in liquefiable soils has been observed, even in recent earthquakes. Recent centrifuge and shake-table experiments on two-layered soil systems have credibly shown that pile foundations can fail by buckling, as the pile behaves as an unsupported slender structural element once the surrounding soil liquefies. The buckling capacity, however, depends largely on the depth of liquefied soil and its residual strength, so it is essential to check the pile against possible buckling failure. Beam on non-linear Winkler foundation analysis is one of the most efficient methods to model pile-soil behavior in liquefiable soil. The pile-soil interaction is modelled through p-y springs, and different authors have proposed different types of p-y curves for liquefiable soil. In the present paper, the influence of two such p-y curves on the buckling capacity of a pile foundation is studied, considering initial geometric imperfections and the non-linear behavior of the pile. The proposed method is validated against experimental results. A significant difference in buckling capacity is observed between the two p-y curves used in the analysis. A parametric study is conducted to understand the influence of pile diameter, pile flexural rigidity, different initial geometric imperfections, and different soil relative densities on the buckling capacity of the pile foundation.
Keywords: pile foundation, liquefaction, buckling load, non-linear p-y curve, OpenSees
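The underlying idea, that a pile spanning a liquefied layer behaves as an unsupported column, can be illustrated with the classical Euler formula. The pile properties and effective-length factor below are illustrative assumptions only; the paper's beam-on-non-linear-Winkler-foundation analysis with p-y springs is far more detailed.

```python
from math import pi

def euler_buckling_load(E, I, unsupported_length, k_eff=2.0):
    """Euler critical load P_cr = pi^2 EI / (k L)^2 for a pile treated as an
    unsupported column over the liquefied depth. k_eff is the effective-length
    factor (here assumed ~2, e.g. a free-head pile fixed below the layer)."""
    return pi**2 * E * I / (k_eff * unsupported_length) ** 2

# Hypothetical concrete pile: E = 30 GPa, 0.5 m diameter circular section
E = 30e9                    # Pa
d = 0.5                     # m
I = pi * d**4 / 64          # second moment of area, m^4
loads = {L: euler_buckling_load(E, I, L) for L in (5.0, 10.0, 15.0)}  # liquefied depths, m
```

Because P_cr scales with 1/L², doubling the liquefied depth cuts the elastic buckling capacity to a quarter, which is why the depth of liquefaction dominates the check.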
Procedia PDF Downloads 165
14230 Local Buckling of Web-Core and Foam-Core Sandwich Panels
Authors: Ali N. Suri, Ahmad A. Al-Makhlufi
Abstract:
Sandwich construction is widely accepted as a method of construction, especially in the aircraft industry. It is a type of stressed-skin construction formed by bonding two thin faces to a thick core. The faces resist all of the applied edge loads and provide all, or nearly all, of the required rigidity; the core spaces the faces to increase the cross-section's moment of inertia about the common neutral axis and transmits shear between them, provided that a perfect bond between core and faces is made. The face sheets can be metal or reinforced-plastic laminates; the core can be metallic (thin sheets forming corrugations or honeycomb) or non-metallic (balsa wood, plastic foams, or honeycomb made of reinforced plastics). Under in-plane axial loading, web-core and web-foam-core sandwich panels can fail by local buckling of the plates forming the cross section, with a buckling wavelength of the order of the spacing between webs. In this study, local buckling of web-core and web-foam-core sandwich panels is analyzed for given facing and core materials and given overall panel dimensions, for different combinations of cross-section geometries. The finite strip method is used for the analysis, and a Fortran-based computer program was developed and used.
Keywords: local buckling, finite strip, sandwich panels, web and foam core
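The local buckling of a plate element between webs can be estimated from the classical plate-buckling formula σ_cr = k π² E / (12(1 − ν²)) · (t/b)². The material values and buckling coefficient k below are illustrative assumptions, not the panels analyzed with the finite strip method in the paper.

```python
from math import pi

def plate_buckling_stress(E, nu, t, b, k=4.0):
    """Classical elastic critical stress of a plate under uniaxial compression.
    k = 4 is the textbook coefficient for a long, simply supported plate;
    a web restrained by faces or foam would warrant a different k."""
    return k * pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / b) ** 2

# Hypothetical aluminium face plate: E = 70 GPa, nu = 0.33,
# thickness t = 1 mm, web spacing b = 40 mm
sigma_cr = plate_buckling_stress(70e9, 0.33, 1e-3, 40e-3)
```

The (t/b)² dependence is why the web spacing, i.e. the cross-section geometry varied in the study, controls the local buckling stress so strongly: halving the spacing quadruples σ_cr.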
Procedia PDF Downloads 351
14229 Phytochemical Screening, Antioxidant Potential, and Mineral Composition of Dried Abelmoschus esculentus L. Fruits Consume in Gada Area of Sokoto State, Nigeria
Authors: I. Sani, F. Bello, I. M. Fakai, A. Abdulhamid
Abstract:
The Abelmoschus esculentus L. fruit is very common, especially in the northern part of Nigeria, but people are unaware of its medicinal and pharmacological benefits. Preliminary phytochemical screening, antioxidant potential, and mineral composition of the dried form of this fruit were determined. The phytochemical screening was conducted using standard methods. The antioxidant potential was screened using the ferric reducing antioxidant power (FRAP) assay, while the mineral composition was analyzed using an atomic absorption spectrophotometer after wet digestion. The qualitative phytochemical screening revealed that the fruits contain saponins, flavonoids, tannins, steroids, and terpenoids, while anthraquinones, alkaloids, phenols, glycosides, and phlobatannins were not detected. The quantitative analysis revealed that the fruits contain saponins (380 ± 0.020 mg/g), flavonoids (240 ± 0.01 mg/g), and tannins (21.71 ± 0.66 mg/ml). The antioxidant potential was determined to be 54.1 ± 0.19%. The mineral analysis revealed that 100 g of the fruits contains 97.52 ± 1.04 mg of magnesium (Mg), 94.53 ± 3.21 mg of calcium (Ca), 77.10 ± 0.79 mg of iron (Fe), 47.14 ± 0.41 mg of zinc (Zn), 43.96 ± 1.49 mg of potassium (K), 42.02 ± 1.09 mg of sodium (Na), 0.47 ± 0.08 mg of copper (Cu), and 0.10 ± 0.02 mg of lead (Pb). These results show that the Abelmoschus esculentus L. fruit is a good source of antioxidants and contains an appreciable amount of phytochemicals; therefore, it has some pharmacological attributes. The fruit can also serve as a nutritional supplement for Mg, Ca, Fe, Zn, K, and Na, but it is a poor source of Cu and contains no significant amount of Pb.
Keywords: Abelmoschus esculentus fruits, antioxidant potential, mineral composition, phytochemical screening
Procedia PDF Downloads 376
14228 Aflatoxins Characterization in Remedial Plant-Delphinium denudatum by High-Performance Liquid Chromatography–Tandem Mass Spectrometry
Authors: Nadeem A. Siddique, Mohd Mujeeb, Kahkashan
Abstract:
Introduction: The objective of the present work is to study the occurrence of the aflatoxins B1, B2, G1, and G2 in remedial plants, specifically in Delphinium denudatum. The aflatoxins were analysed by high-performance liquid chromatography–tandem quadrupole mass spectrometry with electrospray ionization (HPLC–MS/MS), and immunoaffinity column chromatography was used for the extraction and purification of the aflatoxins. PDA medium was selected for the fungal count. Results: A good linear relationship was found for AFB1, AFB2, AFG1, and AFG2 at 1–10 ppb (r > 0.9995). The analyte recovery at three different spiking levels was 88.7–109.1%, with low percent relative standard deviations in each case. The aflatoxins can be separated within 5 to 7 min using an Agilent XDB C18 column. We found that AFB1 and AFB2 were not present in D. denudatum, which was consistent with the exceptionally low number of fungal colonies observed after 6 h of incubation. Conclusion: The developed analytical method is straightforward, simple, accurate, and economical, and can be successfully used to determine the aflatoxins in remedial plants and consequently to control the quality of products. The presence of aflatoxin in the plant extracts correlated with the low fungal load in the remedial plants examined.
Keywords: aflatoxins, Delphinium denudatum, liquid chromatography, mass spectrometry
Procedia PDF Downloads 213
14227 Data Recording for Remote Monitoring of Autonomous Vehicles
Authors: Rong-Terng Juang
Abstract:
Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars are not likely to arrive in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center; when it encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for the remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space; upon arrival, controller area network (CAN) data are mapped onto a time-data two-dimensional space associated with the specific CAN identifier. Secondly, the data are sampled based on differential sampling. Finally, the whole data set is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house-built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), lidar
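The second and third steps above (differential sampling followed by entropy coding) can be sketched as follows. The sample values are hypothetical, and this toy Huffman coder stands in for whichever entropy coder the recorder actually uses.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code from symbol frequencies (textbook Huffman sketch)."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    n = len(heap)
    if n == 1:                              # degenerate case: a single symbol
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # two least-frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], n, merged]); n += 1
    return heap[0][2]

def delta_encode(samples):
    """Differential sampling: keep the first value, then successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

readings = [100, 101, 101, 103, 103, 103, 104]   # hypothetical CAN signal samples
deltas = delta_encode(readings)                  # mostly small, repeated values
code = huffman_code(deltas)
bits = "".join(code[d] for d in deltas)
```

Differencing concentrates slowly varying CAN signals onto a few small values, which is exactly what gives the entropy coder its leverage.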
Procedia PDF Downloads 163
14226 Material Analysis for Temple Painting Conservation in Taiwan
Authors: Chen-Fu Wang, Lin-Ya Kung
Abstract:
For traditional painting materials, artisans used to combine pigments with different binders to create colors. As time went by, painting materials evolved from natural to chemical ones. The vast variety of ingredients used in chemical materials has complicated restoration work and makes conservation more difficult. Conservation also becomes harder when the materials cannot be easily identified; therefore, it is essential to take a more scientific approach to assist conservation work. Painting materials are high-molecular-weight polymers, and their analysis is complicated; contamination such as smoke and dirt can also interfere with the analysis. Current methods for the composition analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and artificially aged them; the aging information was then used to build a database for examining temple painting materials. By observing the FT-IR changes over time, we found that all of the painting materials deteriorate under UV light; only the speed of degradation differs. In the deterioration experiment, acrylic resin resisted better than the others. After collecting the painting materials' aging information by FT-IR, we performed tests on paintings in the temples. It was found that most artisans used tung oil as a painting material, while some paintings used chemical materials. This method now works successfully for identifying painting materials; however, it is destructive and costly. In the future, we will work on how to identify painting materials more efficiently.
Keywords: temple painting, painting material, conservation, FT-IR
Procedia PDF Downloads 188
14225 Elucidation of the Sequential Transcriptional Activity in Escherichia coli Using Time-Series RNA-Seq Data
Authors: Pui Shan Wong, Kosuke Tashiro, Satoru Kuhara, Sachiyo Aburatani
Abstract:
Functional genomics and gene regulation inference have readily expanded our knowledge and understanding of gene interactions with regard to expression regulation. The advancement of time-series transcriptome sequencing brings the ability to study the sequential changes of the transcriptome. The method presented here augments existing regulation networks accumulated in the literature with transcriptome data gathered from time-series experiments to construct a sequential representation of transcription factor activity. The method is applied to a time-series RNA-Seq data set from Escherichia coli as it transitions from growth to stationary phase over five hours. The various metabolic activities in gene regulation processes are investigated by taking advantage of the correlation between regulatory gene pairs to examine their activity on a dynamic network. In particular, the changes in metabolic activity during the phase transition are analyzed, with focus on the pagP gene as well as other associated transcription factors. The visualization of the sequential transcriptional activity is used to describe the change in metabolic pathway activity originating from phoP, the transcription factor regulating pagP. The results show a shift from amino acid and nucleic acid metabolism to energy metabolism during the transition to stationary phase in E. coli.
Keywords: Escherichia coli, gene regulation, network, time-series
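The correlation between regulatory gene pairs over a time series can be computed with a plain Pearson coefficient. The five-point expression profiles below are invented for illustration and are not the study's RNA-Seq measurements.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression profiles over five time points (e.g. hourly samples)
phoP = [1.0, 1.4, 2.1, 2.8, 3.0]   # regulator, rising toward stationary phase
pagP = [0.9, 1.3, 2.0, 2.9, 3.1]   # putative target that tracks the regulator
geneX = [3.0, 2.5, 2.0, 1.4, 1.0]  # invented profile moving the opposite way

r_activating = pearson(phoP, pagP)   # strongly positive -> consistent with activation
r_opposing   = pearson(phoP, geneX)  # strongly negative -> opposing dynamics
```

In a network setting, such pairwise coefficients would weight the literature-derived regulator-target edges, so that only pairs whose dynamics actually co-vary in the experiment are marked active at each time point.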
Procedia PDF Downloads 372
14224 A Low-Cost Dye Solar Cells Based on Ordinary Glass as Substrates
Authors: Sangmo Jon, Ganghyok Kim, Kwanghyok Jong, Ilnam Jo, Hyangsun Kim, Kukhyon Pae, GyeChol Sin
Abstract:
Back-contact dye solar cells (BCDSCs), in which the transparent conductive oxide (TCO) is omitted, have the potential to use intact low-cost general substrates such as glass, metal foil, and paper. Herein, we introduce a facile manufacturing method for a Ti back-contact electrode for the BCDSCs. We found that polylinkers such as poly(butyl titanate) have a strong binding property that makes Ti particles connect with one another. A porous Ti film, which consists of Ti particles of ≤10 μm size connected by a small amount of polylinker, has an excellent low sheet resistance of 10 ohm sq⁻¹ for efficient electron collection in DSCs. This Ti back-contact electrode can be prepared by a facile printing method under normal ambient conditions. Conjugating the new back-contact electrode technology with the traditional monolithic structure using a carbon counter electrode, we fabricated all-TCO-less DSCs. These four-layer-structured DSCs consist of a dye-adsorbed nanocrystalline TiO₂ film on a glass substrate, a porous Ti back-contact layer, a ZrO₂ spacer layer, and a carbon counter electrode in a layered structure. Under AM 1.5G, 100 mW cm⁻² simulated sunlight illumination, the four-layer-structured DSCs with N719 dye and an I⁻/I₃⁻ redox electrolyte achieved PCEs of up to 5.21%.
Keywords: dye solar cells, TCO-less, back contact, printing, porous Ti film
Procedia PDF Downloads 66
14223 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted from the vast amounts of data generated, with the help of data mining algorithms, is an effective way to speed up monitoring and troubleshooting operations in today's large oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and behavior under different conditions, the proposed Bayesian algorithm, using data clustering together with data evaluation and analysis via a colored Petri net, provides an applicable and dynamic model in terms of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability, and speed compared to previous methods.
Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining-reliability
Procedia PDF Downloads 66
14222 A Simplified Method to Assess the Damage of an Immersed Cylinder Subjected to Underwater Explosion
Authors: Kevin Brochard, Herve Le Sourne, Guillaume Barras
Abstract:
The design of a submarine's hull is crucial for its operability and crew safety, but it is also complex. Indeed, engineers need to balance lightness, acoustic discretion, and resistance to both immersion pressure and environmental attacks. Underwater explosions represent a first-rate threat to the integrity of the hull, whose behavior needs to be properly analyzed. The presented work is focused on the development of a simplified analytical method to study the structural response of a deeply immersed cylinder subjected to an underwater explosion. This method aims to provide engineers a quick estimation of the resulting damage, allowing them to simulate a large number of explosion scenarios. The present research relies on the so-called plastic string on plastic foundation model: a two-dimensional boundary value problem for a cylindrical shell is converted to an equivalent one-dimensional problem of a plastic string resting on a non-linear plastic foundation. For this purpose, equivalence parameters are defined and evaluated by making assumptions on the shape of the displacement and velocity fields in the cross-sectional plane of the cylinder. Closed-form solutions for the deformation and velocity profile of the shell are obtained for explosive loading, and compare well with numerical and experimental results. However, the plastic-string model has not yet been adapted for a cylinder in immersion subjected to explosive loading; the effects of fluid-structure interaction have to be taken into account. Moreover, when an underwater explosion occurs, several pressure waves, called secondary waves, are emitted by the gas bubble pulsations. The corresponding loads, which may produce significant damage to the cylinder, must also be accounted for. The analytical developments carried out to solve the above problem of a shock wave impacting a cylinder, considering fluid-structure interaction, will be presented for an unstiffened cylinder.
The resulting deformations are compared to experimental and numerical results for different shock factors and different standoff distances.
Keywords: immersed cylinder, rigid plastic material, shock loading, underwater explosion
Procedia PDF Downloads 337
14221 Cryptography Over Sextic Extension with Cubic Subfield
Authors: A. Chillali, M. Sahmoudi
Abstract:
In this paper, we give a method for encoding the elements of the ring of integers of a sextic extension, namely L = Q(a, b), which is a quadratic extension of the cubic field K = Q(a), where a² is a square-free rational integer and b is a root of an irreducible polynomial of degree 3.
Keywords: coding, integral bases, sextic, quadratic
Procedia PDF Downloads 297
14220 A Rapid Reinforcement Technique for Columns by Carbon Fiber/Epoxy Composite Materials
Authors: Faruk Elaldi
Abstract:
Numerous concrete columns and beams exist in our cities. These columns are mostly exposed to aggressive environmental conditions and earthquakes, and over time they are deteriorated by sand, wind, humidity, and other external agents. After a while, such beams and columns need to be repaired. Within the scope of this study, for the reinforcement of concrete columns, samples were designed and fabricated to be strengthened either with carbon fiber reinforced composite materials or with conventional concrete encapsulation, and they were then subjected to an axial compression test to determine the load-carrying performance before column failure. In the first stage of this study, the concrete column and mold designs were completed for a certain load-carrying capacity. Later, the columns were exposed to environmental deterioration in order to reduce their load-carrying capacity. To reinforce these damaged columns, two methods were applied: concrete encapsulation and wrapping with carbon fiber/epoxy material. In the second stage of the study, the reinforced columns were subjected to the axial compression test and the results obtained were analyzed. Cost and load-carrying performance comparisons were made, and it was found that even though the carbon fiber/epoxy reinforcement method is more expensive, it provides a higher load-carrying capacity and reduces the reinforcement processing period.
Keywords: column reinforcement, composite, earthquake, carbon fiber reinforced
Procedia PDF Downloads 184
14219 Evaluation of Beam Structure Using Non-Destructive Vibration-Based Damage Detection Method
Authors: Bashir Ahmad Aasim, Abdul Khaliq Karimi, Jun Tomiyama
Abstract:
Material aging is one of the vital issues for the civil, mechanical, and aerospace engineering communities. The sustenance and reliability of concrete, the most widely used construction material in the world, is a focal point in civil engineering. For a few decades, researchers have been developing algorithms that can evaluate a structure globally rather than locally, without harming its serviceability or interfering with traffic. These algorithms enable various methods for evaluating structures non-destructively. In this paper, a non-destructive vibration-based damage detection method is adopted to evaluate two concrete beams, one in a healthy state and the other containing a crack near its bottom. The study observes that damage in a structure affects its modal parameters (natural frequency, mode shape, and damping ratio), which are functions of its physical properties (mass, stiffness, and damping). The assessment is carried out by acquiring the natural frequency of the sound beam; next, the vibration response is recorded from the cracked beam, and the two results are compared to quantify the variation in natural frequency between the beams. The study concludes that damage can be detected using the vibration characteristics of a structural member, given the decline in the natural frequency of the cracked beam.
Keywords: concrete beam, natural frequency, non-destructive testing, vibration characteristics
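The frequency-drop criterion behind this method can be sketched numerically: for a single-degree-of-freedom idealization, the natural frequency is f = (1/2π)√(k/m), so a crack that reduces stiffness lowers f. The stiffness, mass, and threshold values below are illustrative assumptions, not data from the study.

```python
import math

def natural_frequency(stiffness_n_per_m, mass_kg):
    """Natural frequency (Hz) of a single-DOF system: f = (1/(2*pi)) * sqrt(k/m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

def damage_detected(f_healthy, f_measured, threshold=0.05):
    """Flag damage when the relative frequency drop exceeds the threshold."""
    return (f_healthy - f_measured) / f_healthy > threshold

# Illustrative values: a crack reduces the effective stiffness by 20%.
f_sound = natural_frequency(2.0e6, 50.0)     # healthy beam
f_cracked = natural_frequency(1.6e6, 50.0)   # cracked beam
print(round(f_sound, 2), round(f_cracked, 2), damage_detected(f_sound, f_cracked))
```

In practice the frequencies come from measured vibration responses rather than known k and m, but the comparison logic is the same.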
Procedia PDF Downloads 112
14218 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that add no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency, but also its ability to produce accurate, tunnel-free (26-connectivity) voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
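As a toy illustration of the scan-line idea (reduced to 2D, where voxels become grid cells), the sketch below rasterizes a triangle with one equidistant horizontal scan-line per cell row, so each cell is emitted exactly once and no redundant writes occur. The triangle coordinates and grid resolution are arbitrary assumptions; the actual method runs as a GLSL compute shader in 3D.

```python
def scanline_cells(triangle, rows):
    """Grid cells covered by equidistant horizontal scan-lines through a triangle.

    One scan-line per row, placed at the row's cell-centre height, so every
    cell is produced exactly once (no repeated-voxel memory traffic)."""
    cells = set()
    for row in range(rows):
        y = row + 0.5                      # scan-line at the cell-centre height
        xs = []
        for (px, py), (qx, qy) in zip(triangle, triangle[1:] + triangle[:1]):
            if py == qy:                   # edge parallel to the scan-line
                continue
            if (py - y) * (qy - y) <= 0:   # edge crosses the scan-line
                t = (y - py) / (qy - py)
                xs.append(px + t * (qx - px))
        if len(xs) >= 2:
            lo, hi = min(xs), max(xs)
            for i in range(int(lo), int(hi) + 1):
                if lo <= i + 0.5 <= hi:    # keep cells whose centre lies on the span
                    cells.add((i, row))
    return cells

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(len(scanline_cells(tri, 4)))
```

The 3D case adds a second scan direction and the Gap Detection pass to catch cells that centre-sampling alone would miss on steep triangles.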
Procedia PDF Downloads 73
14217 Effects of in silico (Virtual Lab) And in vitro (inside the Classroom) Labs in the Academic Performance of Senior High School Students in General Biology
Authors: Mark Archei O. Javier
Abstract:
The Fourth Industrial Revolution (FIR) is a major industrial era characterized by the fusion of technologies that is blurring the lines between the physical, digital, and biological spheres. Since this era teaches us how to thrive in a fast-paced, developing world, it is important to be able to adapt. With this, there is a need to make learning and teaching in the bioscience laboratory more challenging and engaging. The goal of the research is to find out whether using in silico and in vitro laboratory activities, compared to the conventional conduct of laboratory activities, has a positive impact on the academic performance of learners. The potential contribution of the research is that it would improve teachers' methods of delivering content to students for topics that require laboratory activities. This study develops a method by which teachers can provide learning materials to students. A one-tailed t-test for independent samples was used to determine the significance of the difference between the pre- and post-test scores of students. The tests of hypotheses were done at a 0.05 level of significance. Based on the results of the study, the gain scores of the experimental group are greater than those of the control group. This implies that using in silico and in vitro labs for the experimental group is more effective than the conventional method of doing laboratory activities.
Keywords: academic performance, general biology, in silico laboratory, in vivo laboratory, virtual laboratory
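The gain-score comparison can be illustrated with a pooled-variance, one-tailed independent-samples t-test; the scores below are made-up illustrative numbers, not the study's data.

```python
import math

def independent_t(sample_a, sample_b):
    """Pooled-variance t statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    ssa = sum((x - ma) ** 2 for x in sample_a)   # sum of squared deviations, group A
    ssb = sum((x - mb) ** 2 for x in sample_b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(pooled_var * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical gain scores for the experimental and control groups.
t, df = independent_t([12, 15, 14, 10, 13], [8, 9, 7, 10, 6])
print(round(t, 2), df)   # one-tailed critical value at alpha = 0.05, df = 8 is about 1.86
```

A computed t above the one-tailed critical value leads to rejecting the null hypothesis that the experimental group's gains are no larger than the control group's.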
Procedia PDF Downloads 189
14216 In vitro Regeneration of Neural Cells Using Human Umbilical Cord Derived Mesenchymal Stem Cells
Authors: Urvi Panwar, Kanchan Mishra, Kanjaksha Ghosh, ShankerLal Kothari
Abstract:
Background: The increasing prevalence of neurodegenerative diseases has become a global issue for the medical sciences to manage. Adult neural stem cells are rare and require an invasive and painful procedure to obtain from the central nervous system. Mesenchymal stem cell (MSC) therapies have shown remarkable promise in the treatment of various cell injuries and cell loss. MSCs can be derived from various sources such as adult tissues, human bone marrow, umbilical cord blood, and cord tissue. MSCs from these sources have similar proliferation and differentiation capability, but human umbilical cord-derived mesenchymal stem cells (hUCMSCs) have proved more advantageous with respect to cell procurement, differentiation into other cells, preservation, and transplantation. Material and method: The human umbilical cord is easily obtainable and non-controversial compared to bone marrow and other adult tissues. The umbilical cord can be collected after delivery of the baby, and its tissue can be cultured using the explant culture method. Cell culture media such as DMEM/F12 + 10% FBS and DMEM/F12 + neural growth factors (bFGF, human noggin, B27) with antibiotics (streptomycin/gentamicin) were used to culture mesenchymal stem cells and to differentiate them into neural cells, respectively. The characterization of MSCs was done by flow cytometry for the surface markers CD90, CD73, and CD105, and by a colony-forming unit assay. The differentiated neural cells will be characterized by fluorescence markers for neurons, astrocytes, and oligodendrocytes; by quantitative PCR for the genes Nestin and NeuroD1; and by Western blotting for the GAP43 protein. Result and discussion: MSCs of high quality and number were isolated from the human umbilical cord via the explant culture method. The obtained MSCs were differentiated into neural cells such as neurons, astrocytes, and oligodendrocytes.
The differentiated neural cells can be used to treat neural injuries and neural cell loss by delivering cells non-invasively via the cerebrospinal fluid (CSF) or blood. Moreover, the MSCs can also be delivered directly to injured sites, where they differentiate into neural cells. The human umbilical cord is therefore demonstrated to be an inexpensive and easily available source of MSCs. Moreover, hUCMSCs can be a potential source for neural cell therapies and neural cell regeneration for neural cell injuries and neural cell loss. This line of research will be helpful in treating and managing neural cell damage and neurodegenerative diseases such as Alzheimer's and Parkinson's. The study still has a long way to go, but it is a promising approach for many neural disorders for which no satisfactory management is currently available.
Keywords: bone marrow, cell therapy, explant culture method, flow cytometer, human umbilical cord, mesenchymal stem cells, neurodegenerative diseases, neuroprotective, regeneration
Procedia PDF Downloads 202
14215 MAS Capped CdTe/ZnS Core/Shell Quantum Dot Based Sensor for Detection of Hg(II)
Authors: Dilip Saikia, Suparna Bhattacharjee, Nirab Adhikary
Abstract:
In this work, we present the synthesis and characterization of CdTe/ZnS core/shell (CS) quantum dots (QDs). The CS QDs are used as a fluorescence probe to design a simple, cost-effective, and ultrasensitive sensor for the detection of toxic Hg(II) in an aqueous medium. Mercaptosuccinic acid (MSA) has been used as the capping agent for the synthesis of the CdTe/ZnS CS QDs. A photoluminescence quenching mechanism underlies the detection of Hg(II). The designed sensing technique shows a remarkably low detection limit of about 1 picomolar (pM). Here, the CS QDs are synthesized by a simple one-pot aqueous method and characterized using diagnostic tools such as UV-vis and photoluminescence spectroscopy, XRD, FTIR, TEM, and zeta potential analysis. The interaction between the CS QDs and Hg(II) ions results in quenching of the photoluminescence (PL) intensity of the QDs via excited-state electron transfer. The proposed mechanism is supported by cyclic voltammetry and zeta potential analysis. The designed sensor is found to be highly selective towards Hg(II) ions. Analysis of real samples such as drinking water and tap water has been carried out, and the CS QDs show remarkably good results. Using this simple sensing method, we have designed a prototype low-cost electronic device for the detection of Hg(II) in an aqueous medium. The experimental results of the designed sensor are cross-checked using AAS analysis.
Keywords: photoluminescence, quantum dots, quenching, sensor
Procedia PDF Downloads 266
14214 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations
Authors: Nanine Fouche
Abstract:
The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and the maximum allowable settlement is generally exceeded in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH, owing to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv′) were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv′) as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated sites included the suburbs of Athlone, Muizenberg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents.
Elastic settlement analysis was also undertaken for the Cape Flats sands using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv′ derived from the Atlantis data set revealed the best fit, with R² = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of the inversion routines used in the analysis of the CSW results, which show significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, for which the R² of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure. When comparing settlement prediction methods, both Stroud's method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb's method, which takes no account of strain level in the determination of soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width.
In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance
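A regression with a separate overburden term can be sketched as a log-linear least-squares fit of the form Vs = a · N60^b · (Pa/σv′)^c. The power-law form, the 100 kPa reference pressure, and the synthetic data below are assumptions for illustration, not the paper's fitted coefficients.

```python
import math

def fit_vs_model(records, pa=100.0):
    """Least-squares fit of ln(Vs) = ln(a) + b*ln(N60) + c*ln(Pa/sigma_v')."""
    X = [[1.0, math.log(n), math.log(pa / s)] for n, s, _ in records]
    y = [math.log(v) for _, _, v in records]
    # Normal equations: (X^T X) w = X^T y, a 3x3 linear system.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for k in range(col, 3):
                A[row][k] -= f * A[col][k]
            rhs[row] -= f * rhs[col]
    w = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):   # back substitution
        w[row] = (rhs[row] - sum(A[row][k] * w[k] for k in range(row + 1, 3))) / A[row][row]
    return math.exp(w[0]), w[1], w[2]   # a, b, c

# Synthetic, noise-free (N60, sigma_v' [kPa], Vs [m/s]) data from a=100, b=0.3, c=0.25.
data = [(n, s, 100.0 * n ** 0.3 * (100.0 / s) ** 0.25)
        for n in (5, 10, 20, 40) for s in (50.0, 100.0, 200.0)]
a, b, c = fit_vs_model(data)
print(round(a, 3), round(b, 3), round(c, 3))
```

Fitting in log space turns the power law into a linear model, which is the usual route for Vs-penetration-resistance correlations of this kind.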
Procedia PDF Downloads 175
14213 A Kernel-Based Method for MicroRNA Precursor Identification
Authors: Bin Liu
Abstract:
MicroRNAs (miRNAs) are small non-coding RNA molecules functioning in transcriptional and post-transcriptional regulation of gene expression. Discriminating real pre-miRNAs from false ones (such as hairpin sequences with similar stem-loops) is necessary for understanding the role of miRNAs in the control of cell life and death. Because of their small size and sequence specificity, discrimination cannot be based on sequence information alone but requires structure information about the miRNA precursor to achieve satisfactory performance. K-mers are convenient and widely used features for modeling the properties of miRNAs and other biological sequences. However, k-mers suffer from an inherent limitation: if the parameter K is increased to incorporate long-range effects, certain k-mers will appear rarely or not at all; as a consequence, most k-mers are absent and a few are present only once. Thus, statistical learning approaches using k-mers as features become susceptible to noisy data once K becomes large. In this study, we propose a gapped k-mer approach to overcome the disadvantages of k-mers and apply this method to the field of miRNA prediction. Combined with the structure status composition, a classifier called imiRNA-GSSC is proposed, and we show that it compares favorably to the original imiRNA-kmer and alternative approaches. Trained on human miRNA precursors, this predictor achieves an accuracy of 82.34% in predicting 4,022 pre-miRNA precursors from eleven species.
Keywords: gapped k-mer, imiRNA-GSSC, microRNA precursor, support vector machine
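The gapped k-mer idea can be sketched as follows: within each length-l window, only k positions are kept as informative letters and the rest are masked as gaps, so longer-range context is captured without the feature sparsity of full l-mers. The window length, gap enumeration, and demo sequence below are illustrative choices, not the exact feature set of imiRNA-GSSC.

```python
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, l=3, k=2):
    """Count gapped k-mers: length-l windows with k kept letters, gaps shown as '.'."""
    counts = Counter()
    for i in range(len(seq) - l + 1):
        window = seq[i:i + l]
        for kept in combinations(range(l), k):     # every choice of k informative positions
            key = "".join(window[j] if j in kept else "." for j in range(l))
            counts[key] += 1
    return counts

print(gapped_kmer_counts("ACGT"))
```

Each distinct masked pattern (e.g. "A.G") becomes one feature dimension, and the resulting count vector can be fed to an SVM in the same way as ordinary k-mer features.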
Procedia PDF Downloads 162
14212 Study on an Integrated Real-Time Sensor in Droplet-Based Microfluidics
Authors: Tien-Li Chang, Huang-Chi Huang, Zhao-Chi Chen, Wun-Yi Chen
Abstract:
Droplet-based microfluidics is used to form micro-reactors for chemical and biological assays. Hence, the precise addition of reagents into the droplets is essential for this function in lab-on-a-chip applications. To obtain the characteristics of droplets (size, velocity, pressure, and frequency of production), this study describes an integrated on-chip method of real-time signal detection. By controlling and manipulating the fluids, the flow behavior can be obtained in the droplet-based microfluidics. The detection method uses a type of infrared sensor. As droplets pass through the microfluidic devices, the real-time velocity and pressure conditions are obtained from the sensors. Here the microfluidic devices are fabricated from polydimethylsiloxane (PDMS). To measure the droplets, signal acquisition from the sensor and LabVIEW program control were established in the microchannel devices. The devices can generate droplets of different sizes, where the flow rate of the oil phase is fixed at 30 μl/hr and the flow rate of the water phase ranges from 20 μl/hr to 80 μl/hr. The experimental results demonstrate that the sensors are able to measure the transit-time difference of droplets at different velocities, at voltages from 0 V to 2 V. Consequently, a maximum droplet speed of 1.6 mm/s was measured, along with related flow behaviors that can be helpful in developing and integrating practical microfluidic applications.
Keywords: microfluidic, droplets, sensors, single detection
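From the time-difference measurement described above, droplet velocity and production frequency follow directly: v = d/Δt for two detection points a known distance apart, and the production frequency is the reciprocal of the mean interval between successive droplets. The sensor spacing and timestamps below are invented for illustration, not the study's recorded signals.

```python
def droplet_velocity(sensor_gap_mm, transit_time_s):
    """Velocity (mm/s) from the time a droplet takes to travel between two sensing points."""
    return sensor_gap_mm / transit_time_s

def production_frequency(arrival_times_s):
    """Droplets per second from the mean interval between successive arrivals."""
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return len(gaps) / sum(gaps)

print(droplet_velocity(0.8, 0.5))                         # hypothetical 0.8 mm gap
print(production_frequency([0.0, 0.25, 0.5, 0.75, 1.0]))  # hypothetical arrival times
```

In the real system these timestamps would come from threshold crossings in the infrared sensor signal captured by the LabVIEW acquisition loop.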
Procedia PDF Downloads 493
14211 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment
Authors: Asif Ali, Daud Salim Faruquie
Abstract:
With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Researchers have established this knowledge in four areas, viz. environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a huge volume of strong evidence of effectiveness has been found for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be established, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology: populations were unrestricted; interventions included popular sustainability practices such as walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environments; comparators included 'before' and 'after', 'with' and 'without', and 'more' and 'less'; and outcomes included social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The search of literature included major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected.
Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability, and families' social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture
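The pooled-effect calculation mentioned above can be sketched with a fixed-effect, inverse-variance meta-analysis, where each study is weighted by w = 1/SE². The effect sizes and standard errors below are invented for illustration, not the review's extracted data, and the review's own pooling model may differ.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]          # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return pooled, pooled_se

# Hypothetical standardized effects and standard errors from three studies.
effect, se = fixed_effect_pool([0.5, 0.3, 0.1], [0.1, 0.2, 0.25])
print(round(effect, 3), round(se, 3))
```

The pooled estimate and its standard error are what a forest plot summarizes with the diamond at the bottom of the study rows.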
Procedia PDF Downloads 401