Search results for: molecular dynamic
244 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign from malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consists of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1, default or original gray levels; N2, dynamic intensity limited to µ ± 3σ; and N3, intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme, using the well-known statistical methods implemented in Mazda. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the feature set was reduced to the 10 best and most effective features per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error combined with average correlation coefficients (POE+ACC). These features were analyzed under two standardization states, standard (S) and non-standard (NS), with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrate the influence of the normalization schemes and reduction methods on the discriminative power of the obtained features and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
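As a rough illustration of the classification stage described in this abstract, the sketch below reproduces the pipeline shape — standardization, reduction to 10 features, a discriminant projection, and a 1-NN classifier — with scikit-learn. The ANOVA F-score stands in for Mazda's Fisher/POE+ACC ranking, and the feature matrix is a random placeholder, not the study's texture data.

```python
# Sketch of the reported pipeline: standardize, keep the 10 most discriminative
# features, project with LDA, classify with 1-NN. f_classif (ANOVA F) is a
# stand-in for Mazda's Fisher / POE+ACC ranking; X and y are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 270))      # 70 patients x 270 texture features (placeholder)
y = np.array([0] * 26 + [1] * 44)   # 26 benign, 44 malignant

pipe = make_pipeline(
    StandardScaler(),                      # the "standard (S)" state
    SelectKBest(f_classif, k=10),          # reduce to the 10 best features
    LinearDiscriminantAnalysis(n_components=1),
    KNeighborsClassifier(n_neighbors=1),   # 1-NN classifier
)
y_pred = cross_val_predict(pipe, X, y, cv=5)
print(confusion_matrix(y, y_pred), accuracy_score(y, y_pred))
```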
Procedia PDF Downloads 280
243 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a “strange metal”, attributed to non-Fermi liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we wish to find out, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) Universality of α ~ 1; recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates; the link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation. It is found that steady state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this transition to Cu-3d9 (Cu2+) to Cu-3d10 (Cu+), known as the d9 − d10L transition; photoexcitation turns some Cu ions in the CuO2 planes into spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation, and superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we can vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
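The Planckian bound quoted above is easy to evaluate numerically; the short sketch below simply computes τ = ℏ/(αkBT) for α = 1 at a few temperatures, using the CODATA constants shipped with SciPy.

```python
# Planckian scattering time tau = hbar / (alpha * kB * T), evaluated at alpha = 1.
from scipy.constants import hbar, k as kB

alpha = 1.0
for T in (50, 100, 300):                # temperature in kelvin
    tau = hbar / (alpha * kB * T)       # inelastic scattering time in seconds
    print(f"T = {T:3d} K -> tau = {tau:.2e} s")   # ~2.5e-14 s at 300 K
```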
Procedia PDF Downloads 62
242 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies in a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, as the fire develops, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme adapted to focus solely on fire problems; validation of FDS applicability has been achieved in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
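For readers unfamiliar with the two coupling strategies, the control flow they imply can be sketched as below. Every function here is a hypothetical stub; this is not the FDS-2-ABAQUS API, only the exchange loop such a coupler has to implement.

```python
# Schematic of the two-way CFD-FEA coupling loop described above. Every helper
# is a hypothetical stub -- NOT the FDS-2-ABAQUS API, only the control flow.

def advance_cfd(t0, t1):          # stub: run FDS over [t0, t1], return thermal field
    return {"surface_T": 300.0 + 50.0 * t1}

def apply_thermal_loads(field):   # stub: map CFD temperatures onto the FEA mesh
    pass

def advance_fea(t0, t1):          # stub: run ABAQUS over [t0, t1], return deformation
    return {"max_deflection": 0.001 * t1}

def update_cfd_geometry(deform):  # stub: push deformed geometry back to the CFD domain
    pass                          # (a one-way analysis simply skips this step)

def run_two_way_coupling(t_end=600.0, dt_exchange=30.0):
    t = 0.0
    while t < t_end:
        thermal_field = advance_cfd(t, t + dt_exchange)   # 1. advance the fire model
        apply_thermal_loads(thermal_field)                # 2. exchange thermal data
        deformation = advance_fea(t, t + dt_exchange)     # 3. advance the structure
        update_cfd_geometry(deformation)                  # 4. two-way feedback step
        t += dt_exchange

run_two_way_coupling()
```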
Procedia PDF Downloads 90
241 Business Intelligence to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions
Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes
Abstract:
The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers, as it stimulates resource circularity in production and consumption systems. A large body of epistemological work has explored the principles of CE, but scant attention has focused on analysing how CE is evaluated, consented to, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis related to effective resource management. Addressing this limited concern, the present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally grouped under the headings of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; furthermore, ML is introduced to identify its place in data science and its differences from topics such as big data analytics, and the production problems for which AI and ML are suited are identified. A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are mainly directed at the contexts of large metropolises, neglecting rural territories; within this project, therefore, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will allow overcoming limitations related to the availability and reliability of data.
Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning
Procedia PDF Downloads 73
240 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet’s success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing the size and similarity of the dataset. Identified datasets include the pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
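The Dice Similarity Coefficient used to score the segmentations is DSC = 2|A∩B|/(|A|+|B|) for a predicted mask A and a ground-truth mask B; a minimal NumPy version is sketched below.

```python
# Dice Similarity Coefficient for binary masks: DSC = 2|A n B| / (|A| + |B|).
# A DSC of 1.0 is perfect overlap; 0.0 means no lesion voxels were detected,
# matching the bimodal scores reported for the USC dataset.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example on two 3x3 masks:
print(dice(np.eye(3), np.eye(3)))   # 1.0 (identical masks)
```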
Procedia PDF Downloads 97
239 Bacterial Diversity in Vaginal Microbiota in Patients with Different Levels of Cervical Lesions Related to Human Papillomavirus Infection
Authors: Michelle S. Pereira, Analice C. Azevedo, Julliane D. Medeiros, Ana Claudia S. Martins, Didier S. Castellano-Filho, Claudio G. Diniz, Vania L. Silva
Abstract:
Vaginal microbiota is a complex ecosystem, composed of aerobic and anaerobic bacteria living in a dynamic equilibrium. Lactobacillus spp. are predominant in the vaginal ecosystem, and factors such as immunity and hormonal variations may lead to disruptions, resulting in proliferation of opportunistic pathogens. Bacterial vaginosis (BV) is a polymicrobial syndrome, caused by an increase of anaerobic bacteria replacing Lactobacillus spp. Microorganisms such as Gardnerella vaginalis, Mycoplasma hominis, Mobiluncus spp., and Atopobium vaginae can be found in BV, which may also be associated with other infections such as Human Papillomavirus (HPV). HPV is highly prevalent in sexually active women and is considered a risk factor for the development of cervical cancer. Since few data are available on the vaginal microbiota of women with HPV-associated cervical lesions, our objective was to evaluate the diversity of the vaginal ecosystem in these women. For all patients, clinical and socio-demographic data were collected after gynecological examination. This study was approved by the Ethics Committee of the Federal University of Juiz de Fora, Minas Gerais, Brazil. Vaginal secretion and cervical scrapings were collected. Gram-stained smears were evaluated to establish the Nugent score for BV determination. The viral and bacterial DNA obtained were used as templates for HPV genotyping (PCR) and bacterial fingerprinting (REP-PCR). In total, 31 patients were included (mean age 35; 93.6% sexually active). The Nugent score showed that 38.7% had BV. From the medical records, Pap smear tests showed that 32.3% had low-grade squamous intraepithelial lesions (LSIL), 29% had high-grade squamous intraepithelial lesions (HSIL), 25.8% had atypical squamous cells of undetermined significance (ASC-US), and 12.9% had atypical squamous cells that cannot exclude a high-grade lesion (ASC-H). All participants were HPV+. HPV-16 was the most frequent type (87.1%), followed by HPV-18 (61.3%); HPV-31, HPV-52 and HPV-58 were also detected. HPV-16/HPV-18 coinfection was observed in 75%. In the 18-30 age group, HPV-16 was detected in 40%, and HPV-16/HPV-18 coinfection in 35%. HPV-16 was associated with 30% of ASC-H and 20% of HSIL patients. BV was observed in 50% of HPV-16+ participants and in 45% of HPV-16/HPV-18+ participants. Fingerprints of bacterial communities showed clusters with low similarity, suggesting high heterogeneity in the vaginal microbiota within the sampled group. Overall, the data are worrisome, since HPV types highly associated with cervical cancer risk were identified. The high microbial diversity observed may be related to the different levels of cellular lesions and the different physiological conditions of the participants (age, social behavior, education). Further prospective studies are needed to better address the correlations between BV and microbial imbalance in the vaginal ecosystem and the different cellular lesions in women with HPV infection. Supported by FAPEMIG, CNPq, CAPES, PPGCBIO/UFJF.
Keywords: human papillomavirus, bacterial vaginosis, bacterial diversity, cervical cancer
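For context, the Nugent score referenced above grades Gram-stained smears from 0 to 10 based on three morphotype counts. The sketch below encodes the scoring bands as we recall them from the standard table (Nugent et al., 1991); treat the thresholds as an assumption to be verified against the original publication, not as a clinical implementation.

```python
# Simplified Nugent scoring sketch; count bands are assumed, verify before use.
def band(avg_count_per_field):
    """Map an average morphotype count per oil-immersion field to a 0-4 band."""
    if avg_count_per_field == 0:
        return 0
    if avg_count_per_field < 1:
        return 1
    if avg_count_per_field <= 4:
        return 2
    if avg_count_per_field <= 30:
        return 3
    return 4

def nugent_score(lactobacilli, gardnerella, curved_rods):
    score = 4 - band(lactobacilli)          # fewer Lactobacillus morphotypes -> higher score
    score += band(gardnerella)              # Gardnerella/Bacteroides morphotypes
    score += (band(curved_rods) + 1) // 2   # curved rods (Mobiluncus) weighted 0-2
    return score

def interpret(score):
    if score >= 7:
        return "bacterial vaginosis"
    if score >= 4:
        return "intermediate"
    return "normal"

s = nugent_score(lactobacilli=0, gardnerella=35, curved_rods=6)
print(s, interpret(s))   # 4 + 4 + 2 = 10 -> bacterial vaginosis
```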
Procedia PDF Downloads 195
238 Investigation of Ground Disturbance Caused by Pile Driving: Case Study
Authors: Thayalan Nall, Harry Poulos
Abstract:
Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, driving piles can cause environmental problems, such as noise, ground movements, and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep, and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected together, rectangular in shape, and 2600 m x 800 m in size. CT1 was 400 m x 800 m in size and was located south of CT2, opposite its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2000 m of the sand bund in the western section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles. There were no plausible geological reasons to conclude that the marine mud was weaker across the CT1 region alone. Hence, it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.
Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening
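A screening-level feel for item (3), vibration transfer, can be had from the widely used empirical scaled-distance attenuation law PPV = k(d/√W)⁻ⁿ. The constants k and n below are arbitrary placeholders for illustration, not values back-figured from this site.

```python
# Screening-level peak particle velocity (PPV) from pile driving using the
# empirical scaled-distance law  PPV = k * (d / sqrt(W))**(-n), where d is
# distance and W the hammer energy per blow. k and n are site-specific
# placeholders, not parameters derived from the case-study site.
import math

def ppv_mm_per_s(distance_m, hammer_energy_kj, k=1.5, n=1.1):
    scaled_distance = distance_m / math.sqrt(hammer_energy_kj)
    return k * scaled_distance ** (-n)

# e.g. estimated PPV 100 m from a 90 kJ hammer blow:
print(f"{ppv_mm_per_s(100.0, 90.0):.3f} mm/s")
```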
Procedia PDF Downloads 237
237 Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development
Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng
Abstract:
Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach is implemented: 1) identification of the optimal sample preparation reagents using pre-mixed bacteria and healthy donor stool samples, coupled with the proprietary Sigma-Aldrich® bioinformatics solution; 2) exploratory analysis of patient samples to enable precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA 2002 and MSA 2003) across five different extraction kits (kits A, B, C, D and E). Both microbiome standards were extracted in triplicate with all extraction kits. Following isolation, DNA quantity was determined by Qubit assay. DNA quality was assessed to determine purity and to confirm that the extracted DNA was of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated via amplification of the V3/V4 hypervariable region of the 16S rRNA gene. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. Fastq files were analyzed using the Sigma-Aldrich® Microbiome Platform, a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by other external providers. Data Summary: The DNA yield using extraction kits D and E was below the limit of detection (100 pg/µl) of the Qubit assay, as both kits are intended for samples with low bacterial counts; the pre-mixed bacterial pellets at high concentrations, with inputs of 2 × 10⁶ cells for MSA-2002 and 1 × 10⁶ cells for MSA-2003, were not compatible with these kits. Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B provided the least (kit A/MSA-2002: 174.25 ± 34.98; kit A/MSA-2003: 179.89 ± 30.18; kit B/MSA-2002: 27.86 ± 9.35; kit B/MSA-2003: 23.14 ± 6.39; kit C/MSA-2002: 55.19 ± 10.18; kit C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The 3D PCoA visualization of the weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier; the kit A sequencing samples cluster more closely together than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrate that the DNA extraction method impacts DNA concentration, purity, and the microbial communities detected by next-generation sequencing analysis. A further comparison of microbiome analysis performance using healthy stool samples is underway, and colorectal cancer patient samples will be acquired to further explore clinical utility.
Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust, qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix for deciphering the deep biology and enabling precision medicine.
Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics
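A minimal sketch of the beta-diversity/PCoA step described above, using scikit-bio, is given below. Bray-Curtis is shown for brevity; the weighted UniFrac metric actually used in the study additionally requires a phylogenetic tree and OTU ids. The count table is a toy placeholder, not the study's data.

```python
# Beta diversity + PCoA ordination sketch with scikit-bio. Bray-Curtis shown;
# weighted UniFrac (as in the study) would also need a tree and OTU ids.
import numpy as np
from skbio.diversity import beta_diversity
from skbio.stats.ordination import pcoa

# Toy OTU count table: rows = samples (kit/standard replicates), cols = taxa.
counts = np.array([[10, 4, 0, 6],
                   [ 9, 5, 1, 7],
                   [ 0, 2, 8, 1]])
ids = ["KitA_rep1", "KitA_rep2", "KitB_rep1"]

dm = beta_diversity("braycurtis", counts, ids)   # sample-by-sample distance matrix
ordination = pcoa(dm)                            # principal coordinates analysis
print(ordination.proportion_explained[:3])       # variance explained per PCoA axis
```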
Procedia PDF Downloads 170
236 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach
Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake
Abstract:
Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics. To this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better suited in terms of levitation capabilities; however, this design presents some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components, and integrated circuits, non-contact, oil-free, ultra-precision, low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, of which an ultrasonic transducer (Langevin, FBI 28452 HS) comprises the main part. The other (design B) is a vibrating plate design, which consists of a plain rectangular aluminium plate firmly fastened at both ends. The size of the rectangular plate is 200 x 100 x 2 mm, and four round piezoelectric actuators of 28 mm diameter and 0.5 mm thickness are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane by a steel supporting structure. The dynamics of levitation using designs A and B have been investigated on the basis of squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), the latter being used to magnify the sine wave voltage produced by the signal generator. The measured maximum levitation for three different semiconductor wafers of weights 52, 70 and 88 g for design A is 240, 205 and 187 µm, respectively, whereas the physical results show that the average separation distance for a disk of 5 g weight for design B reaches 70 µm. By using the methodology of squeeze-film levitation, it is possible to hold an object in a non-contact manner. The analyses of the investigation outcomes signify that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of manufacturing. In order to identify an adequate non-contact SFL design, the comparison between these two common designs addresses the following issues: floating component geometries and material type constraints; the final created pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design.
Keywords: ANSYS, floating, piezoelectric, squeeze-film
Procedia PDF Downloads 149
235 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a structural building under lateral load (e.g., earthquake) will arise due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab has been considered to contribute less to increasing the stability of the structure, unless a special slab system such as the flat slab is taken into account. In this paper, the flat slab system of Sequis Tower, located in South Jakarta, is assessed for its performance under earthquake loading. The building includes a 6-floor basement where the flat slab system is applied. The flat slab system is the main focus of this paper and is compared with a conventional slab system in terms of performance under earthquake loading. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner ratio of this building is 43.21%, which exceeds the 15% allowable re-entrant corner stated in ASCE 7-05; based on that, horizontal irregularity is a concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is a system in which the slabs use drop panels with shear heads as their support instead of beams. Major advantages of flat slab application are a decreased structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load, whereas deflection at the middle strip and punching shear are problems to be considered in detail. Torsion usually appears when the dimensions of a structural member under flexure, such as a beam or column, are of improper ratio; adopting the flat slab as an alternative slab system keeps collapse due to torsion down. A common seismic load resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake loading. The eccentricity of the shear wall locations in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses, such as response spectrum and time history analysis, is suitable given the irregularities, so that the performance of the structure can be significantly observed. Response spectrum data for South Jakarta, where the PGA is 0.389 g, form the basis for the earthquake load idealization involved in the several load combinations stated in SNI 03-1726-2012. The analysis yields basic seismic parameters such as the period, displacement, and base shear of the system; besides, the internal forces of the critical members are presented. The predicted period of the structure under earthquake load is 0.45 s, but the period will differ as different slab systems are applied in the analysis. The flat slab system will probably perform better in terms of displacement compared to the conventional slab system, due to its higher stiffness contribution to the whole building system. In line with displacement, slab deflection will be smaller for the flat slab than for the conventional slab. Hence, shear walls are more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
Procedia PDF Downloads 191
234 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module
Authors: Helen Potkin
Abstract:
This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us in reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed. Drawing on students’ reflections on their experience of co-creation, alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The findings of the project could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom, and the potential and productive nature of creative practice in education. Overall, the paper interrogates what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible.
Keywords: co-creation, collaboration, learning, participation, risk
Procedia PDF Downloads 123
233 The Touch Sensation: Ageing and Gender Influences
Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani
Abstract:
A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensations, highlighting ageing and gender influences, with in vivo systems. The touch process can be divided into two main phases. The first phase is the first contact between the finger and the object; during this contact, an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the finger's mechanical properties, together with its surface topography, play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties, and the surface topography of the human finger, in vivo studies on the finger pulp of 40 subjects (20 of each gender) in four age groups (26±3, 35±3, 45±2 and 58±6 years) have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth, and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force; the main assessed parameter is the adhesive force F_ad. For the second phase, an innovative approach is first proposed to characterize the dynamic mechanical properties of the finger: a contactless indentation test inspired by techniques used in ophthalmology. The test principle is to blow an air blast onto the finger and measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the skin's free return without any outside influence. The main obtained parameters are the wave propagation speed and Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints have been analyzed by laser probe defocusing: a laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multiscale characterization of roughness has been realized by applying the continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men's and women's groups reveals that the adhesive force is higher for women. The mechanical property results show a Young's modulus that is higher for women and also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
Keywords: ageing, finger, gender, touch
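The multiscale SMA computation described above can be sketched in a few lines with PyWavelets: a continuous wavelet transform of a surface profile, then the arithmetic mean of the absolute coefficients at each scale. The profile below is synthetic, and the Morlet wavelet is an assumed choice, not necessarily the one used in the study.

```python
# Multiscale roughness sketch: CWT of a (synthetic) surface profile, then the
# arithmetic mean of the absolute coefficients per scale (SMA).
import numpy as np
import pywt

x = np.linspace(0.0, 5.0, 2048)   # position along the profile (mm)
profile = 0.3 * np.sin(40 * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(profile, scales, "morl")   # Morlet wavelet decomposition
sma = np.mean(np.abs(coeffs), axis=1)               # one SMA value per scale
for s, v in zip(scales[::16], sma[::16]):
    print(f"scale {s:2d}: SMA = {v:.4f}")
```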
Procedia PDF Downloads 265
232 Killing for the Great Peace: An Internal Perspective on the Anti-Manchu Theme in the Taiping Movement
Authors: Zihao He
Abstract:
The majority of existing studies on the Taiping Movement (1851-1864) have viewed its anti-Manchu attitudes as a nationalist agenda: the Taiping aimed to revolt against the Manchu government and establish a new political regime. To explain these aggressive and violent attitudes towards the Manchu, these studies mainly invoked socio-economic factors and stressed the status of “being deprived”. Even the “demon-slaying” narrative the Taiping used to dehumanize the Manchu tends to be viewed as a “religious tool” for achieving their political, nationalist aim. This paper argues that these studies of the Taiping’s anti-Manchu attitudes and behaviors are conducted from an external angle and have two major problems. Firstly, they distinguished “religion” from the “nationalist” or “political”, focusing on the “political” nature of the movement; “religion” and the religious experience within the Taiping were largely ignored. This paper argues that there was no separable and independent “religion” in the Taiping Movement standing in opposition to secular, nationalist politics. Secondly, these analyses held an external perspective on the Taiping’s anti-Manchu agenda: demonizing and killing the Manchu were viewed as purely political actions. In contrast, this paper focuses on the internal perspective of anti-Manchu narratives in the Taiping Movement. The method of this paper is mainly textual analysis, focusing on the official documents, edicts, and proclamations of the Taiping movement. It views the writing of the Taiping as a coherent narrative and rhetoric, which was attractive and convincing for its followers. In terms of the main findings, firstly, the internal and external perspectives on anti-Manchu violence differ. Externally, violence was viewed as a tool and a necessary step toward the political goal. Internally, however, in the Taiping’s writing, violence was a result of Godlessness, which would be resolved once faith in God was restored in China. Holding a framework of universal love among human beings as sons and daughters of the Heavenly Father, in which killing was forbidden, the Taiping excluded the Manchus from the family of human beings and demonized them: “demon-slaying” was not violence but was constructed as a necessary step toward the Great Peace. Moreover, the Taiping’s anti-Manchu violence was not merely “political”; rather, the category of “religion” and its binary opposite, the “secular”, are not suitable for the Taiping. A key point related to this argument is that the revolutionary violence against the Manchu government inherited the traditional “Heavenly Mandate” model. From an internal, theological perspective, anti-Manchuism was ordained and commanded by the Heavenly Father; the Manchu regime stood as a hindrance on the path toward God. Besides, the Manchu were not only viewed as a regime; they were also “demons”. Therefore, the paper examines how the Manchus were dehumanized in the Taiping’s writings and situated outside the consideration of nonviolence and love. Manchu as a regime and Manchu as demons stand in a dynamic relationship: as a regime, the Manchu government prevented Chinese people from worshipping the Heavenly Father, so they were demonized; as demons, killing Manchus during the revolt was justified and not viewed as contradicting the universal love among human beings.
Keywords: anti-Manchu, demon-slaying, heavenly mandate, religion and violence, the Taiping Movement
Procedia PDF Downloads 71
231 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in synch with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard min-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The LC model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
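To make the first echelon concrete, the sketch below solves the bin-packing step with first-fit decreasing, a classic approximation heuristic used here purely as an illustration, not as the authors' algorithm; workloads and the picker capacity are toy values.

```python
# First-echelon illustration: how many workstations are needed to host product
# workloads subject to a per-station picker capacity? First-fit decreasing (FFD)
# is a classic bin-packing approximation, used as a stand-in heuristic here.
def first_fit_decreasing(workloads, capacity):
    stations = []        # residual capacity of each opened station
    assignment = {}
    for item, load in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, residual in enumerate(stations):
            if load <= residual:
                stations[i] -= load
                assignment[item] = i
                break
        else:             # no existing station fits: open a new one
            stations.append(capacity - load)
            assignment[item] = len(stations) - 1
    return len(stations), assignment

n, plan = first_fit_decreasing({"A": 7, "B": 5, "C": 4, "D": 4, "E": 3}, capacity=10)
print(n, plan)   # -> 3 stations and the product-to-station assignment
```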
Procedia PDF Downloads 228
230 Linking the Genetic Signature of Free-Living Soil Diazotrophs with Process Rates under Land Use Conversion in the Amazon Rainforest
Authors: Rachel Danielson, Brendan Bohannan, S.M. Tsai, Kyle Meyer, Jorge L.M. Rodrigues
Abstract:
The Amazon Rainforest is a global diversity hotspot and crucial carbon sink, but approximately 20% of its total extent has been deforested, primarily for the establishment of cattle pasture. Understanding the impact of this large-scale disturbance on soil microbial community composition and activity is crucial for anticipating potentially consequential shifts in nutrient or greenhouse gas cycling, as well as adding to the body of knowledge concerning how these complex communities respond to human disturbance. In this study, surface soils (0-10 cm) were collected from three forests and three 45-year-old pastures in Rondônia, Brazil (the Amazon state with the greatest rate of forest destruction) in order to determine the impact of forest conversion on microbial communities involved in nitrogen fixation. Soil chemical and physical parameters were paired with measurements of microbial activity and genetic profiles to determine how community composition and process rates relate to environmental conditions. Measuring both the natural abundance of 15N in total soil N and the incorporation of enriched 15N2 under incubation has revealed that conversion of primary forest to cattle pasture results in a significant increase in the rate of nitrogen fixation by free-living diazotrophs. Quantification of nifH gene copy numbers (an essential subunit encoding the nitrogenase enzyme) correspondingly reveals a significant increase in gene copies in pasture compared to forest soils. Additionally, genetic sequencing of both nifH genes and transcripts shows a significant increase in the diversity of the present and metabolically active diazotrophs within the soil community. Levels of both organic and inorganic nitrogen tend to be lower in pastures compared to forests, with ammonium rather than nitrate as the dominant inorganic form. However, no significant or consistent differences in total, extractable, permanganate-oxidizable, or loss-on-ignition carbon are present between the two land-use types. Forest conversion is associated with a 0.5-1.0 unit pH increase, but concentrations of many biologically relevant nutrients, such as phosphorus, do not increase consistently. Increases in free-living diazotrophic community abundance and activity appear to be related to shifts in the ratios of the carbon and nitrogen pools. Furthermore, there may be an important impact of transient, low molecular weight, plant-root-derived organic carbon on free-living diazotroph communities not captured in this study. Preliminary analysis of nitrogenase gene variant composition using NovoSeq metagenomic sequencing indicates that conversion of forest to pasture may significantly enrich vanadium-based nitrogenases; this indication is complemented by a significant decrease in available soil molybdenum. Very little is known about the ecology of diazotrophs utilizing vanadium-based nitrogenases, so further analysis may reveal important environmental conditions favoring their abundance and diversity in soil systems. Taken together, the results of this study indicate a significant change in nitrogen cycling and diazotroph community composition with the conversion of the Amazon Rainforest. This may have important implications for the sustainability of cattle pastures once established, since nitrogen is a crucial nutrient for forage grass productivity.
Keywords: free-living diazotrophs, land use change, metagenomic sequencing, nitrogen fixation
Procedia PDF Downloads 195
229 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management in the field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth values based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and an associated digital elevation model (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks: one network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making with GIS for localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
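The dual-encoder fusion idea can be sketched compactly in PyTorch: one encoder for the optical/radar bands, one for the DEM, concatenated feature maps, and a decoder that regresses dense depth. Layer counts and channel widths below are illustrative placeholders, not the authors' architecture.

```python
# Illustrative imagery-DEM fusion network: two encoders, concatenated feature
# maps, one decoder producing a dense depth map. Sizes are placeholders.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class FusionDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_encoder = nn.Sequential(conv_block(4, 32), conv_block(32, 64))   # optical + radar bands
        self.dem_encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 32))   # DEM features
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(96, 48, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(48, 24, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(24, 1, 3, padding=1),   # dense depth map
        )

    def forward(self, imagery, dem):
        fused = torch.cat([self.img_encoder(imagery), self.dem_encoder(dem)], dim=1)
        return self.decoder(fused)

net = FusionDepthNet()
depth = net(torch.randn(1, 4, 128, 128), torch.randn(1, 1, 128, 128))
print(depth.shape)   # torch.Size([1, 1, 128, 128])
```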
Procedia PDF Downloads 12
228 Elastoplastic Modified Stillinger Weber-Potential Based Discretized Virtual Internal Bond and Its Application to the Dynamic Fracture Propagation
Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako
Abstract:
The failure of material usually involves elastoplastic deformation and fracturing. Continuum mechanics can effectively deal with plastic deformation by using a yield function and the flow rule; at the same time, it has some limitations in dealing with the fracture problem, since it is a theory based on the continuous field hypothesis. The lattice model can simulate the fracture problem very well but is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers material to be composed of bond cells. Each bond cell may have any geometry with a finite number of bonds. A two-body or multi-body potential can characterize the strain energy of a bond cell. The two-body potential leads to a fixed Poisson ratio, while the multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts: a two-body part that describes the interatomic interactions between particles, and a three-body part that represents the bond angle interactions between particles. Because the SW interaction can represent both bond stretch and bond angle contributions, the SW potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. This is done by reducing the bond stiffness to a lower level once the bond reaches the yielding point; before the bond reaches the yielding point, the bond is elastic. When the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value, and when unloaded, irreversible deformation occurs. When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macroscopic fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, plasticity and fracture are unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle; the bond angle can bear a moment as long as the bond angle increment is smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and the fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and tunnel plastic deformation. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified Stillinger-Weber potential
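The two-body bond law described above — elastic up to yield, softened stiffness beyond it, failure at a critical stretch — can be written down directly. The sketch below uses illustrative parameter values and omits the plastic-strain bookkeeping that a full unloading path would need.

```python
# Sketch of the elastoplastic two-body bond law: linear-elastic up to the yield
# stretch, a reduced (softened) stiffness beyond it, and bond failure once the
# stretch reaches the critical failure value. Parameter values are illustrative,
# not calibrated to any material; unloading/plastic-strain tracking is omitted.
def bond_force(stretch, k_elastic=1.0, k_plastic=0.2,
               yield_stretch=0.01, failure_stretch=0.05):
    if stretch >= failure_stretch:
        return 0.0                                   # bond has failed
    if stretch <= yield_stretch:
        return k_elastic * stretch                   # elastic branch
    # softened branch: stiffness reduced after yielding
    return k_elastic * yield_stretch + k_plastic * (stretch - yield_stretch)

for s in (0.005, 0.02, 0.06):
    print(f"stretch {s:.3f} -> force {bond_force(s):.4f}")
```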
Procedia PDF Downloads 99
227 Al2O3-Dielectric AlGaN/GaN Enhancement-Mode MOS-HEMTs by Using Ozone Water Oxidization Technique
Authors: Ching-Sung Lee, Wei-Chou Hsu, Han-Yin Liu, Hung-Hsi Huang, Si-Fu Chen, Yun-Jung Yang, Bo-Chun Chiang, Yu-Chuang Chen, Shen-Tin Yang
Abstract:
AlGaN/GaN high electron mobility transistors (HEMTs) have been intensively studied due to their intrinsic advantages of high breakdown electric field, high electron saturation velocity, and excellent chemical stability. They are also suitable for ultra-violet (UV) photodetection due to the corresponding wavelengths of the GaN bandgap. To improve the optical responsivity by decreasing the dark current caused by gate leakage problems and limited Schottky barrier heights in GaN-based HEMT devices, various metal-oxide-semiconductor HEMTs (MOS-HEMTs) have been devised by using atomic layer deposition (ALD), molecular beam epitaxy (MBE), metal-organic chemical vapor deposition (MOCVD), liquid phase deposition (LPD), and RF sputtering. The gate dielectrics include MgO, HfO2, Al2O3, La2O3, and TiO2. In order to provide complementary circuit operation, enhancement-mode (E-mode) devices have lately been studied using techniques of fluorine treatment, p-type cap layers, piezo-neutralization layers, and MOS-gate structures. This work reports an Al2O3-dielectric Al0.25Ga0.75N/GaN E-mode MOS-HEMT design using a cost-effective ozone water oxidization technique. The present ozone oxidization method has the advantages of low-cost processing facilities, processing simplicity, compatibility with device fabrication, and room-temperature operation under atmospheric pressure. It can further reduce the gate-to-channel distance and improve the transconductance (gm) gain for a specific oxide thickness, since the formation of the Al2O3 consumes part of the AlGaN barrier at the same time. The epitaxial structure of the studied devices was grown by using the MOCVD technique. On a Si substrate, the layer structure includes a 3.9 μm C-doped GaN buffer, a 300 nm GaN channel layer, and a 5 nm Al0.25Ga0.75N barrier layer. Mesa etching was performed to provide electrical isolation by using an inductively coupled plasma reactive ion etcher (ICP-RIE). Ti/Al/Au were thermally evaporated and annealed to form the source and drain ohmic contacts. The device was immersed in the H2O2 solution pumped with ozone gas generated by using an OW-K2 ozone generator. Ni/Au were deposited as the gate electrode to complete device fabrication of the MOS-HEMT. The formed Al2O3 oxide thickness is 7 nm, and the remaining AlGaN barrier thickness is 2 nm. A reference HEMT device has also been fabricated on the same epitaxial structure for comparison. The gate dimensions are 1.2 × 100 µm² with a source-to-drain spacing of 5 μm for both devices. The dielectric constant (k) of Al2O3 was characterized to be 9.2 by using C-V measurement. Reduced interface state density after oxidization has been verified by the low-frequency noise spectra, Hooge coefficients, and pulsed I-V measurement. Improved device characteristics at temperatures of 300 K-450 K have been achieved for the present MOS-HEMT design. Consequently, Al2O3-dielectric Al0.25Ga0.75N/GaN E-mode MOS-HEMTs made by using the ozone water oxidization method are reported. In comparison with a conventional Schottky-gate HEMT, the MOS-HEMT design has demonstrated excellent enhancements of 138% (176%) in gm,max, 118% (139%) in IDS,max, 53% (62%) in BVGD, and a 3 (2)-order reduction in IG leakage at VGD = -60 V at 300 (450) K. This work is promising for millimeter-wave integrated circuit (MMIC) and three-terminal active UV photodetector applications.
Keywords: MOS-HEMT, enhancement mode, AlGaN/GaN, passivation, ozone water oxidation, gate leakage
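A hypothetical back-of-envelope companion to the C-V extraction mentioned above, treating the gate stack as a simple parallel plate (which neglects the series capacitance of the remaining 2 nm AlGaN barrier); the "measured" capacitance is a stand-in value derived from the reported k = 9.2, the 7 nm oxide, and the 1.2 × 100 µm² gate.

eps0 = 8.854e-12                 # vacuum permittivity, F/m
t_ox = 7e-9                      # Al2O3 thickness from the abstract, m
area = 1.2e-6 * 100e-6           # 1.2 um x 100 um gate, m^2

# forward: the accumulation capacitance a k = 9.2 film would give
C_ox = 9.2 * eps0 * area / t_ox
print(f"expected oxide capacitance: {C_ox * 1e15:.2f} fF")

# inverse: recovering k from a (hypothetical) C-V reading, k = C * t_ox / (eps0 * A)
C_measured = C_ox                # stand-in for the measured value
k = C_measured * t_ox / (eps0 * area)
print(f"extracted dielectric constant k = {k:.2f}")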
Procedia PDF Downloads 263
226 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer
Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu
Abstract:
Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatments. However, due to the marked resistance of tumor cells to radiotherapy, high doses of ionizing radiation are required, which causes serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical question. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing activity, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or absent toxicity, intrinsic fluorescence, and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that specifically binds mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can therefore serve as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy, and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its intrinsic fluorescence. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitizing effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. Annexin V-FITC/PI double-staining flow cytometry was utilized to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm; UV-visible absorption spectroscopy verified its successful construction, and it shows good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted and fluoresced within them. The uptake of NC-T5-5TR1 in the tumor area reached its peak at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. The apoptosis level under NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.
Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer
Procedia PDF Downloads 62
225 Iraqi Women’s Rights Under State Civil Law and Conservative Influences: A Study of Legal Documents and Social Implementation
Authors: Rose Hattab
Abstract:
Women have been an important dynamic in the religious context and the state-building process of Arab countries throughout history. During the 1970s, as the movement for women’s activism and rights developed, the Iraqi state under the Ba’ath Party began to provide Iraqi women with legal and civil rights. This was done to liberate women from the grasp of social traditions and was a tangible espousal of equality between men and women in the process of nation-building. Whereas women’s rights were stronger and more supported throughout the earliest years of the Ba’ath regime (1970-1990), the aftermath of the Gulf War and economic sanctions on Iraqi society laid the foundation for a division of women’s rights between civil and religious authorities. Personal status codes that were secured in 1959 were pushed back by amendments made in coordination with religious leaders. Civil laws were present on paper, but religious authority took prominence in practice. The written legal codes were inclusive of women’s rights, but there is no active or ensured practice of these rights within Iraqi society. This is due to many different factors, such as religious, sectarian, political, and conservative ones, that hold back or limit the ability of Iraqi women to have autonomy in aspects such as participation in the workforce, getting married, and ensuring social justice. This paper argues that the Personal Status Code introduced in 1959 – which replaced Sharia-run courts with personal status courts – provided Iraqi women with equality and increased mobility in social and economic dynamics. The statewide crisis felt after the Gulf War and the economic sanctions imposed by the United Nations led to a stark shift in the Ba’ath Party’s political ideology. This ideological turn guided the social system to the embrace of social conservatism and religious traditions in the 1990s. The effect of this implementation continued after the establishment of a new Iraqi government during 2003-2005. Consequently, Iraqi women's rights in employment, marriage, and family became divided between paper and practice, between religious authorities and civil law, from that period to the present day. This paper also contributes to the literature by expanding on the gap between legal codes on paper and in practice, through providing an analysis of Iraqi women’s rights in the Iraqi Constitution of 2005 and Iraq’s Penal Code. The turn to conservative and religious traditions is derived from the multiplicity of identities that make up the Iraqi social fabric. In the aftermath of a totalitarian regime, active wars, and economic sanctions, the Iraqi people attempted to unite across their different identities to create a sense of security in the midst of violence and chaos. This is not an excuse to diminish the importance of women’s rights, but in the process of building a new nation-state, women were lost from the narrative. Thus, the presence of gender equity is found in the written text but is not practiced and upheld in the social context.
Keywords: civil rights, Iraqi women, nation building, religion and conflict
Procedia PDF Downloads 143
224 Improving Efficiency of Organizational Performance: The Role of Human Resources in Supply Chains and Job Rotation Practice
Authors: Moh'd Anwer Al-Shboul
Abstract:
Jordan Customs (JC) was established to achieve objectives consistent with the guidance of the country's leadership and its aspirations for the future. It has therefore developed the tools needed to provide a distinguished service, simplifying work procedures and using modern technologies. A supply chain (SC) consists of all parties involved, directly or indirectly, in fulfilling a customer request, including manufacturers, suppliers, shippers, retailers, and even customer brokers. Within each firm, the SC includes all functions involved in receiving and filling a customer's request; one of the main functions is customer service. JC and global SCs are evolving into dynamic environments, which require flexibility, effective communication, and team management. Thus, human resources (HR) insight in these areas is critical for the effective development of global process networks. The importance of HR has increased significantly because the contribution of employees depends on their knowledge, competencies, abilities, skills, and motivations. Strategic planning in JC began at the end of the 1990s, including an operational strategy for Human Resource Management and Development (HRM&D). However, a major transformation in human resources happened at the end of 2006: new employee regulations for customs were prepared, approved, and applied at the end of 2007. As a result, many employees lost their positions, while others were selected based on a professional recruitment and selection process (bringing in new blood). One of several policies applied by the HR directorate at JC is job rotation. From the researcher's point of view, it was not placed on a scientific basis that would achieve its goals and objectives, which ultimately has a significant negative impact on organizational performance (OP) and yields a weak job rotation approach. The purpose of this study is to call attention to the need to re-review the process and procedure of job rotation that the HRM directorate currently applies at JC. Furthermore, it presents an overview of managing HR in the SC network as a factor affecting its success. The research methodology employed in this study is qualitative: a few interviews were conducted with managers, internal employees, and external clients, and the related literature was reviewed to collect qualitative data from secondary sources. The study argues that conducting frequent and unstructured job rotation (i.e., monthly) will have a significant negative impact on JC performance as a whole. The results show that the main impacts affect three main elements of JC: (1) internal employees' performance; (2) external clients dealing with customs services; and (3) JC performance as a whole. In order to implement a successful job rotation technique at JC in a scientific way and to achieve its goals and objectives, JC should take into consideration the solutions and recommendations presented in this study.
Keywords: efficiency, supply chain, human resources, job rotation, organizational performance, Jordan customs
Procedia PDF Downloads 213
223 Benefits of the ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study
Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi
Abstract:
Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy against neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., one of the body's own endocannabinoid-like compounds playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective against OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the Care and Use of Animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, while the left knee was injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days, at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, and PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14, and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF), and matrix metalloproteinases 1, 3, and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by Bonferroni post-hoc tests for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function already on day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all experiments at all time points. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine
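A minimal sketch of the statistical pipeline named above, one-way ANOVA followed by Bonferroni-corrected pairwise comparisons, using SciPy; the paw-withdrawal-threshold numbers are invented placeholders, not the study's data.

import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical paw withdrawal thresholds (g), n = 10 rats per group
groups = {
    "vehicle": rng.normal(20, 3, 10),
    "PGA":     rng.normal(30, 3, 10),
    "PGA-cur": rng.normal(35, 3, 10),
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3g}")

# Bonferroni post-hoc: scale each pairwise p-value by the number of comparisons
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: adjusted p = {min(1.0, p_raw * len(pairs)):.3g}")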
Procedia PDF Downloads 115
222 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion
Authors: Ali Kadir, O. Anwar Beg
Abstract:
Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the inner layer, titanium (Ti) alloy for the middle layer, and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width) and 10 mm (height), with three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis, and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study was conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loads of up to 1000 N and thermal environments of up to 1000 K were imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with one side of the model fixed while the opposite side was subjected to a tabular force of 500 or 1000 newtons. Equivalent elastic strain, total deformation, equivalent stress, and strain energy were computed for all cases. Each analysis was duplicated twice, removing one of the layers each time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structure. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows the injection of continuous, uniform air particles onto the model, thereby enabling calculation of the corrosion factor caused by hot air injection (particles prescribed a 5 m/s velocity and a 1273.15 K temperature). Extensive visualization of results is provided. The simulations reveal interesting features of the coating response to realistic gas turbine loading conditions, including significantly different stress concentrations for the different coatings.
Keywords: thermal coating, corrosion, ANSYS FEA, CFD
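As a hedged sanity check on the simulated stress levels, the classical fully-constrained thermal stress estimate sigma = E * alpha * deltaT can be evaluated per layer; the room-temperature property values below are rough textbook assumptions, not the paper's inputs.

# (Young's modulus in Pa, coefficient of thermal expansion in 1/K) -- assumed
layers = {
    "structural steel": (200e9, 12e-6),
    "Ti alloy":         (110e9, 8.6e-6),
    "SiC":              (410e9, 4.0e-6),
}
dT = 1000 - 293  # heating from ambient to ~1000 K, as in the simulations

for name, (E, alpha) in layers.items():
    sigma = E * alpha * dT  # stress if the layer is fully constrained
    print(f"{name:16s}: ~{sigma / 1e6:6.0f} MPa")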
Procedia PDF Downloads 136
221 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication
Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons
Abstract:
The recent increase in the application of Additive Manufacturing (AM) of products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in 3D printing. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and -built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to address many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separate build stations with up to 5 axes of motion replace traditional horizontally sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymer (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop-watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system, to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability, and degradation behavior, in order to find the optimum annealing conditions after printing. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, track resistance values remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our AM printer can fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional, saleable electronic products from a small-format modular platform. It will make 3D printing better, faster, and stronger.
Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture
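The track resistance measurements described above follow the simple relation R = rho * L / A; the sketch below evaluates it for an assumed track geometry and an assumed effective resistivity of sintered particulate silver (several times the bulk value), neither of which is taken from the paper.

rho = 5e-8                      # ohm*m, assumed effective resistivity of sintered Ag ink
length = 0.05                   # 50 mm printed track (assumed)
width, thickness = 1e-3, 20e-6  # 1 mm wide, 20 um thick cross-section (assumed)

R = rho * length / (width * thickness)  # R = rho * L / A
print(f"expected track resistance: {R:.2f} ohm")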
Procedia PDF Downloads 168
220 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases
Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner
Abstract:
Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions so as to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence, and arguments demonstrating that packages meet the regulations; once a report is approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This approach adheres to basic safety principles, is very pessimistic, and is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty about the releasable fraction of material within the package ullage space, because the waste may take many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For these reasons, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well-understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to further reduce the release to the environment (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations
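Restricting the dynamic aerosol balance equation to the two retained processes gives, for a well-mixed monodisperse aerosol, dN/dt = -K N^2 - (v_s/H) N, where the first term is Brownian coagulation and the second gravitational settling. The sketch below integrates this with SciPy; the kernel, settling velocity, settling height, and initial concentration are assumed placeholder values, not figures from the safety case.

import numpy as np
from scipy.integrate import solve_ivp

K = 5e-16    # m^3/s, Brownian coagulation kernel for ~1 um particles (assumed)
v_s = 3e-5   # m/s, Stokes settling velocity (assumed)
H = 0.5      # m, effective settling height of the package ullage (assumed)
N0 = 1e12    # particles/m^3, initial airborne concentration (assumed)

def balance(t, N):
    # coagulation removes number concentration quadratically,
    # settling removes it linearly (well-mixed assumption)
    return -K * N**2 - (v_s / H) * N

sol = solve_ivp(balance, (0, 3600), [N0], dense_output=True)
for t in (0, 600, 1800, 3600):
    print(f"t = {t:5d} s: N = {sol.sol(t)[0]:.3e} /m^3")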
Procedia PDF Downloads 117
219 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions
Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa
Abstract:
The large amount of space debris now in orbit constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption, low-weight system has been created. During a series of tests, the system has shown reliably high efficiency. The PW-Sat2 deorbit system is a square-shaped sail covering an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power: it is based on a thermal knife that burns through the Dyneema wire holding the system closed before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during release. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide ultimate and extensive knowledge about deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will afterwards be compared and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of 4 C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it is possible to go with enlargement when creating systems for bigger satellites.
Keywords: cubesat, deorbitation, sail, space, debris
Procedia PDF Downloads 292
218 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes
Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert
Abstract:
In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of deep geothermal reservoir permeability reduction due to fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). To first identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized by the flow in the pore network than chlorite. Thus, based on these results, illite particles were prepared and used in core-flooding experiments in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters, such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of aggregates, were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration, and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 can be observed at pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray tomography, showing that the particle deposition is non-uniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments
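A minimal sketch of the DLVO framework invoked above: the sphere-sphere interaction energy as unretarded van der Waals attraction plus a linearized electric double-layer repulsion. The Hamaker constant, radius, surface potential, and Debye length are illustrative assumptions for illite-like colloids; raising the ionic strength shortens the Debye length and collapses the energy barrier, consistent with the destabilization observed at high salinity.

import numpy as np

A_H = 1e-20             # J, Hamaker constant (assumed)
a = 50e-9               # m, particle radius (~100 nm hydrodynamic diameter)
eps = 78.5 * 8.854e-12  # permittivity of water, F/m
kT = 1.381e-23 * 298    # thermal energy at 25 C, J
psi = -0.030            # V, surface potential (assumed)
kappa = 1.0 / 3e-9      # 1/m, inverse Debye length at ~10 mM ionic strength (assumed)

h = np.linspace(0.3e-9, 30e-9, 400)  # surface-to-surface separation, m
V_vdw = -A_H * a / (12 * h)                                # van der Waals attraction
V_edl = 2 * np.pi * eps * a * psi**2 * np.exp(-kappa * h)  # double-layer repulsion
V_tot = (V_vdw + V_edl) / kT

print(f"energy barrier: {V_tot.max():.1f} kT")  # shrinks as ionic strength increases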
Procedia PDF Downloads 178
217 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria
Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde
Abstract:
An investigation was carried out on underground water system dynamics within the Ilorin metropolis to monitor subsurface flow and the corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many of its cities, constitutes a threat to underground water bodies. Ilorin city, a transboundary town in southwestern Nigeria, is a ready microcosm of Africa's unique challenge. In spite of the fact that groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, groundwater samples have nevertheless been noted to possess relatively high concentrations of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate subsurface dynamics and the corresponding pollution indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis. A regression model was also developed, aimed at finding functional relationships that can adequately describe the behaviour of water quality parameters and the hypothetical landfill-related factors that may influence them, namely: distance of the water source from dumpsites, static water level of the groundwater, subsurface permeability (inferred from the hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. Pollution in the study area generally assumed a bimodal pattern, with the major concentration of chemical pollutants in the underground watershed and the recharge area. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system in the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at short distances. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area.
Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component
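A hedged sketch of the statistical treatment named above, PCA plus a multiple linear regression of one water-quality parameter on the four landfill-related factors, run on synthetic stand-in data with scikit-learn, since the study's measurements are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 44  # matches the number of sampling points
# columns: distance to dumpsite (m), static water level (m),
# hydraulic gradient (-), soil infiltration rate (mm/h) -- all synthetic
X = rng.normal([500, 10, 0.01, 15], [200, 3, 0.004, 5], size=(n, 4))
# synthetic conductivity that decreases with distance from the dumpsite
y = 2000 - 1.5 * X[:, 0] + rng.normal(0, 150, n)

Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

reg = LinearRegression().fit(X, y)
print("R^2:", round(reg.score(X, y), 3), "coefficients:", np.round(reg.coef_, 3))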
Procedia PDF Downloads 117
216 Brand Building in Higher Education: A Grounded Theory Investigation of the Impact of the ‘Positive-Visualization-Course in Brand Identity’ upon Freshmen Students’ Perception
Authors: Maria Kountouridou, Dino Domic
Abstract:
Within an increasingly competitive and dynamic environment, the higher education sector is becoming more commodified, with the concept of branding becoming exceedingly imperative and an inextricable ingredient of a university’s success. Branding in higher education has proven to be an effective strategy that has received considerable attention in recent years, and a growing number of articles have begun to appear in the literature. However, a clear void in the literature confirms that the concept of students’ perceptions of a university’s brand image has not been researched extensively. An investigation of this central concept is of paramount importance since it will facilitate the development of an inductively generated theoretical model concerning branding in higher education. This research focuses on examining the impact of the ‘positive-visualization-course in brand identity’ upon the perception of freshmen students towards a university’s brand image. A grounded theory methodology was selected, consisting of semi-structured interviews. Forty-two students participated in the research, among them twenty-five women and seventeen men. The sample was identified through the snowball sampling technique. The participants were divided into two groups (experimental and control group) after the researcher had taken into consideration the factor ‘program of study’, to eliminate any possible interaction between the participants of each group. An experiment was carried out in which a ‘positive-visualization-course in brand identity’ was conducted among the participants of the experimental group, while the participants of the control group were not exposed to the course. For the purpose of this research, the term ‘positive-visualization-course in brand identity’ refers to a course where brand history, past achievements/recognitions/awards, values, and mission are presented. Prior to the course implementation, face-to-face semi-structured interviews were carried out with the participants of both groups, with the aim of examining freshmen students’ perceptions of the university’s brand image. One week after the course, the researcher carried out semi-structured interviews with the participants of the experimental group only, in order to identify whether students’ perceptions had been affected by the course. Four months after the course, semi-structured interviews were carried out with the participants of both groups. Eight months after the course, semi-structured interviews were conducted with the aim of identifying the freshmen students’ updated perceptions. Data were analyzed using substantive coding (open and selective coding), theoretical coding, field memos, and constant comparative analysis. The findings strongly suggest that the ‘positive-visualization-course in brand identity’ can positively affect freshmen students’ perceptions of a university’s brand image. Additionally, other factors conduce to the formation of perception over the months. This study contributes to and expands upon the existing literature by presenting an inductively generated theoretical model to guide future research on the links between the ‘positive-visualization-course in brand identity’ and the perception of freshmen students towards a university’s brand image.
Keywords: brand image, brand name, branding, higher education marketing, perception
Procedia PDF Downloads 178
215 Study on Changes of Land Use Impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda
Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi
Abstract:
Human activities cause land use and land cover to change or transition gradually. In this study, we examined the use of Landsat TM data to detect land use change in Kigali between 1987 and 2009, applying remote sensing techniques and analyzing the data with ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999, and 2009; changed areas were identified, revealing a dynamic land use situation in Kigali city over the 22 years studied. Using the relevant Landsat data, the research focused on land use change in relation to the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period from 1999 to 2009 there was a reduction in both built-up land and vegetation: after the authority of Kigali city established a Master Plan, all constructions that were not within the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve its economy, which will increase population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and constraint specifications. To achieve the above purpose, the Government has set out, for the overall planning of Kigali city, different stages with detailed descriptions of the design, strategy, and action plan that will guide Kigali planners and members of the public in the future to produce more detailed regional plans and practical measures. Thus, land use change is significantly a record of Kigali's active human areas, and it plays an important role in informing the country's decisions. Another aspect to take into account is the natural situation of Kigali city: agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the constructed area will gradually rise and speed up the process of urbanization. As a developing country, Rwanda's population continues to grow, land utilization rates are low, and urbanization remains limited. As mentioned earlier, the 1994 genocide massacres, population growth, and urbanization processes have been the factors driving the dramatic changes in land use. Further research would focus on the analysis of Rwanda's natural resources and the social and economic factors that could be driving forces of land use change.
Keywords: land use change, urbanization, Kigali City, Landsat
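The change-detection step described above amounts to cross-tabulating the two classified rasters into a land-use transition matrix; a minimal NumPy sketch follows, with random stand-in arrays in place of the real 1987/2009 classifications.

import numpy as np

classes = ["bare soil", "built up", "wetland", "water", "vegetation", "others"]
k = len(classes)

rng = np.random.default_rng(0)
lu_1987 = rng.integers(0, k, size=(100, 100))  # stand-in classified raster, 1987
lu_2009 = rng.integers(0, k, size=(100, 100))  # stand-in classified raster, 2009

# transition matrix: rows = 1987 class, columns = 2009 class (pixel counts)
change = np.zeros((k, k), dtype=int)
np.add.at(change, (lu_1987.ravel(), lu_2009.ravel()), 1)

for i, name in enumerate(classes):
    lost = change[i].sum() - change[i, i]
    print(f"{name:10s}: {change[i, i]:5d} px unchanged, {lost:5d} px converted")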
Procedia PDF Downloads 309