Search results for: distribution patterns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7548


5898 Applying Concept Mapping to Explore Temperature Abuse Factors in the Processes of Cold Chain Logistics Centers

Authors: Marco F. Benaglia, Mei H. Chen, Kune M. Tsai, Chia H. Hung

Abstract:

As societal and family structures, consumer dietary habits, and awareness about food safety and quality continue to evolve in most developed countries, the demand for refrigerated and frozen foods has been growing, and the issues related to their preservation have gained increasing attention. A well-established cold chain logistics system is essential to avoid any temperature abuse; therefore, assessing potential disruptions in the operational processes of cold chain logistics centers becomes pivotal. This study first employs Hazard Analysis and Critical Control Points (HACCP) to find disruption factors in cold chain logistics centers that may cause temperature abuse. Then, concept mapping is applied: selected experts engage in brainstorming sessions to identify any further factors. The panel consists of ten experts, including four from logistics and home delivery, two from retail distribution, one from the food industry, two from low-temperature logistics centers, and one from the freight industry. Disruptions include equipment-related aspects, human factors, management aspects, and process-related considerations. The areas of observation encompass freezer rooms, refrigerated storage areas, loading docks, sorting areas, and vehicle parking zones. The experts also categorize the disruption factors based on perceived similarities and build a similarity matrix. Each factor is evaluated for its impact, frequency, and investment importance. Next, multidimensional scaling, cluster analysis, and other methods are used to analyze these factors. Key disruption factors are identified based on their impact and frequency, and the factors that companies prioritize and are willing to invest in are then determined by assessing investors’ risk aversion behavior. Finally, Cumulative Prospect Theory (CPT) is applied to verify the risk patterns. A total of 66 disruption factors are found and categorized into six clusters: (1) "Inappropriate Use and Maintenance of Hardware and Software Facilities", (2) "Inadequate Management and Operational Negligence", (3) "Product Characteristics Affecting Quality and Inappropriate Packaging", (4) "Poor Control of Operation Timing and Missing Distribution Processing", (5) "Inadequate Planning for Peak Periods and Poor Process Planning", and (6) "Insufficient Cold Chain Awareness and Inadequate Training of Personnel". This study also identifies five critical factors in the operational processes of cold chain logistics centers: "Lack of Personnel’s Awareness Regarding Cold Chain Quality", "Personnel Not Following Standard Operating Procedures", "Personnel’s Operational Negligence", "Management’s Inadequacy", and "Lack of Personnel’s Knowledge About Cold Chain". The findings show that cold chain operators prioritize prevention and improvement efforts in the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster, particularly focusing on the factors of "Temperature Setting Errors" and "Management’s Inadequacy". However, through the application of CPT, this study reveals that companies are usually not willing to invest in improving factors related to the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster because of its low likelihood of occurrence, even though they acknowledge the severity of the consequences if it does occur. Hence, the main implication is that the key disruption factors in cold chain logistics centers’ processes are associated with personnel issues; therefore, comprehensive training, periodic audits, and the establishment of reasonable incentives and penalties for both new employees and managers may significantly reduce disruption issues.

Keywords: concept mapping, cold chain, HACCP, cumulative prospect theory

Procedia PDF Downloads 61
5897 Groundwater Flow Assessment Based on Numerical Simulation at Omdurman Area, Khartoum State, Sudan

Authors: Adil Balla Elkrail

Abstract:

Visual MODFLOW computer codes were selected to simulate head distribution, calculate the groundwater budgets of the area, evaluate the effect of external stresses on the groundwater head, and demonstrate how the groundwater model can be used as a comparative technique in order to optimize utilization of the groundwater resource. A conceptual model of the study area, aquifer parameters, boundary conditions, and initial conditions were used to simulate the flow model. The trial-and-error technique was used to calibrate the model. The most important criteria used to check the calibrated model were the Root Mean Square error (RMS), Mean Absolute error (MAE), Normalized Root Mean Square error (NRMS), and mass balance. The maps of simulated heads showed acceptable model calibration when compared with the observed heads map. A time length of eight years and the observed heads of the year 2004 were used for model prediction. The predictive simulation showed that the continuation of pumping will cause relatively high changes in head distribution and in the components of the groundwater budget, whereas the low computed deficit (7122 m³/d) between inflows and outflows cannot create a significant drawdown of the potentiometric level. Hence, the area under consideration may represent a high-permeability and productive zone and is strongly recommended for further groundwater development.
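As a rough illustration of the calibration criteria named above, the following sketch computes RMS, mean absolute error, and NRMS between simulated and observed heads; the head values are hypothetical and the formulas are the standard definitions, not necessarily the exact ones reported by Visual MODFLOW.

```python
import numpy as np

# Hypothetical observed and simulated groundwater heads (m) at monitoring wells.
observed = np.array([402.1, 398.7, 395.3, 401.0, 399.5])
simulated = np.array([401.6, 399.2, 395.9, 400.2, 399.9])

residuals = simulated - observed
rms = np.sqrt(np.mean(residuals ** 2))          # Root Mean Square error
mae = np.mean(np.abs(residuals))                # Mean Absolute error
nrms = rms / (observed.max() - observed.min())  # Normalized RMS (fraction of observed head range)

print(f"RMS = {rms:.3f} m, MAE = {mae:.3f} m, NRMS = {nrms:.2%}")
```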

Keywords: aquifers, model simulation, groundwater, calibrations, trial-and-error, prediction

Procedia PDF Downloads 235
5896 Comparison with Mechanical Behaviors of Mastication in Teeth Movement Cases

Authors: Jae-Yong Park, Yeo-Kyeong Lee, Hee-Sun Kim

Abstract:

Purpose: This study aims at investigating the mechanical behaviors of mastication according to various teeth movement cases. Three masticatory cases are considered: a general case and two cases of teeth movement. The general case includes the common arrangement of all teeth; the two teeth movement cases are, first, the case in which the molar teeth have moved halfway into the no. 14 tooth seat after extraction of the no. 14 tooth and, second, the case in which the molar teeth have moved fully into the no. 14 tooth seat after the same extraction. Materials and Methods: In order to analyze these cases, a three-dimensional finite element (FE) model of the skull was generated based on computed tomography images (964 DICOM files) of a 38-year-old male with normal occlusion status. An FE model of the general occlusal case was used to develop the CAE procedure, which was then applied to the FE models of the other occlusal cases. Displacement controls according to the loading condition were applied to simulate occlusal behaviors in all cases. From the FE analyses, the von Mises stress distribution of the skull and teeth was observed. The von Mises (effective) stress is widely used to express the stress state as a single absolute value, regardless of stress direction and the yield characteristics of materials. Results: High stress was distributed over the periodontal area of the mandible under the molar teeth when the load was transmitted in the coronal-apical direction in the general occlusal case. Following the stress propagation from teeth to cranium, the stress decreased as it propagated from the molar teeth to the infratemporal crest of the greater wing of the sphenoid bone and the lateral pterygoid plate in the general case. In the two cases of teeth movement, high stresses were distributed over the periodontal area of the mandible under the teeth located beneath the moved molar teeth. Conclusion: The mechanical behaviors of the general case and the two cases of teeth movement during the masticatory process were investigated, including qualitative validation. The displacement controls used as the loading condition effectively simulated occlusal behaviors in the two cases of molar teeth movement.
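The von Mises (effective) stress mentioned above can be computed from the six components of a 3D stress tensor with the standard formula; the sketch below is a generic illustration with a hypothetical stress state, not output from the authors' FE model.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six components of a 3D stress tensor (MPa)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state at a periodontal node (MPa).
print(von_mises(12.0, 4.0, -3.0, 2.5, 1.0, 0.8))
```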

Keywords: cranium, finite element analysis, mandible, masticatory action, occlusal force

Procedia PDF Downloads 388
5895 The Effects of Vitamin D Supplementation on Anthropometric Indicators of Adiposity and Fat Distribution in Children and Adolescents: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Authors: Simin Zarea Karizi, Somaye Fatahi, Amirhossein Hosseni

Abstract:

Background: There are conflicting findings regarding the effect of vitamin D supplementation on obesity-related factors. This study aimed to investigate the effect of vitamin D supplementation on changes in anthropometric indicators of adiposity and fat distribution in children and adolescents. Methods: Original databases were searched using standard keywords to identify all controlled trials investigating the effects of vitamin D supplementation on obesity-related factors in children and adolescents. Pooled weighted mean differences and 95% confidence intervals were obtained by random-effects model analysis. Results: Fourteen treatment arms were included in this systematic review and meta-analysis. The quantitative meta-analysis revealed no significant effect of vitamin D supplementation on BMI (-0.01 kg/m²; 95% CI: -0.09, 0.12; p = 0.74; I² = 0.0%), BMI z-score (0.02; 95% CI: -0.04, 0.07; p = 0.53; I² = 0.0%), or fat mass (0.07%; 95% CI: -0.09 to 0.24; p = 0.38; I² = 31.2%). However, the quantitative meta-analysis displayed a significant effect of vitamin D supplementation on WC compared with the control group (WMD = -1.17 cm, 95% CI: -2.05, -0.29, p = 0.009; I² = 32.0%). This effect appears to be greater in healthy children with an intervention duration > 12 weeks, a dose ≤ 400 IU, and a baseline vitamin D level below 50 nmol/l than in others. Conclusions: Our findings suggest that vitamin D supplementation may be protective against abdominal obesity and should be evaluated on an individual basis in clinical practice.
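For readers unfamiliar with random-effects pooling of mean differences, the sketch below shows a DerSimonian-Laird style calculation of a pooled WMD and its 95% CI; the per-study effects and variances are made up for illustration and do not reproduce the trials included in this review.

```python
import numpy as np

# Hypothetical per-study mean differences in BMI (kg/m^2) and their variances.
effects = np.array([-0.10, 0.05, -0.02, 0.08])
variances = np.array([0.004, 0.006, 0.003, 0.005])

# DerSimonian-Laird random-effects pooling.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)       # between-study variance
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)  # weighted mean difference (WMD)
se = 1.0 / np.sqrt(np.sum(w_star))
print(f"WMD = {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```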

Keywords: weight loss, vitamin D, anthropometry, children, adolescent

Procedia PDF Downloads 22
5894 An Implementation of Fuzzy Logic Technique for Prediction of the Power Transformer Faults

Authors: Omar M. Elmabrouk, Roaa Y. Taha, Najat M. Ebrahim, Sabbreen A. Mohammed

Abstract:

Power transformers are among the most crucial parts of the electrical power system and the distribution and transmission grid. They are maintained using a predictive, condition-based maintenance approach, in which the diagnosis of transformer condition is performed based on Dissolved Gas Analysis (DGA). Five main methods are utilized for analyzing these gases: the International Electrotechnical Commission (IEC) gas ratio, Key Gas, Rogers gas ratio, Doernenburg, and Duval Triangle methods. Given the importance of transformers, there is a need for an accurate technique to diagnose and hence predict the transformer condition, in order to avoid transformer faults and maintain the electrical power system and the distribution and transmission grid. In this paper, DGA was applied to data collected from the transformer records available at the General Electricity Company of Libya (GECOL) in Benghazi, Libya. A Fuzzy Logic (FL) technique was implemented as a diagnostic approach based on the IEC gas ratio method. The FL technique gave better results and proved suitable as an accurate prediction technique for power transformer faults. This technique should also be of interest to readers and researchers concerned with FL mathematics and power transformers.
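The sketch below only illustrates the IEC gas-ratio idea that the study builds on: it computes the three ratios from dissolved-gas concentrations and applies a few crisp screening rules. The thresholds and fault labels are illustrative assumptions, not the IEC 60599 code table or the authors' fuzzy membership functions.

```python
def iec_ratios(h2, ch4, c2h2, c2h4, c2h6):
    """Return the three IEC gas ratios from dissolved-gas concentrations (ppm)."""
    return c2h2 / c2h4, ch4 / h2, c2h4 / c2h6

def screen_fault(r1, r2, r3):
    """Very coarse, illustrative screening rules -- not the IEC 60599 code table."""
    if r1 > 1.0:
        return "possible arcing / high-energy discharge"
    if r2 < 0.1:
        return "possible partial discharge"
    if r3 > 1.0:
        return "possible thermal fault"
    return "no dominant fault pattern"

r1, r2, r3 = iec_ratios(h2=120.0, ch4=30.0, c2h2=5.0, c2h4=40.0, c2h6=25.0)
print(screen_fault(r1, r2, r3))
```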

Keywords: dissolved gas-in-oil analysis, fuzzy logic, power transformer, prediction

Procedia PDF Downloads 138
5893 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance modal verb treatment in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is on will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, and could, while can, will, should, and could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on the different meanings show that will, would, and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. These four modal verbs are also the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the EFL textbook presentation of modal verbs so that textbooks provide not only authentic language used in natural discourse but also appropriate design tailored to the needs of target learners.
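A toy illustration of the distributional counts used in such a comparison might look like the following; it simply tallies the nine modal verbs in a plain-text sample and is not the CQPweb or WordSmith Tools workflow used in the study.

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_distribution(text):
    """Relative frequency of each modal verb in a plain-text sample."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values()) or 1
    return {m: counts[m] / total for m in MODALS}

sample = "You can go now, but you should call first; she said she would help if she could."
print(modal_distribution(sample))
```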

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 138
5892 The Admitting Hemogram as a Predictor for Severity and in-Hospital Mortality in Acute Pancreatitis

Authors: Florge Francis A. Sy

Abstract:

Acute pancreatitis (AP) is an inflammatory condition of the pancreas with local and systemic complications. Severe acute pancreatitis (SAP) has a higher mortality rate. Laboratory parameters such as the neutrophil-to-lymphocyte ratio (NLR), red cell distribution width (RDW), and mean platelet volume (MPV) have been associated with SAP, but with conflicting results. This study aims to determine the predictive value of these parameters for the severity and in-hospital mortality of AP. This retrospective, cross-sectional study was done in a private hospital in Cebu City, Philippines. One hundred five patients were classified according to severity based on the modified Marshall scoring system. The admitting hemogram, including the NLR, RDW, and MPV, was obtained from the complete blood count (CBC). Cut-off values for severity and in-hospital mortality were derived from the ROC curve. Associations between NLR, RDW, and MPV and SAP and mortality were determined, with a p-value of < 0.05 considered significant. The mean age for AP was 47.6 years, with 50.5% of patients being male. Most had an unknown cause (49.5%), followed by a biliary cause (37.1%). Of the 105 patients, 23 had SAP, and 4 died. Older age, longer in-hospital duration, congestive heart failure, and elevated creatinine, urea nitrogen, and white blood cell count were seen in SAP. The NLR was associated with in-hospital mortality using a cut-off of > 10.6 (OR 1.133, 95% CI, p-value 0.003) with 100% sensitivity, 70.3% specificity, 11.76% PPV, and 100% NPV (AUC 0.855). The NLR was not associated with SAP. The RDW and MPV were not associated with SAP or mortality. The admitting NLR is, therefore, an easily accessible parameter that can predict in-hospital mortality in acute pancreatitis. Although the present study did not show an association of NLR with SAP, nor of RDW and MPV with either SAP or mortality, further studies are suggested to establish their clinical value.
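A cut-off such as the NLR > 10.6 reported above is typically read off an ROC curve, for example at the point maximizing Youden's J. The sketch below illustrates this on synthetic data; the NLR values and outcome labels are invented, not the study's patients.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic NLR values: survivors (label 0) lower on average than in-hospital deaths (label 1).
nlr = np.concatenate([rng.normal(6, 3, 100), rng.normal(14, 4, 10)])
died = np.concatenate([np.zeros(100), np.ones(10)])

fpr, tpr, thresholds = roc_curve(died, nlr)
best = np.argmax(tpr - fpr)                      # Youden's J statistic
print(f"AUC = {roc_auc_score(died, nlr):.3f}, "
      f"cut-off > {thresholds[best]:.1f} (sens {tpr[best]:.0%}, spec {1 - fpr[best]:.0%})")
```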

Keywords: acute pancreatitis, mean platelet volume, neutrophil-lymphocyte ratio, red cell distribution width

Procedia PDF Downloads 117
5891 Mechanical Properties and Chloride Diffusion of Ceramic Waste Aggregate Mortar Containing Ground Granulated Blast-Furnace Slag

Authors: H. Higashiyama, M. Sappakittipakorn, M. Mizukoshi, O. Takahashi

Abstract:

Ceramic waste aggregates (CWAs) were made from electric porcelain insulator wastes supplied by an electric power company, which were crushed and ground to fine aggregate sizes. In this study, to develop the CWA mortar as an eco-efficient material, ground granulated blast-furnace slag (GGBS) was incorporated as a supplementary cementitious material (SCM). The water-to-binder ratio (W/B) of the CWA mortars was varied at 0.4, 0.5, and 0.6. The cement of the CWA mortar was replaced by GGBS at 20 and 40% by volume (about 18 and 37% by weight). Mechanical properties, namely compressive and splitting tensile strengths and elastic modulus, were evaluated at the ages of 7, 28, and 91 days. Moreover, a chloride ingress test was carried out on the CWA mortars in a 5.0% NaCl solution for 48 weeks. The chloride diffusion was assessed by using an electron probe microanalysis (EPMA). To relate the apparent chloride diffusion coefficient to the pore size, a pore size distribution test was also performed using mercury intrusion porosimetry at the same time as the EPMA. The compressive strength of the CWA mortars with GGBS was higher than that without GGBS at the ages of 28 and 91 days. The resistance to chloride ingress of the CWA mortar improved in proportion to the GGBS replacement level.

Keywords: ceramic waste aggregate, chloride diffusion, GGBS, pore size distribution

Procedia PDF Downloads 338
5890 3D Simulation and Modeling of Magnetic-Sensitive on n-type Double-Gate Metal-Oxide-Semiconductor Field-Effect Transistor (DGMOSFET)

Authors: M. Kessi

Abstract:

We investigated the effect of the magnetic field on carrier transport phenomena in the transistor channel region of a Double-Gate Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET). The study explores the Lorentz force and basic physical properties of solids exposed to a constant external magnetic field. The magnetic field modulates the electron and potential distributions in the case of silicon Tunnel FETs. This modulation shows up in the device's external electrical characteristics, such as the ON current (ION), the subthreshold leakage current (IOFF), the threshold voltage (VTH), the magneto-transconductance (gm), and the output magneto-conductance (gDS) of the Tunnel FET. Moreover, the channel doping concentration and potential distribution are obtained numerically by solving Poisson's transport equation in the 3D semiconductor magnetic sensor modules available in the Silvaco TCAD tools. Numerical simulations of magnetic nano-sensors are relatively new; in this work, we present the results of numerical simulations based on 3D magnetic sensors. The results show excellent accuracy and good agreement with those obtained in experimental studies of MOSFET technology.

Keywords: single-gate MOSFET, magnetic field, hall field, Lorentz force

Procedia PDF Downloads 176
5889 Seasonal Variation in Aerosols Characteristics over Ahmedabad

Authors: Devansh Desai, Chamandeep Kaur, Nirmal Kullu, George Christopher

Abstract:

The study of aerosols has become a very important tool in assessing climatic changes over a region. Spectral and temporal variabilities in aerosol optical depth (AOD) and size distribution are investigated using ground-based measurements over Ahmedabad during the months of January 2013 to May 2013. The Angstrom coefficient (α) was found to be higher in the winter season (January to March), indicating the dominance of fine-mode aerosols over Ahmedabad, and lower during the pre-monsoon months, indicating the dominance of coarse-mode aerosols. Different values of alpha are observed when it is calculated over different wavelength ranges, indicating a bimodal aerosol size distribution. Discrimination of aerosol size during different seasons is made using the coefficients of a polynomial fit (α1 and α2), which show the presence of changing dominant aerosol types as a function of season over Ahmedabad. The α2 - α1 value is used to confirm the dominant aerosol mode over Ahmedabad in both seasons. During the pre-monsoon, about 90% of the AOD spectra are dominated by coarse-mode aerosols, and during winter, about 60% of the AOD spectra are dominated by fine-mode aerosols. This characterization of aerosols is important in assessing the response of different aerosol types in radiative forcing and over the climate of Ahmedabad.
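The Angstrom coefficient referred to above follows from AOD measured at two wavelengths; a minimal sketch, with hypothetical AOD values and wavelengths, is given below.

```python
import math

def angstrom_exponent(aod_1, aod_2, wl_1_nm, wl_2_nm):
    """Angstrom exponent computed from AOD measured at two wavelengths."""
    return -math.log(aod_1 / aod_2) / math.log(wl_1_nm / wl_2_nm)

# Hypothetical AODs at 440 nm and 870 nm; larger alpha suggests fine-mode dominance,
# smaller alpha suggests coarse-mode dominance.
print(angstrom_exponent(0.52, 0.28, 440.0, 870.0))
```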

Keywords: radiative forcing, aerosol optical depth, fine mode, coarse mode

Procedia PDF Downloads 495
5888 Identification of Body Fluid at the Crime Scene by DNA Methylation Markers for Use in Forensic Science

Authors: Shirin Jalili, Hadi Shirzad, Mahasti Modarresi, Samaneh Nabavi, Somayeh Khanjani

Abstract:

Identifying the source tissue of biological material found at crime scenes can be very informative in a number of cases. Despite their usefulness, current visual, catalytic, enzymatic, and immunologic tests for presumptive and confirmatory tissue identification are applicable only to a subset of samples, may suffer limitations such as low specificity and lack of sensitivity, are substantially impacted by environmental insults, and their results are operator-dependent. Recently, the possibility of discriminating body fluids using mRNA expression differences in tissues has been described, but the lack of long-term stability of that molecule and the need to normalize samples for each individual are limiting factors. The use of DNA should solve these issues because of its long-term stability and its specificity to each body fluid. Cells in the human body have a unique epigenome, which includes differences in DNA methylation in the promoters of genes. DNA methylation, which occurs at the 5′-position of the cytosine in CpG dinucleotides, has great potential for forensic identification of body fluids, because tissue-specific patterns of DNA methylation have been demonstrated, and DNA is less prone to degradation than proteins or RNA. Previous studies have reported several body fluid-specific DNA methylation markers. The presence or absence of a methyl group on the 5′ carbon of the cytosine pyrimidine ring in CpG dinucleotide regions called 'CpG islands' dictates whether the gene is expressed or silenced in the particular body fluid. Methylation patterns at tissue-specific differentially methylated regions (tDMRs) have been described as stable and specific, making them excellent markers for tissue identification. The results demonstrate that methylation-based tissue identification is more than a proof of concept; the methodology holds promise as another viable forensic DNA analysis tool for the characterization of biological materials.

Keywords: DNA methylation, forensic science, epigenome, tDMRs

Procedia PDF Downloads 422
5887 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effect model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of a gamma frailty model with concomitant variables in order to individualize the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the effective factors for cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R software, and an error level of 0.05 was considered for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean 7-year survival was 28.44 months; for deceased patients it was 19.33 months and for censored patients 31.79 months. Using multi-parametric survival models, Exponential and Weibull models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.

Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution

Procedia PDF Downloads 258
5886 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

The parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns: in the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies will be presented to illustrate the proposed approach's effectiveness. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization will be made.

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 141
5885 Chinese on the Move: Residential Mobility and Evolution of People's Republic of China-Born Migrants in Australia

Authors: Siqin Wang, Jonathan Corcoran, Yan Liu, Thomas Sigler

Abstract:

Australia is a quintessentially immigrant nation, with 28 percent of its residents being foreign-born. By 2011, the People's Republic of China (PRC) had overtaken the United Kingdom to become the largest source country of migrants to Australia. Significantly, the profile of PRC-born migrants has changed to mirror broader global shifts towards high-skilled labour, education-related, and investment-focussed migration, all of which reflect an increasing trend in the mobility of wealthy and/or educated cohorts. Together, these coalesce to form a more complex pattern of migrant settlement, both spatially and socio-economically. This paper focuses on PRC-born migration and redresses these lacunae with regard to the settlement outcomes of PRC migrants to Australia, with a particular focus on spatial evolution and residential mobility at both the metropolitan and national scales. By drawing on Census data and migration micro datasets, the aim of this paper is to examine the shifting dynamics of PRC-born migrants in Australian capital cities to unveil their socioeconomic characteristics, residential patterns, and changes in spatial concentration during their transition into the new host society. The paper identifies three general patterns in the residential evolution of PRC-born migrants, depending on the size of the capital cities where they settle, as well as the association of socio-economic characteristics with the formation of enclaves. It also examines residential mobility across states and cities from 2001 to 2011, indicating the rising status of medium-sized Australian capital cities as destinations for PRC-born migrants. The paper concludes with a discussion of evidence for policy formation, to facilitate the effective transition of PRC-born populations into the mainstream of the host society and enhance social harmony, helping Australia become a more successful multicultural nation.

Keywords: Australia, Chinese migrants, residential mobility, spatial evolution

Procedia PDF Downloads 229
5884 Inter-Complex Dependence of Production Technique and Preforms Construction on the Failure Pattern of Multilayer Homo-Polymer Composites

Authors: Ashraf Nawaz Khan, R. Alagirusamy, Apurba Das, Puneet Mahajan

Abstract:

Thermoplastic-based fibre composites are capturing a growing share of the market held by conventional thermoset composites. However, replacing a thermoset with a thermoplastic composite has never been an easy task: the inherently high viscosity of thermoplastic resin leads to poor interface properties. In this work, a homo-polymer towpreg is produced through an electrostatic powder spray coating methodology. The produced flexible towpreg offers a short melt-flow distance during consolidation of the laminate, and this reduced melt-flow distance yields a homogeneous fibre/matrix distribution (and low void content) on consolidation. The composite laminates were fabricated with two manufacturing techniques, the conventional film stack (FS) technique and the powder-coated (PC) technique. This helps in understanding the distinct responses of the produced laminates under load, since the laminates produced through the two techniques comprise the same constituent fibre and matrix (constant fibre volume fraction); the changed behaviour is observed mainly due to the different fibre/matrix configurations within the laminate. The interface adhesion influences the load transfer between the fibre and matrix and therefore influences the elastic, plastic, and failure patterns of the laminates. Moreover, the effect of preform geometries (plain weave and satin weave structures) is also studied for the corresponding composite laminates in terms of various mechanical properties. Fracture analysis is carried out to study the effect of resin at the interlacement points through micro-CT analysis. The PC laminate reveals considerably smaller matrix-rich and matrix-deficient zones in comparison with the FS laminate. Different loads (tensile, shear, fracture toughness, and drop-weight impact tests) are applied to the laminates, and the corresponding damage behaviour is analysed in the successive stages of failure. The PC composite shows superior mechanical properties in comparison with the FS composite. The damage that occurs in the laminates is captured through SEM analysis to identify the prominent modes of failure, such as matrix cracking, fibre breakage, delamination, debonding, and other phenomena.

Keywords: composite, damage, fibre, manufacturing

Procedia PDF Downloads 133
5883 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for different quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal, and faithful, have been analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach in the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data, reference was made to the following corpora to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g., word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g., true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like 'An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders' or ''True,' said Phoebe, 'but I'd probably get to be a Union Official immediately'' were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 310
5882 Geo-Spatial Distribution of Radio Refractivity and the Influence of Fade Depth on Microwave Propagation Signals over Nigeria

Authors: Olalekan Lawrence Ojo

Abstract:

Designing microwave terrestrial propagation networks requires a thorough evaluation of the severity of multipath fading, especially at frequencies below 10 GHz. In nations like Nigeria, without large enough databases to support the existing empirical models, the errors in the prediction techniques intended for this evaluation may be severe. The need for higher bandwidth for various satellite applications makes the investigation of the effects of radio refractivity, fading due to multipath, and geoclimatic factors on propagation links all the more important, and clear-air effects are among the key elements to take into account for the optimal operation of microwave frequencies. This work considers the geographical distribution of radio refractivity and fade depth over a number of stations in Nigeria. Data from five locations in Nigeria (Akure, Enugu, Jos, Minna, and Sokoto), based on five years (2017-2021) of measurements of atmospheric pressure, relative humidity, and temperature at two levels (ground surface and 100 m height), are studied to deduce their effects on signals propagated through microwave communication links. The assessments cover microwave communication systems as well as the impacts of the dry and wet components of radio refractivity and the effects of fade depth at various frequencies over a 20 km link distance. The results demonstrate that the dry term dominated the radio refractivity at the surface level, contributing a minimum of about 78% and a maximum of about 92%, while at a height of 100 meters the dry term likewise dominated, contributing a minimum of about 79% and a maximum of about 92%. The spatial distribution reveals that, regardless of height, the country's tropical rainforest (TRF) and freshwater swampy mangrove (FWSM) regions recorded the greatest values of radio refractivity. The statistical estimates show that fading values can differ by as much as 1.5 dB, especially near the TRF and FWSM coastlines, even during clear-air conditions. The current findings will be helpful for budgeting Earth-space microwave links, particularly for the rollout of Nigeria's projected 5G and 6G microcellular networks.
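The dry and wet refractivity terms discussed above can be estimated from surface pressure, temperature, and relative humidity with the usual ITU-R P.453-style expression; the sketch below uses a Magnus-type saturation vapour pressure approximation and hypothetical surface readings, not the station data analysed in the paper.

```python
import math

def refractivity(pressure_hpa, temp_c, rel_humidity_pct):
    """Dry and wet terms of radio refractivity N (N-units), ITU-R P.453-style expression."""
    T = temp_c + 273.15
    # Saturation vapour pressure (hPa) via a Magnus-type approximation.
    es = 6.1121 * math.exp(17.502 * temp_c / (temp_c + 240.97))
    e = rel_humidity_pct / 100.0 * es
    n_dry = 77.6 * pressure_hpa / T
    n_wet = 3.732e5 * e / (T ** 2)
    return n_dry, n_wet

n_dry, n_wet = refractivity(1010.0, 27.0, 50.0)   # hypothetical surface readings
print(f"N = {n_dry + n_wet:.1f} (dry {n_dry / (n_dry + n_wet):.0%}, wet {n_wet / (n_dry + n_wet):.0%})")
```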

Keywords: fade depth, geoclimatic factor, refractivity, refractivity gradient

Procedia PDF Downloads 69
5881 Turbulent Channel Flow Synthesis using Generative Adversarial Networks

Authors: John M. Lyne, K. Andrea Scott

Abstract:

In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require large numbers of nodes to appropriately resolve all scales of energy transfer. Due to the size of these databases, sharing these datasets amongst the academic community is a challenge. Recent work has investigated the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolution using a neural network. Recently, Generative Adversarial Networks (GANs) have grown in popularity, with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples whose distribution is ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are in reasonable agreement with those of the DNS data, demonstrating that GANs can produce the intricate multi-scale phenomena of turbulence.
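A minimal sketch of the adversarial training objective described above is given below; it trains a toy generator and discriminator on samples from a fixed Gaussian rather than on DNS velocity fields, and the architecture, latent dimension, and hyperparameters are placeholders, not those used in the study.

```python
import torch
import torch.nn as nn

# Toy stand-in data: "real" samples drawn from a fixed Gaussian instead of DNS velocity fields.
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + 1.0

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    real = real_batch(64)
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label generated samples as real.
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```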

Keywords: computational fluid dynamics, channel flow, turbulence, generative adversarial network

Procedia PDF Downloads 201
5880 Bridging Urban Planning and Environmental Conservation: A Regional Analysis of Northern and Central Kolkata

Authors: Tanmay Bisen, Aastha Shayla

Abstract:

This study introduces an advanced approach to tree canopy detection in urban environments and a regional analysis of Northern and Central Kolkata that delves into the intricate relationship between urban development and environmental conservation. Leveraging high-resolution drone imagery from diverse urban green spaces in Kolkata, we fine-tuned the deep forest model to enhance its precision and accuracy. Our results, characterized by an impressive Intersection over Union (IoU) score of 0.90 and a mean average precision (mAP) of 0.87, underscore the model's robustness in detecting and classifying tree crowns amidst the complexities of aerial imagery. This research not only emphasizes the importance of model customization for specific datasets but also highlights the potential of drone-based remote sensing in urban forestry studies. The study investigates the spatial distribution, density, and environmental impact of trees in Northern and Central Kolkata. The findings underscore the significance of urban green spaces in metropolitan cities, emphasizing the need for sustainable urban planning that integrates green infrastructure for ecological balance and human well-being.
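The Intersection over Union score reported above is the standard box-overlap metric; a minimal sketch for two axis-aligned bounding boxes, with hypothetical coordinates, follows.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. annotated tree-crown boxes (pixels).
print(iou((10, 10, 60, 60), (20, 15, 70, 65)))
```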

Keywords: urban greenery, advanced spatial distribution analysis, drone imagery, deep learning, tree detection

Procedia PDF Downloads 49
5879 Comparison of Automated Zone Design Census Output Areas with Existing Output Areas in South Africa

Authors: T. Mokhele, O. Mutanga, F. Ahmed

Abstract:

South Africa is one of the few countries that have stopped using the same Enumeration Areas (EAs) for census enumeration and dissemination. The advantage of this change is that the confidentiality issue can be addressed for census dissemination, since the geographic unit for collection is designed mainly to ensure that it can be covered by one enumerator. The objective of this paper was to evaluate the performance of automated zone design output areas against geographies developed without zone design, using the 2001 census data, and to some extent the 2011 census, as the main input. The Automated Zone-design Tool (AZTool) census output areas were compared with the Small Area Layers (SALs) and SubPlaces based on the confidentiality limit, population distribution, degree of homogeneity, and shape compactness. Further, SPSS was employed for validation of the AZTool output results. The results showed that the AZTool-developed output areas outperform the existing official SALs and SubPlaces with regard to the minimum population threshold, population distribution and, to some extent, homogeneity. It was therefore concluded that the AZTool program provides a new alternative for the creation of optimised census output areas for the dissemination of population census data in South Africa.

Keywords: AZTool, enumeration areas, small areal layers, South Africa

Procedia PDF Downloads 178
5878 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.

Authors: Narjes Abbasabadi

Abstract:

As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO2 emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in the suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objectives of this research are to understand the relationship of the new morphology of suburban tall buildings, the physical dimensions of individual buildings, and their arrangement at a larger scale with energy efficiency. This study aims to answer these questions: 1) Why and how can the potential phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) energy, namely heat energy demand (excluding cooling and lighting) related to design issues at two levels: the macro, urban scale and the micro scale of individual buildings (physical dimensions, height, morphology, spatial pattern of tall buildings and their relationship with each other and with transport infrastructure); and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership. The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings, and among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at an urban scale, while detached suburban high-rise or low-rise suburban housing will have the lowest energy efficiency. The research methodology is based on a quantitative study applying the available literature and statistical data as well as mapping and visual documentation of urban regions using tools such as Google Earth, Microsoft Bing Bird's Eye View, and Street View. It examines each suburb within each city through satellite imagery and explores the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.

Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)

Procedia PDF Downloads 253
5877 Characteristics and Key Exploration Directions of Gold Deposits in China

Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao

Abstract:

Based on the geodynamic environment and the basic geological characteristics of the mineralization, gold deposits in China are divided into 11 categories, of which the tectonic fracture altered rock, mid-depth intrusion and contact zone, micro-fine disseminated, and continental volcanic types are the main prospecting kinds. The metallogenic age of gold deposits in China is dominated by the Mesozoic and Cenozoic. According to the geotectonic units, geological evolution, geological conditions, spatial distribution, gold deposit types, metallogenic factors, etc., 42 gold concentration areas are initially determined, and they show a concentrated distribution. On the basis of gold exploration density, the gold concentration areas are divided into high, medium, and low level areas; the high-level areas are mainly distributed in the central and eastern regions. About 93.04% of the gold exploration drillings are within 500 meters of the surface, but there are some problems, such as limited and shallow drilling verification. The paper discusses the resource potential of gold deposits and proposes future prospecting directions and suggestions. The deep parts and peripheries of old mines in the central and eastern regions, together with the western area, especially Xinjiang and Qinghai, will be the key future prospecting targets and have huge potential gold reserves. If the exploration depth is extended to 2,000 meters below the surface, the gold resources will double.

Keywords: gold deposits, gold deposits types, gold concentration areas, prospecting, resource potentiality

Procedia PDF Downloads 70
5876 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that using direct or indirect priors affects the precision of the test.
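To make the cumulative exposure assumption concrete, the sketch below simulates Weibull failure times under a simple step-stress plan with conventional Type-I censoring; the shape, scale, stress-change, and censoring values are illustrative, and the progressive-censoring and Bayesian design steps of the paper are not reproduced.

```python
import numpy as np

def simulate_step_stress(n, beta, theta1, theta2, tau, censor_time, seed=0):
    """Draw n Weibull failure times under a simple step-stress plan (stress change at tau)
    using the cumulative exposure model, with conventional Type-I censoring at censor_time."""
    rng = np.random.default_rng(seed)
    z = (-np.log(rng.uniform(size=n))) ** (1.0 / beta)   # standardized Weibull draws
    t = np.where(z <= tau / theta1,
                 theta1 * z,                              # failure before the stress change
                 tau + theta2 * (z - tau / theta1))       # failure after, shifted by prior exposure
    observed = t <= censor_time
    return np.where(observed, t, censor_time), observed

times, observed = simulate_step_stress(n=50, beta=1.5, theta1=120.0, theta2=40.0,
                                       tau=60.0, censor_time=150.0)
print(f"{observed.sum()} failures observed, {(~observed).sum()} censored")
```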

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 501
5875 Detecting Local Clusters of Childhood Malnutrition in the Island Province of Marinduque, Philippines Using Spatial Scan Statistic

Authors: Novee Lor C. Leyso, Maylin C. Palatino

Abstract:

Under-five malnutrition continues to persist in the Philippines, particularly in the island Province of Marinduque, with the prevalence of some forms of malnutrition even worsening in recent years. Local spatial cluster detection provides a spatial perspective for understanding this phenomenon and is key to analyzing patterns of geographic variation, identifying community-appropriate programs and interventions, and focusing targeting on high-risk areas. Using data from a province-wide household-based census conducted in 2014-2016, this study aimed to determine and evaluate spatial clusters of under-five malnutrition across the province and within each municipality, at the individual level using household location. Malnutrition was defined as a weight-for-age z-score falling outside ±2 standard deviations from the median of the WHO reference population. The Kulldorff elliptical spatial scan statistic with a binomial model was used to locate clusters with a high risk of malnutrition, while adjusting for age and membership in the government conditional cash transfer program as a proxy for socio-economic status. One large significant cluster of under-five malnutrition was found in the southwest of the province, where living in these areas at least doubles the risk of malnutrition. Additionally, at least one significant cluster was identified within each municipality, mostly located along the coastal areas. All of these indicate apparent geographical variation across and within municipalities in the province. There were also similarities and disparities in the patterns of malnutrition risk in each cluster across municipalities, and even within municipalities, suggesting underlying causes at work that warrant further investigation. Therefore, community-appropriate programs and interventions should be identified and focused on high-risk areas to maximize limited government resources. Further studies are also recommended to determine the factors affecting variation in childhood malnutrition, given the evidence of spatial clustering found in this study.
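At the core of the Kulldorff scan statistic is a log-likelihood ratio comparing the risk inside and outside a candidate zone; a minimal Bernoulli-model sketch is shown below, with hypothetical case and population counts (the actual analysis also scans elliptical zones and adjusts for covariates).

```python
import math

def bernoulli_scan_llr(cases_in, pop_in, cases_total, pop_total):
    """Log-likelihood ratio of the Kulldorff scan statistic (Bernoulli/binomial model)
    for one candidate zone, relative to the null of uniform risk everywhere."""
    c, n, C, N = cases_in, pop_in, cases_total, pop_total
    p = c / n                          # risk inside the zone
    q = (C - c) / (N - n)              # risk outside the zone
    if p <= q:                         # only zones with elevated risk are of interest
        return 0.0
    def ll(k, m, r):                   # binomial log-likelihood, guarding log(0)
        return (k * math.log(r) if k else 0.0) + ((m - k) * math.log(1 - r) if m - k else 0.0)
    return ll(c, n, p) + ll(C - c, N - n, q) - ll(C, N, C / N)

# Hypothetical zone: 40 malnourished of 200 children, against 150 of 3000 province-wide.
print(bernoulli_scan_llr(40, 200, 150, 3000))
```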

Keywords: Binomial model, Kulldorff’s elliptical spatial scan statistic, Philippines, under-five malnutrition

Procedia PDF Downloads 135
5874 Ending Wars Over Water: Evaluating the Extent to Which Artificial Intelligence Can Be Used to Predict and Prevent Transboundary Water Conflicts

Authors: Akhila Potluru

Abstract:

Worldwide, more than 250 bodies of water are transboundary, meaning they cross the political boundaries of multiple countries. This creates a system of hydrological, economic, and social interdependence between communities reliant on these water sources, and transboundary water conflicts can occur as a result of this intense interdependence. Many factors contribute to the sparking of transboundary water conflicts, ranging from natural hydrological factors to hydro-political interactions. Previous attempts to predict transboundary water conflicts by analysing changes or trends in the contributing factors have typically failed because patterns in the data are hard to identify. However, there is potential for artificial intelligence and machine learning to fill this gap and identify future 'hotspots' up to a year in advance by finding patterns in data that humans cannot. This research determines the extent to which AI can be used to predict and prevent transboundary water conflicts. This is done via a critical literature review of previous case studies and datasets where AI was deployed to predict water conflict. This research not only delivered a more nuanced understanding of previously undervalued factors that contribute to transboundary water conflicts (in particular, culture and disinformation) but also showed that, by detecting conflict early, governance bodies can engage in processes to de-escalate it by providing pre-emptive solutions. Looking forward, this gives rise to significant implications for policy and water-sharing agreements, which may be able to prevent water conflicts from developing into wide-scale disasters. Additionally, AI can be used to gain a fuller picture of water-based conflicts in areas where security concerns mean it is not possible to have staff on the ground. AI therefore enhances not only the depth but also the breadth of our knowledge about transboundary water conflicts. With demand for water constantly growing, competition between countries over shared water will increasingly lead to water conflict. There has never been a more significant time to be able to accurately predict, and take precautions to prevent, global water conflicts.

Keywords: artificial intelligence, machine learning, transboundary water conflict, water management

Procedia PDF Downloads 99
5873 Prediction of Temperature Distribution during Drilling Process Using Artificial Neural Network

Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Afshin Karimzadeh Fard

Abstract:

An experimental and numerical study of temperature distribution during the milling process is important for milling quality and tool life. In the present study, the milling cross-section temperature is determined using Artificial Neural Networks (ANN), based on the temperature at certain points of the workpiece, the specifications of those points, and the rotational speed of the milling blade. First, a three-dimensional model of the workpiece is built, and then, using Computational Heat Transfer (CHT) simulations, the temperature at different nodes of the workpiece is obtained under steady-state conditions. The results obtained from CHT are used for training and testing the ANN. Using reverse engineering and setting the desired x, y, z coordinates and the milling blade rotational speed as input data to the network, the milling surface temperature determined by the neural network is presented as output data. The temperatures at the desired points for different milling blade rotational speeds are obtained experimentally, and the milling surface temperature is obtained by extrapolation; a comparison is then performed among the ANN soft programming results, the CHT results, and the experimental data. It is observed that the ANN soft programming code can be used efficiently to determine the temperature in a milling process.
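As a rough analogue of the ANN step described above, the sketch below fits a small multilayer perceptron that maps node coordinates and spindle speed to temperature; the training data are synthetic, so the model and its response are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic training set: node coordinates (x, y, z in mm) and blade speed (rpm) -> temperature (C).
X = rng.uniform([0, 0, 0, 500], [50, 50, 10, 3000], size=(500, 4))
y = 25 + 0.02 * X[:, 3] - 0.8 * X[:, 2] + rng.normal(0, 1.5, 500)   # made-up thermal response

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[25.0, 25.0, 2.0, 1500.0]]))   # predicted temperature at a new node and speed
```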

Keywords: artificial neural networks, milling process, rotational speed, temperature

Procedia PDF Downloads 396
5872 Thermodynamic Cycle Analysis for Overall Efficiency Improvement and Temperature Reduction in Gas Turbines

Authors: Jeni A. Popescu, Ionut Porumbel, Valeriu A. Vilag, Cleopatra F. Cuciumita

Abstract:

The paper presents a thermodynamic cycle analysis for three turboshaft engines. The first cycle is a Brayton cycle, describing the evolution of a classical turboshaft based on the Klimov TV2 engine. The other two cycles aim at approaching an Ericsson cycle by replacing the adiabatic expansion of the Brayton cycle turbine with a quasi-isothermal expansion. The maximum temperature of the quasi-Ericsson cycles is set to a value lower than the maximum Brayton cycle temperature, equal to the Brayton cycle power turbine inlet temperature, in order to decrease the engine NOx emissions. The power distribution over the stages of the gas generator turbine is also kept the same. In the first of the two quasi-Ericsson cycles, the efficiencies of the gas generator turbine stages are maintained the same as for the reference case, while for the second, the efficiencies are increased in order to obtain the same shaft power as in the reference case. It is found that in the first case both the shaft power and the thermodynamic efficiency of the engine decrease, while in the second the power is maintained and even a slight increase in efficiency can be noted.
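To make the contrast between the two expansion laws concrete, the back-of-the-envelope comparison below evaluates ideal adiabatic and isothermal expansion work across the same pressure ratio; every number is a generic assumption for illustration, not the TV2-based cycle data of the paper.

```python
# Rough ideal-gas comparison (assumed properties and conditions, not the paper's data):
# isothermal expansion can deliver comparable specific work at a lower peak temperature,
# because heat is added during the expansion itself.
import math

cp, R, gamma = 1148.0, 287.0, 1.33   # hot-gas properties, J/(kg.K) and [-] (assumed)
p_ratio = 8.0                        # turbine expansion pressure ratio (assumed)
T_brayton = 1400.0                   # Brayton turbine inlet temperature [K] (assumed)
T_iso = 1150.0                       # lower quasi-isothermal temperature [K] (assumed)

# Ideal adiabatic (isentropic) expansion work per unit mass
w_adiabatic = cp * T_brayton * (1.0 - p_ratio ** (-(gamma - 1.0) / gamma))

# Ideal isothermal expansion work per unit mass (equal to the heat added during expansion)
w_isothermal = R * T_iso * math.log(p_ratio)

print(f"adiabatic expansion work : {w_adiabatic / 1000:.0f} kJ/kg at {T_brayton:.0f} K inlet")
print(f"isothermal expansion work: {w_isothermal / 1000:.0f} kJ/kg at {T_iso:.0f} K")
```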

Keywords: combustion, Ericsson, thermodynamic analysis, turbine

Procedia PDF Downloads 603
5871 Analysis of the Discursive Dynamics of Preservice Physics Teachers in a Context of Curricular Innovation

Authors: M. A. Barros, M. V. Barros

Abstract:

The aim of this work is to analyze the discursive dynamics of preservice teachers during the implementation of a didactic sequence on topics of Quantum Mechanics for high school. The research methodology was qualitative, of the case study type, in which we selected two prospective teachers from the Physics Teacher Training Course of the Sao Carlos Institute of Physics at the University of Sao Paulo, Brazil. The modes of communication analyzed were the intentions and interventions of the teachers, the communicative approach established, and the patterns and contents of the interactions between teachers and students. Data were collected through video recording, interviews, and questionnaires conducted before and after an 8-hour mini-course offered to a group of 20 secondary students. As a teaching strategy, we used an active learning methodology called Peer Instruction. The episodes showed that both future teachers used interactive-dialogic and authoritative communicative approaches to mediate the discussion between peers. In the interactive-dialogic dimension, the communication pattern was predominantly I-R-F (initiation-response-feedback), in which the future teachers assisted the students by providing feedback to their initiations and contributing to the progress of the peer discussions. Although the interactive-dialogic dimension was predominant during the use of the Peer Instruction method, the authoritative communicative approach was also employed. In the authoritative dimension, the future teachers predominantly used the I-R-E (initiation-response-evaluation) communication pattern, asking the students several questions and leading them to the correct answer. Among its main implications, the work contributes to improving the practices of future teachers who apply active learning methodologies in the classroom, by identifying the types of communicative approaches and communication patterns used, as well as to research on curriculum innovation in high school physics.

Keywords: curricular innovation, high school, physics teaching, discursive dynamics

Procedia PDF Downloads 179
5870 The Cartometric-Geographical Analysis of Ivane Javakhishvili 1922: The Map of the Republic of Georgia

Authors: Manana Kvetenadze, Dali Nikolaishvili

Abstract:

The study reveals the territorial changes of Georgia across the pre-Soviet and Soviet periods. This includes estimating the country's borders, tracking the changes in its administrative-territorial arrangement, and establishing its territorial losses. Georgia's old and new borders, as marked on the map, are of great interest. The new boundary shows the situation in 1922, after the Soviet takeover. Neither on this map nor in other works does Ivane Javakhishvili state what he means by the old borders, though it is evident that this is the pre-Soviet boundary up to 1921, i.e., the period when historical Tao, Zaqatala, Lore, and Karaia were still parts of Georgia. In cartometric-geographical terms, the work presents a detailed analysis of Georgia's borders and compares the research results: 1) with the boundary line on Soviet topographic maps at the 1:100,000, 1:50,000, and 1:25,000 scales; 2) with Ivane Javakhishvili's work ('The Borders of Georgia in Terms of Historical and Contemporary Issues'). During the research, we used a multidisciplinary methodology and software: we used ArcGIS to georeference the maps and then compared all post-Soviet maps in order to determine how the borders have changed. We also drew on extensive historical data. The features of the spatial distribution of Georgia's administrative-territorial units, as well as of the objects depicted on the map, have been established. The results obtained are presented in the form of thematic maps and diagrams.
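As a small open-source counterpart to the ArcGIS overlay step described above, the sketch below compares two digitised boundary polygons and quantifies the territorial change; the toy coordinates are placeholders, not the actual 1921 and 1922 borders.

```python
# A minimal sketch of the border-comparison step using shapely (the study used ArcGIS).
# The polygons below are toy placeholders; real boundaries would come from the
# georeferenced map scans, projected to an equal-area CRS before measuring areas.
from shapely.geometry import Polygon

old_border = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])  # pre-1921 outline (placeholder)
new_border = Polygon([(0, 0), (9, 0), (9, 8), (0, 8)])    # 1922 outline (placeholder)

lost = old_border.difference(new_border)     # territory inside the old border only
gained = new_border.difference(old_border)   # territory inside the new border only

print(f"old border area : {old_border.area:.1f}")
print(f"new border area : {new_border.area:.1f}")
print(f"territorial loss: {lost.area:.1f} ({lost.area / old_border.area:.1%} of the old area)")
print(f"territorial gain: {gained.area:.1f}")
```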

Keywords: border, GIS, Georgia, historical cartography, old maps

Procedia PDF Downloads 236
5869 The Transient Reactive Power Regulation Capability of SVC for Large Scale WECS Connected to Distribution Networks

Authors: Y. Ates, A. R. Boynuegri, M. Uzunoglu, A. Karakas

Abstract:

The recent interest in alternative and renewable energy systems has increased the share of such systems in the installed capacity of total world energy production. In particular, wind energy conversion systems (WECS) have recently drawn significant attention among the possible alternative energy options. Despite the benefits of deploying WECS worldwide in terms of environmental protection, national energy independence, etc., significant problems remain to be solved for the grid connection of large-scale WECS. Reactive power regulation and the suppression of voltage variations are among the major issues to be considered in this regard. Thus, this paper evaluates the application of a Static VAr Compensator (SVC) unit for the reactive power regulation and operation continuity of WECS during a fault condition. The system is modeled on the IEEE 13 node test system, which makes it possible to evaluate performance with an overall grid simulation model close to real grid systems. The overall simulation model is developed in the MATLAB/Simulink/SimPowerSystems® environment, and the obtained results effectively meet the objectives of the study.
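As a purely conceptual illustration of the SVC's role during such an event (a minimal sketch with assumed parameters, not the SimPowerSystems model of the paper), the loop below drives a shunt susceptance with an integral voltage regulator against a linearised Q-V sensitivity of the connection bus.

```python
# Conceptual SVC sketch: a controllable shunt susceptance B, adjusted by an integral
# voltage regulator, supports the bus voltage during a fault-induced sag.
# Sensitivity, limits, gains, and the disturbance are illustrative assumptions.
V_NOMINAL = 1.0           # voltage set-point [pu]
DV_DQ = 0.05              # assumed bus Q-V sensitivity [pu V per pu Mvar]
B_MIN, B_MAX = -1.0, 1.0  # inductive/capacitive susceptance limits [pu]
KI = 50.0                 # integral gain of the voltage regulator
DT = 0.001                # time step [s]

b_svc = 0.0
v = V_NOMINAL
for step in range(1500):
    t = step * DT
    sag = 0.04 if t >= 0.5 else 0.0        # 4% voltage sag from t = 0.5 s (assumed fault)
    q_svc = b_svc * v ** 2                 # reactive power injected by the SVC [pu]
    v = V_NOMINAL - sag + DV_DQ * q_svc    # linearised bus-voltage response
    b_svc += KI * (V_NOMINAL - v) * DT     # integral control action
    b_svc = max(B_MIN, min(B_MAX, b_svc))  # respect the SVC operating limits

print(f"t = {t:.2f} s: B_svc = {b_svc:+.3f} pu, bus voltage = {v:.3f} pu")
```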

Keywords: IEEE 13 bus distribution system, reactive power regulation, static VAr compensator, wind energy conversion system

Procedia PDF Downloads 731