Search results for: spatial distance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4178

488 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the EU_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. The next question concerns any remarkable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir Agreement itself, particularly after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific final export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo Maximum Likelihood (PPML) estimator was used to estimate the gravity equation. The methodological approach is threefold. First, a hierarchical cluster analysis classifies final export flows showing a certain degree of linkage with each other; the analysis resulted in three main sectoral clusters of exports between the Agadir_4 and the EU_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare-parts sectors. Second, export flows from the three clusters treated with diagonal rules of origin are compared against an equally comparable untreated control group through a double-differences approach. Third, results are verified through a robustness check based on propensity score matching, validating that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Across this analysis, the interaction term combining treatment and time was significant for 13 of the 17 covered sectors, indicating that treatment with diagonal rules of origin increased the Agadir_4's final and intermediate exports to the EU_26 by 335% on average and changed the structure and composition of Agadir_4 exports to the EU_26 countries.
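
As a minimal illustration of the estimation step, the sketch below fits a PPML gravity equation with a difference-in-differences interaction using Python's statsmodels; the variable names and toy data are hypothetical, not the paper's dataset.

```python
# Minimal PPML gravity sketch (synthetic toy data; not the paper's dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "log_gdp_o": rng.normal(10, 1, n),      # exporter GDP (log), made-up
    "log_gdp_d": rng.normal(10, 1, n),      # importer GDP (log), made-up
    "log_distance": rng.normal(8, 0.5, n),  # bilateral distance (log), made-up
    "treat": rng.integers(0, 2, n),         # sector treated with diagonal rules of origin
    "post": rng.integers(0, 2, n),          # period after the cumulation scheme
})
df["treat_x_post"] = df["treat"] * df["post"]  # difference-in-differences interaction
mu = np.exp(0.5 * df.log_gdp_o + 0.5 * df.log_gdp_d
            - 0.8 * df.log_distance + 0.3 * df.treat_x_post - 1.0)
df["exports"] = rng.poisson(mu)

# PPML: a Poisson GLM on export levels (not logs); robust to heteroskedasticity
# and keeps zero trade flows in the sample.
X = sm.add_constant(df[["log_gdp_o", "log_gdp_d", "log_distance",
                        "treat", "post", "treat_x_post"]])
ppml = sm.GLM(df["exports"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.params)
```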

Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 301
487 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate linkage disequilibrium and haplotype blocks in two high-density panels, before and after imputation to a combined panel, in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles from two loci (r²) and |D'|. Both measures were calculated using the software PLINK, and the haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay compared to |D'|, whereas AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small sample size for AHD (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance, both panels presented similar values. The r² measure is influenced by the minor allele frequency of the pair of SNPs, which can explain the observed difference between the r² decay and the |D'| decay. As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The haplotype blocks detected for IHD, AHD, and CP numbered 84,529, 63,967, and 140,336, respectively. IHD blocks had a mean length of 137.70 ± 219.05 kb, AHD blocks 102.10 ± 155.47 kb, and CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
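
As a hedged sketch of the two LD measures used (computed with PLINK in the study), the snippet below derives D, r², and |D'| from phased haplotypes at a pair of biallelic SNPs; the input haplotypes are made up.

```python
# Toy computation of pairwise LD measures from phased haplotypes (illustrative input).
import numpy as np

# Haplotypes for two biallelic SNPs coded 0/1, one row per haplotype (made-up data).
haps = np.array([[0, 0], [0, 1], [1, 1], [1, 1], [0, 0], [1, 0]])

pA = haps[:, 0].mean()   # allele frequency at locus 1
pB = haps[:, 1].mean()   # allele frequency at locus 2
pAB = np.mean((haps[:, 0] == 1) & (haps[:, 1] == 1))  # joint frequency of the 1-1 haplotype

D = pAB - pA * pB        # linkage disequilibrium coefficient
r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))           # correlation-based measure r^2
Dmax = (min(pA * (1 - pB), (1 - pA) * pB) if D > 0
        else min(pA * pB, (1 - pA) * (1 - pB)))
d_prime = abs(D) / Dmax  # |D'|, normalized by its maximum attainable value
print(f"D={D:.3f}, r^2={r2:.3f}, |D'|={d_prime:.3f}")
```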

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 257
486 A Semi-Automated GIS-Based Implementation of Slope Angle Design Reconciliation Process at Debswana Jwaneng Mine, Botswana

Authors: K. Mokatse, O. M. Barei, K. Gabanakgosi, P. Matlhabaphiri

Abstract:

The mining of pit slopes is often associated with some level of deviation from design recommendations, which may translate into changes in the stability of the excavated pit slopes. Slope angle design reconciliations are therefore essential for assessing and monitoring the compliance of excavated pit slopes with accepted slope designs. The associated changes in slope stability may be reflected in changes in the calculated factors of safety and/or probabilities of failure. Reconciliations of as-mined and design slope profiles are conducted periodically to assess the implications of these deviations for pit slope stability. Currently, the slope design reconciliation process implemented at Jwaneng Mine involves measuring as-mined and design slope angles along vertical sections cut along the established geotechnical design section lines in the GEOVIA GEMS™ software. Bench retention is calculated as the percentage of the available catchment area, less over-mined and under-mined areas, relative to the designed catchment area. This process has proven tedious and time-consuming, requiring substantial manual effort to execute. Consequently, a new semi-automated mine-to-design reconciliation approach that utilizes laser scanning and GIS-based tools is being proposed at Jwaneng Mine. This method involves high-resolution scanning of targeted bench walls, the subsequent creation of 3D surfaces from point cloud data, and the derivation of slope toe lines and crest lines in the Maptek I-Site Studio software. The toe lines and crest lines are then exported to the ArcGIS software, where distance offsets between the design and actual bench toe lines and crest lines are calculated. Retained bench catchment capacity is measured as the distance between the toe lines and crest lines at the same bench elevations. The assessment of the performance of the inter-ramp and overall slopes entails measuring excavated and design slope angles along vertical sections in the ArcGIS software. Excavated and design toe-to-toe or crest-to-crest slope angles are measured for inter-ramp stack slope reconciliations, and crest-to-toe slope angles are measured for overall slope angle design reconciliations. The proposed approach allows for a more automated, accurate, quick, and easy workflow for carrying out slope angle design reconciliations, and it has proved highly effective and timeous in the assessment of slope performance at Jwaneng Mine. This paper presents the newly proposed process for assessing compliance with slope angle designs at Jwaneng Mine.
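
As a simplified illustration of the angle-reconciliation arithmetic, the sketch below computes a crest-to-toe slope angle from two surveyed 3D points and compares it with a design value; the coordinates and design angle are hypothetical.

```python
# Simplified slope angle reconciliation between as-mined and design profiles
# (all coordinates and the design angle are hypothetical).
import math

def slope_angle(toe, crest):
    """Angle (deg) of the line from toe to crest, measured from horizontal."""
    dx = math.hypot(crest[0] - toe[0], crest[1] - toe[1])  # horizontal distance
    dz = crest[2] - toe[2]                                 # vertical rise
    return math.degrees(math.atan2(dz, dx))

toe = (1000.0, 2000.0, 850.0)    # surveyed toe point (x, y, elevation), made-up
crest = (1012.0, 2004.0, 865.0)  # surveyed crest point on the bench above, made-up
design_angle = 52.0              # design crest-to-toe angle in degrees, made-up

as_mined = slope_angle(toe, crest)
print(f"as-mined {as_mined:.1f} deg, design {design_angle:.1f} deg, "
      f"deviation {as_mined - design_angle:+.1f} deg")
```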

Keywords: slope angle designs, slope design recommendations, slope performance, slope stability

Procedia PDF Downloads 205
485 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses. They require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and they are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and aggregation-induced emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advancements in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant relative fluorescence intensity changes and selectivity towards cocaine over other drugs. Using 4-butoxytriethylammonium bromide tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid onsite cocaine detection, and shows potential for miniaturized testing setups.

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 45
484 A (Morpho) Phonological Typology of Demonstratives: A Case Study in Sound Symbolism

Authors: Seppo Kittilä, Sonja Dahlgren

Abstract:

In this paper, a (morpho)phonological typology of proximal and distal demonstratives is proposed. Only the most basic proximal ('this') and distal ('that') forms are considered; potential more fine-grained distinctions based on proximity are not relevant to our discussion, nor are the other functions the discussed demonstratives may have. The sample comprises 82 languages that represent the linguistic diversity of the world's languages, although the study is not based on a systematic sample. Four major types are distinguished: (1) Vowel type: front vs. back, high vs. low vowel; (2) Consonant type: front vs. back consonants; (3) Additional-element type; (4) Varia. The proposed types can be further subdivided according to whether the attested difference concerns only, e.g., vowels, or whether there are also other changes. For example, the first type comprises both languages such as Betta Kurumba, where only the vowel changes (i 'this', a 'that'), and languages like Alyawarra (nhinha vs. nhaka), where there are also other changes. In the second type, demonstratives are distinguished based on whether the consonants are front or back; typically, front consonants (e.g., labial and dental) appear in proximal demonstratives and back consonants (such as velar or uvular consonants) in distal demonstratives. An example is provided by Bunaq, where bari marks 'this' and baqi 'that'. In the third type, distal demonstratives typically carry an additional element, making them longer in form than the proximal ones (e.g., Òko òne 'this', ònébé 'that'), but the type also comprises languages where the distal demonstrative is simply phonologically longer (e.g., Ngalakan nu-gaʔye vs. nu-gunʔbiri). Finally, the last type comprises cases that do not fit into the three other types; a number of strategies are used by the languages of this group. The first two types can be explained by iconicity: front or high phonemes appear in proximal demonstratives, while back or low phonemes are related to distal demonstratives. This means that proximal demonstratives are pronounced at the front and/or high part of the oral cavity, while distal demonstratives are pronounced lower and further back, reflecting the proximal/distal nature of their referents in the physical world. The first type is clearly the most common in our data (40/82 languages), which suggests a clear association with iconicity. Our findings support earlier findings that proximal and distal demonstratives have an iconic phonemic manifestation; for example, it has been argued that /i/ is related to smallness (small distance). Consonants, however, have not been considered before, or no systematic correspondences have been discovered. The third type, in turn, can be explained by markedness: the distal element is more marked than the proximal demonstrative. Moreover, iconicity is also relevant here: some languages clearly use less linguistic substance for referring to entities close to the speaker, which is manifested in the longer (morpho)phonological form of the distal demonstratives. The fourth type contains different kinds of cases, and systematic generalizations are hard to make.

Keywords: demonstratives, iconicity, language typology, phonology

Procedia PDF Downloads 132
483 Effects of Stokes Shift and Purcell Enhancement in Fluorescence Assisted Radiative Cooling

Authors: Xue Ma, Yang Fu, Dangyuan Lei

Abstract:

Passive daytime radiative cooling is an emerging technology that has attracted worldwide attention in recent years due to its huge potential for cooling buildings without the use of electricity. Various coating materials with different optical properties have been developed to improve daytime radiative cooling performance. However, commercial cooling coatings comprising functional fillers with optical bandgaps within the solar spectral range suffer from severe intrinsic absorption, limiting their cooling performance. Fortunately, it has recently been demonstrated that introducing fluorescent materials into polymeric coatings can convert the absorbed sunlight into fluorescent emission and hence increase the effective solar reflectance and cooling performance. In this paper, we experimentally investigate the key factors for fluorescence-assisted radiative cooling with TiO2-based white coatings. The surrounding TiO2 nanoparticles, which enable spatial and temporal light confinement through multiple Mie scattering, lead to Purcell enhancement of the phosphors in the coating. The photoluminescence lifetimes of two phosphors (BaMgAl10O17:Eu2+ and (Sr, Ba)SiO4:Eu2+) exhibit significant reductions of ~61% and ~23%, indicating Purcell factors of 2.6 and 1.3, respectively. Moreover, smaller Stokes shifts of the phosphors are preferred to further diminish solar absorption. A field test of the fluorescent cooling coatings demonstrates an improvement of ~4% in solar reflectance for the BaMgAl10O17:Eu2+-based fluorescent cooling coating. However, to maximize solar reflectance, a white appearance is produced by multiple Mie scattering from the broad size distribution of the fillers, which is visually monotonous and aesthetically unappealing. Besides, most colored pigments absorb visible light significantly and convert it into non-radiative thermal energy, offsetting the cooling effect; current colored cooling coatings therefore face a compromise between color saturation and cooling effect. To address this problem, we introduced colored fluorescent materials into a white top layer based on SiO2 microspheres, covering a white cooling coating based on TiO2. Compared with colored pigments, fluorescent materials can re-emit the absorbed light, reducing the solar absorption introduced by coloration. Our work investigates the scattering properties of SiO2 dielectric spheres with different diameters and discusses in detail their impact on the photoluminescence properties of the phosphors, paving the way for colored fluorescence-assisted cooling coatings toward application and industrialization.
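
The quoted Purcell factors follow directly from the measured lifetime reductions, taking the factor as the ratio of the unmodified lifetime to the shortened one; a quick consistency check:

```python
# Purcell factor from fractional photoluminescence lifetime reduction:
# F = tau_ref / tau = 1 / (1 - reduction), using the reductions quoted in the abstract.
for phosphor, reduction in [("BaMgAl10O17:Eu2+", 0.61), ("(Sr,Ba)SiO4:Eu2+", 0.23)]:
    purcell = 1.0 / (1.0 - reduction)
    print(f"{phosphor}: lifetime reduced {reduction:.0%} -> Purcell factor ~{purcell:.1f}")
```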

Keywords: solar reflection, infrared emissivity, mie scattering, photoluminescent emission, radiative cooling

Procedia PDF Downloads 67
482 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator

Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase

Abstract:

In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. It is therefore reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using the is-TPG consisted of two non-linear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal, satisfying the noncollinear phase-matching condition, in order to generate a high-power THz wave. The emitted THz wave was irradiated onto the sample, which was raster-scanned by an x-z stage while the frequency was varied, yielding multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam through nonlinear optical parametric effects, and the detection beam intensity was measured using an infrared pyroelectric detector. Identifying reagents in a cardboard box was initially difficult because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification: the intensity of the near-infrared detection beam was converted correctly to the intensity of the THz wave, and a Gaussian spatial filter was introduced for a clearer THz image. Through these improvements in the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides, and the method may also be applied to cases where illicit drugs are hidden in a box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging could be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
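
A minimal sketch of the Gaussian spatial filtering step mentioned above, applied to a placeholder 2D array standing in for one frequency slice of the raster-scanned THz image:

```python
# Gaussian spatial filtering of a THz intensity image (placeholder data).
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical raster-scanned intensity image: 64 x 64 pixels at one THz frequency.
rng = np.random.default_rng(0)
thz_image = rng.random((64, 64))

# Smooth in the two spatial axes only; sigma is a tuning parameter, chosen arbitrarily here.
clean = gaussian_filter(thz_image, sigma=1.5)
print(clean.shape, float(clean.mean()))
```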

Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging

Procedia PDF Downloads 158
481 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring for monitoring product quality and controlling the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators robust to the contamination that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and M-estimators of location with Huber and logistic psi-functions for estimating the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. We find that robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
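
As a hedged illustration of one estimator combination from the study, the sketch below builds Xbar limits from the Hodges-Lehmann location estimator and a simplified Qn-type scale; the data are simulated, and the small-sample correction factors used in practice are omitted.

```python
# Robust Xbar limits from Hodges-Lehmann location and a simplified Qn-type scale
# (illustrative only; finite-sample correction factors are omitted).
import itertools
import numpy as np

def hodges_lehmann(x):
    """Median of all pairwise Walsh averages (x_i + x_j) / 2, i <= j."""
    pairs = [(a + b) / 2 for a, b in itertools.combinations_with_replacement(x, 2)]
    return np.median(pairs)

def qn_scale(x):
    """Qn-type scale: first-quartile pairwise distance, scaled for Gaussian consistency
    (the exact Qn uses a specific order statistic rather than the plain quartile)."""
    dists = [abs(a - b) for a, b in itertools.combinations(x, 2)]
    return 2.2219 * np.quantile(dists, 0.25)

rng = np.random.default_rng(1)
subgroups = rng.normal(10.0, 1.0, size=(25, 5))  # 25 Phase I subgroups of size n = 5
subgroups[3] += 6.0                              # inject an outlying subgroup

n = subgroups.shape[1]
center = np.median([hodges_lehmann(s) for s in subgroups])
sigma = np.median([qn_scale(s) for s in subgroups])  # pooled by median across subgroups
ucl, lcl = center + 3 * sigma / np.sqrt(n), center - 3 * sigma / np.sqrt(n)
print(f"CL={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
```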

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 171
480 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning

Authors: Jasmin Cowin, Amany Alkhayat

Abstract:

This paper concentrates on a comparative analysis of the advantages and limitations of digital learning resources (DLRs) in language education: Virtual Reality (VR), Mobile Learning (M-learning), and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL). In addition, best practices for language teaching and the application of established language teaching methodologies, such as Communicative Language Teaching (CLT), the audio-lingual method, and community language learning, are explored. Education has changed dramatically since the eruption of the pandemic: traditional face-to-face education was disrupted on a global scale, and the rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited to language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language material, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation: students can engage in role-play activities, interact with 3D objects, and take part in activities such as virtual field trips. VR also lends itself to group language exercises in the classroom, with target-language practice in an immersive virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.

Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality

Procedia PDF Downloads 215
479 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School

Authors: Osk Dagsdottir

Abstract:

Background—Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal children and patients requires knowledge of the gait parameters of each specific age group of normal children. However, there is still a lack of gait databases for normal children of different ages. Objectives—This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods—Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years, and three age groups were defined: young child (5-7 years), child (8-11 years), and adolescent (12-16 years). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at their comfortable speed along a 10-meter walkway, and each participant walked up to 20 trials. Three good trials per participant were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, and moments. Moreover, each gait cycle was divided into 8 phases, and the range of motion (ROM) of the pelvis, hip, knee, and ankle joints in three planes of both limbs was calculated using an in-house program. Results—The temporal-spatial variables of the three age groups were compared, and a significant difference (p < 0.05) was found between the groups. Step length and walking speed gradually increased from the young child to the adolescent group, while cadence gradually decreased. The mean and standard deviation (SD) of step length for the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively, and the mean and SD of cadence were 140.11 ± 15.79 step/min, 129 ± 11.84 step/min, and 115.96 ± 6.47 step/min, respectively. Moreover, significant differences in kinematic parameters were observed, both over the whole gait cycle and in individual phases. For example, the ROM of the knee angle in the sagittal plane over the whole cycle was larger in the young child group (65.03 ± 0.52 deg) than in the child group (63.47 ± 0.47 deg). Conclusion—Our results show that there are significant differences between age groups across the gait phases and thus that children's walking performance changes with age. Therefore, it is important for clinicians to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
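
As a minimal sketch of the between-group comparison reported above, the snippet below runs a one-way ANOVA across the three age groups with scipy; the arrays are synthetic stand-ins drawn from the reported means and SDs, not the study's measurements.

```python
# One-way comparison of a gait parameter (step length) across age groups.
# Synthetic stand-in data; the study's raw measurements are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
young = rng.normal(0.502, 0.067, 18)       # 5-7 years, mean/SD from the abstract
child = rng.normal(0.566, 0.061, 20)       # 8-11 years
adolescent = rng.normal(0.672, 0.053, 15)  # 12-16 years

f_stat, p_value = stats.f_oneway(young, child, adolescent)
print(f"F={f_stat:.2f}, p={p_value:.4f}")  # p < 0.05 indicates a group difference
```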

Keywords: action research, creative learning, mathematics education, professional development

Procedia PDF Downloads 91
478 The Effects of Geographical and Functional Diversity of Collaborators on Quality of Knowledge Generated

Authors: Ajay Das, Sandip Basu

Abstract:

Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of the knowledge such collaborators produce. This is surprising because one would expect that a collaborative team with certain aspects of diversity should be able to recombine process elements related to knowledge development that are relatively tacit, but also complementary because of the collaborators' varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to try to capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members' environments is geographical. Collaborators with more geographical distance between them (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development. In the absence of overt monitoring, such collaborators are likely to adopt differing approaches to knowledge development, and the sharing of these varying approaches among collaborators is likely to improve the quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination. For example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), those members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators are in different environments from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be individual patents generated by these firms and the teams involved in their generation. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable, knowledge quality, is important to study since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will, therefore, inform future research and practice in innovation, geographical location, and vertical integration.

Keywords: innovation, manufacturing strategy, knowledge, diversity

Procedia PDF Downloads 329
477 Transverse Behavior of Frictional Flat Belt Driven by Tapered Pulley -Change of Transverse Force Under Driving State–

Authors: Satoko Fujiwara, Kiyotaka Obunai, Kazuya Okubo

Abstract:

Skew is one of the important problems in designing conveyors and transmissions with frictional flat belts: the running belt deviates in the width direction due to a transverse force applied to the belt. Skew often not only degrades the stability of the belt path but also causes damage to the belt and auxiliary machines. However, transverse behavior such as skew has not been discussed quantitatively in detail for frictional belts. The objective of this study is to clarify the transverse behavior of a frictional flat belt driven by a tapered pulley. A commercially available rubber flat belt reinforced with polyamide film was prepared as the test belt; its thickness and length were 1.25 mm and 630 mm, respectively. The test belt was driven between two pulleys made of aluminum alloy, with a diameter of 50 mm and an inter-axial length of 150 mm. Several tapered pulleys were used, with taper angles of 0 deg (for comparison), 2 deg, 4 deg, and 6 deg. In order to investigate the transverse behavior, the transverse force applied to the belt was measured while the skew was constrained under the driving state. The transverse force was measured by a load cell with free rollers contacting the side surface of the belt while the displacement in the belt width direction was constrained. The in-plane bending stiffness of the belt was varied by preparing three belt widths (20, 30, and 40 mm) with correspondingly different observed stiffnesses. The contributions of the in-plane bending stiffness of the belt and the initial inter-axial force to the transverse force were examined experimentally; the inter-axial force was varied by adjusting the distance (about 240 mm) between the two pulleys. The experimental results showed that the transverse force increased with increasing in-plane bending stiffness and initial inter-axial force. The transverse force acting on the belt running on the tapered pulley was classified into multiple components: forces arising from the deflection of the inter-axial force with the change of taper angle, the resultant force of the bending moment applied to the belt winding around the tapered pulley, and the reaction force due to shearing deformation. The calculated transverse force agreed well with the experimental data when those components were formulated, and the largest contribution was identified as the shearing deformation, regardless of the test conditions. This study found that the transverse behavior of a frictional flat belt driven by a tapered pulley can be explained by the summation of these force components.

Keywords: skew, frictional flat belt, transverse force, tapered pulley

Procedia PDF Downloads 133
476 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

While performing verification analyses under the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests to aviation standards, but the test costs are very high. In addition, because of the need to produce coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass fiber reinforced and quartz prepregs were used in the production of the coupons. Tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. There are various theories for this method, which is called progressive failure analysis; because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage points, the resulting force drops, the maximum damage load values, and the bearing strength values are very close, and low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, were investigated with respect to the bearing strength of the composite structure.
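
A schematic sketch of the progressive failure loop described above (load stepping, failure check, stiffness knock-down) is shown below; the failure criterion is a placeholder for the Puck equations, the numbers are illustrative, and reading the 10% coefficient as the retained stiffness fraction is an assumption.

```python
# Schematic progressive failure loop (placeholder criterion; values are illustrative).
# Assumption: the 10% "reduction coefficient" is read here as retaining 10% of the
# original modulus after ply failure; the Puck equations are not reproduced.

def puck_like_criterion(stress, strength):
    """Placeholder for the Puck fiber/inter-fiber failure checks."""
    return stress >= strength

E2 = 12.0e9          # transverse ply modulus, Pa (made-up)
strength = 600.0e6   # bearing-relevant strength, Pa (made-up)
knock_down = 0.10    # retained stiffness fraction after first-ply failure

load, step = 0.0, 10.0e6
while load < 800.0e6:
    stress = load    # stand-in for the FE stress recovery at the bolt hole
    if puck_like_criterion(stress, strength):
        E2 *= knock_down  # degrade the failed ply and stop at first-ply failure
        print(f"first-ply failure at {load / 1e6:.0f} MPa; degraded E2 = {E2 / 1e9:.1f} GPa")
        break
    load += step
```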

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 78
475 Collaborative Governance in Dutch Flood Risk Management: An Historical Analysis

Authors: Emma Avoyan

Abstract:

The safety standards for flood protection in the Netherlands have recently been revised, and it is expected that all major flood-protection structures will have to be reinforced to meet the new standards. The Dutch Flood Protection Programme aims to accomplish this task through innovative integrated projects such as the construction of multi-functional flood defenses, in which flood safety purposes are combined with spatial planning, nature development, emergency management, or other sectoral objectives. The implementation of dike reinforcement projects therefore requires early involvement of, and collaboration between, the public and private sectors and different governmental actors and agencies. The development and implementation of such integrated projects has long been an issue in Dutch flood risk management. This article therefore analyses how cross-sector collaboration within flood risk governance in the Netherlands has evolved over time and how this development can be explained. The integrative framework for collaborative governance is applied as an analytical tool to map the external factors framing possibilities as well as constraints for cross-sector collaboration in the Dutch flood risk domain. Supported by an extensive document and literature analysis, the paper offers insights into how the system context and different drivers, changing over time, either promoted or hindered cross-sector collaboration between the flood protection sector, urban development, nature conservation, and any other sector involved in flood risk governance. The system context refers to the multi-layered and interrelated suite of conditions that influence the formation and performance of complex governance systems, such as collaborative governance regimes, whereas the drivers initiate and enable the overall process of collaboration. In addition, by applying a process-tracing method, we identify a causal and chronological chain of events shaping cross-sectoral interaction in Dutch flood risk management. Our results indicate that, in order to evaluate the performance of complex governance systems, it is important first to study the system context that shapes them. A clear understanding of the system conditions and drivers for collaboration gives insight into the possibilities of, and constraints on, the effective performance of complex governance systems. The performance of the governance system is affected by the system conditions, while the governance system can in turn change those conditions. Our results show that the sequence of changes in the system conditions and drivers over time shapes how cross-sector interaction in the Dutch flood risk governance system happens now. Moreover, we have traced the potential of this governance system to shape and change the system context.

Keywords: collaborative governance, cross-sector interaction, flood risk management, the Netherlands

Procedia PDF Downloads 111
474 Multi-Criteria Nautical Ports Capacity and Services Planning

Authors: N. Perko, N. Kavran, M. Bukljas, I. Berbic

Abstract:

This paper presents the results of research on a proposed methodology for nautical port capacity planning that introduces a multi-criteria approach with defined criteria and impacts for the Adriatic Sea. The purpose was to analyse the determinants, i.e., the characteristics of the infrastructure and services of allocated nautical port capacity, which are crucial for the successful operation of nautical ports, especially now, due to the COVID-19 pandemic. Given the importance of defined priorities, short-term and long-term planning is essential not only for the development of nautical tourism but also for the development of the maritime system; unfortunately, such planning is not always carried out. Evaluation of resource use should follow from a detailed analysis of all aspects of the resources, bearing in mind that nautical tourism should use resources in a sustainable manner and generate effects in the tourism and maritime sectors. Consequently, the identified multiplier effect of nautical tourism, which should be defined and quantified in detail, should be one of the major competitive products on the Croatian Adriatic and in the Mediterranean. Research on nautical tourism is necessary to quantify these effects and to develop the required planning system. In the future, the greatest threat to the long-term sustainable development of nautical tourism may be its further uncontrolled, unlimited, and undirected development, especially under the pressure of demand for new moorings in the Mediterranean that markedly exceeds supply. The results of this research are applicable to nautical port management and to decision-makers in maritime transport system development. This paper presents the implemented research and the resulting methodology for nautical port capacity planning, namely multi-criteria decision-making for port capacity planning. The proposed methodological approach includes four criteria (spatial-transport, cost-infrastructure, ecological, and organizational criteria and additional services). The importance of the criteria and sub-criteria is evaluated and used as the basis for a sensitivity analysis of their importance. Based on the analysis of the identified and quantified importance of the criteria and sub-criteria, as well as the sensitivity analysis and the analysis of changes in the quantified importance, scientific and applicable results are presented. These results are of practical use to the management of nautical ports in planning capacity increases and further development, and in adapting existing nautical ports. The research is applicable and replicable in other seas, and the results are especially important and useful in the challenging maritime development framework of the COVID-19 pandemic.
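
A compact sketch of the weighted multi-criteria scoring and the weight-sensitivity check is shown below; the criteria names follow the abstract, while the port scores and weights are invented for illustration.

```python
# Weighted multi-criteria scoring of nautical port alternatives with a simple
# sensitivity check on the criterion weights (scores and weights are illustrative).
import numpy as np

criteria = ["spatial-transport", "cost-infrastructure", "ecological", "organizational-services"]
weights = np.array([0.35, 0.25, 0.25, 0.15])  # must sum to 1
ports = {"Port A": [0.8, 0.6, 0.7, 0.5], "Port B": [0.6, 0.9, 0.5, 0.7]}

for name, scores in ports.items():
    print(name, "score:", round(float(weights @ np.array(scores)), 3))

# Sensitivity: perturb each weight by +/-10% (renormalized) and watch the ranking.
for i, c in enumerate(criteria):
    for delta in (-0.1, 0.1):
        w = weights.copy()
        w[i] *= 1 + delta
        w /= w.sum()
        ranking = sorted(ports, key=lambda p: -(w @ np.array(ports[p])))
        print(f"{c} {delta:+.0%}: best = {ranking[0]}")
```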

Keywords: Adriatic Sea, capacity, infrastructures, maritime system, methodology, nautical ports, nautical tourism, service

Procedia PDF Downloads 167
473 Investigation of the Possible Correlation of Earthquakes with a Red Tide Occurrence in the Persian Gulf and Oman Sea

Authors: Hadis Hosseinzadehnaseri

Abstract:

The red tide is a kind of algal bloom that causes various problems, at different scales, for human life and the environment, and it has become one of the serious global concerns in the field of oceanography in recent decades. This phenomenon has affected Iran's waters, especially the Persian Gulf, over the last few years. Collecting data associated with this phenomenon and comparing them across different parts of the world is a practical way to study and control it. The factors driving this phenomenon either increase the nutrients required by the algae or provide a favorable environment for blooming. In this study, we examined the possible relation between earthquakes and harmful algal blooms in the waters of the Persian Gulf by comparing earthquake data with recorded red tides. On the one hand, earthquakes can cause changes in seawater temperature, which contributes to creating a suitable environment; on the other hand, they can increase the availability of nutrients in the water and their transport from the seabed, so they may play a principal role in the development of red tide occurrences. Comparing the spatial-temporal distribution maps of earthquakes and deadly red tides in the Persian Gulf and Oman Sea supports the hypothesis that there is a meaningful relation between these two distributions, and comparing the number of earthquakes around the world with the number of red tides in many parts of the world also indicates a correlation between the two. Given the numerous earthquakes in southern Iran, especially in recent years, this should be taken as a warning of the possible re-occurrence of a critical, large-scale red tide: in 2008, when the number of recorded earthquakes was higher than in nearby years, the red tide covered about 140,000 square kilometers of the Persian Gulf and the entire Oman Sea and persisted in the area for 10 months, which is considered a record among algal blooms worldwide. In this paper, we derive a logical and reasonable relation between earthquake frequency and the occurrence of this phenomenon by compiling statistics on earthquakes in southern Iran from 2000 to the end of the first half of 2013, collecting statistics on red tide occurrences in the region, and examining similar data from different parts of the world. As shown in Figure 1, according to the survey of the earthquake data, the largest number of earthquakes in southern Iran occurs in the fourth Gregorian calendar month, April, coinciding with Ordibehesht and Khordad in the Persian calendar, and then in the tenth Gregorian calendar month, October, coinciding with Aban and Azar in the Persian calendar.

Keywords: red tide, earth quake, persian gulf, harmful algae bloom

Procedia PDF Downloads 476
472 System Analysis on Compact Heat Storage in the Built Environment

Authors: Wilko Planje, Remco Pollé, Frank van Buuren

Abstract:

An increased share of renewable energy sources in the built environment implies the use of energy buffers to match supply and demand and to prevent overloading of existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for incorporation in future installations as an alternative to regular thermal buffers, owing to their high energy density (1-2 GJ/m³). In order to determine the feasibility of TCM-based systems at the building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for apartments/multi-storey buildings in the Dutch situation, and the capacity, volume, and financial costs are calculated. The simulation includes options for current and future wind power (sea and land) and local roof-mounted PV or solar thermal systems. The compact thermal buffer and, optionally, an electric battery (typically 10 kWhe) form the local storage elements for energy matching and peak-shaving purposes. In addition, electrically driven heat pumps (air/ground source) can be included for efficient heat generation in power-to-heat operation. The total local installation provides space heating, domestic hot water, and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025; the energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacities (a permanent 1 kWe per household), the spatial requirements for the thermal buffer (< 2.5 m³ per household), and a desired minimum 90% share of renewable energy in total household consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4-5 GJth per household. This buffer is combined with a 10 kWhe battery and an air-source heat pump system. Compact thermal buffers of less than 1 GJ (typically 0.5-1 m³) become possible when the installed wind power is increased by a factor of 5, and with a 15-fold increase in installed wind power, compact heat storage devices compete with 1,000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades in combination with well-retrofitted low-energy residences, based on current trends in installed renewable energy capacity.
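
The buffer sizes above can be sanity-checked with simple arithmetic: at a TCM energy density of 1-2 GJ/m³, a 4-5 GJ buffer occupies 2-5 m³, so only the denser end of the TCM range meets the < 2.5 m³ per household constraint. A minimal check:

```python
# Quick check of TCM buffer volume against the spatial requirement (values from the abstract).
capacity_gj = (4.0, 5.0)      # required thermal capacity per household, GJ
density_gj_m3 = (1.0, 2.0)    # TCM energy density range, GJ/m^3
for cap in capacity_gj:
    for rho in density_gj_m3:
        vol = cap / rho
        print(f"{cap} GJ at {rho} GJ/m^3 -> {vol:.1f} m^3 (target < 2.5 m^3)")
```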

Keywords: compact thermal storage, thermochemical material, built environment, renewable energy

Procedia PDF Downloads 223
471 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe that does not invoke space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling through space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, to experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually without altering their momenta as drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s (600 km/s) relative to the CMB, and in the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α = 4σ/c is the radiation constant (σ being the Stefan-Boltzmann constant). The observed CMB dipole therefore implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross-section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross-section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
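
The CMB figures used above (about 410 photons per cm³, an average photon energy of about 6.3×10⁻⁴ eV, and the photon-gas pressure) follow from the standard blackbody formulas; a sketch that reproduces them at T = 2.725 K as a consistency check:

```python
# Blackbody photon-gas check of the CMB figures quoted in the abstract (T = 2.725 K).
import math
from scipy.constants import c, hbar, k, e
from scipy.special import zeta

T = 2.725
a_rad = math.pi**2 * k**4 / (15 * hbar**3 * c**3)  # radiation constant, ~7.57e-16 J m^-3 K^-4
energy_density = a_rad * T**4                       # E_gamma = a T^4
pressure = energy_density / 3                       # P_gamma = E_gamma / 3
number_density = (2 * zeta(3) / math.pi**2) * (k * T / (hbar * c))**3
mean_energy_ev = energy_density / number_density / e

print(f"n   ~ {number_density / 1e6:.0f} photons/cm^3")  # ~410 per cm^3
print(f"<E> ~ {mean_energy_ev:.1e} eV")                  # ~6.3e-4 eV
print(f"P   ~ {pressure:.2e} Pa")
```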

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 109
470 Media Facades Utilization for Sustainable Tourism Promotion in Historic Places: Case Study of the Walled City of Famagusta, North Cyprus

Authors: Nikou Javadi, Uğur Dağlı

Abstract:

The importance of culture and tourism in the attractiveness and competitiveness of countries is central, and many regions are showcasing their cultural assets, tangible and intangible, as a means of creating comparative advantages in tourism and of producing a distinctive place in response to the pressures of globalization. Culture and tourism are interlinked because of their obvious synergy and growth potential, and cultural tourism is a crucial, fast-growing global tourism market. Regions can develop significant links between culture and tourism to increase their attractiveness as places to visit, live, and invest in, thereby increasing their competitiveness. Accordingly, a new and creative approach to historical areas as cultural, value-based destinations can improve their capacity to promote tourism. Furthermore, in the 21st century, media have become one of the most important factors affecting the development of urban areas, including public places. As a result of the digital revolution, re-imaging and re-linking public places through media is essential to create more interaction between public spaces and users, with interactive media displays and urban screens among the most important of these media. This interaction can transform an urban space from being neglected into a more interactive space for users, especially pedestrians. The paper focuses on the Walled City of Famagusta, which, like many other historic quarters elsewhere in the world, is in a process of decay and deterioration, and its functionally distinctive areas are severely threatened by physical, functional, locational, and image obsolescence to varying degrees. Focusing the future development of this area on tourism promotion can therefore be an appropriate decision for monument enhancement and the spatial quality of the Walled City of Famagusta. This paper aims to identify the effects of these new digital factors in transforming public spaces, especially in historic urban areas, to promote creative tourism. Accordingly, two analysis methods are used alongside a theoretical review: the first is an on-site case study, and the second is a closed-ended questionnaire testing many of the concepts raised in this paper. The physical analysis on site was carried out to evaluate the restoration of the Walled City for touristic purposes, while the theoretical review provides background to the subject and clarifies the factors that attract tourists.

Keywords: historical areas, media façade, sustainable tourism, Walled city of Famagusta

Procedia PDF Downloads 301
469 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose

Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng

Abstract:

Taiwan is an island country deeply affected by the monsoon. Under climate change, the frequency of extreme rainstorms brought by typhoons has increased since 2000. When an extreme rainstorm comes, it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government treats this as an urgent issue. In the past, urban land use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased, and most people live in urban areas. This means urban areas are highly vulnerable yet cannot cope with the surface runoff and flooding. However, building detention ponds by conventional hydraulic engineering is not feasible in urban areas: land expropriation makes detention pond construction prohibitively expensive there, and the government cannot afford it. Therefore, the flooding management strategy in urban areas should use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function, and as multi-use land, public facilities land can also demonstrate the combination of land use planning and water management. To this end, this research generalizes, through a literature review, the factors for the multi-use of public facilities land as flood-detention space. The factors can be divided into two categories: environmental factors and conditions of public facilities. There are three environmental factors: terrain elevation, inundation potential, and distance from the drainage system. On the other hand, there are six factors for the conditions of public facilities, including area, building coverage, the maximum available ratio, etc. Each factor is weighted according to its characteristics for the land use suitability analysis. This research selects the rules of combination by logical combination, after which sites can be classified into three suitability levels. The three suitability levels are then input into a physiographic inundation model to simulate and evaluate their flood-detention performance. This study attempts to respond to an urgent issue in urban areas and establishes a model for the multi-use of public facilities land as flood-detention space through a systematic research process. The results show which combination of suitability levels is more efficacious. Moreover, the model not only takes the perspective of urban planners but also incorporates the point of view of water agencies. These findings may serve as a basis for land use indicators and as decision-making references for the government agencies concerned.
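
A toy sketch of the weighting and logical-combination step described above is given below; the parcel scores, weights, and level thresholds are invented for illustration and are not the study's values.

```python
# Toy combination of environmental and facility factors into three suitability levels
# for flood-detention use of public facilities land (invented scores and thresholds).
parcels = {
    "school site": {"env": 0.8, "facility": 0.7},
    "park":        {"env": 0.6, "facility": 0.9},
    "parking lot": {"env": 0.3, "facility": 0.4},
}

def suitability(env, facility):
    combined = 0.5 * env + 0.5 * facility  # equal category weights, purely illustrative
    if combined >= 0.7:
        return "level 1 (high)"
    if combined >= 0.5:
        return "level 2 (medium)"
    return "level 3 (low)"                 # low-suitability parcels screened out

for name, f in parcels.items():
    print(name, "->", suitability(f["env"], f["facility"]))
```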

Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model

Procedia PDF Downloads 332
468 Research on the Evolution of Public Space in Tourism-Oriented Traditional Rural Settlements

Authors: Yu Zhang, Mingxue Lang, Li Dong

Abstract:

The living environment of rural settlements, shaped by hundreds of years of slow succession, is a crucial carrier of China's long history of culture and national wisdom. In recent years, the spatial evolution of traditional rural settlements has been propelled by the intervention of tourism development, in which public architecture and outdoor activity areas, together serving as the major places for villagers' and tourists' social activities, are an important characterization of settlement spatial evolution. Upgrading traditional public space and studying the layout of new public space can effectively promote the tourism industry development of traditional rural settlements. This article takes Qi County, a China Traditional Culture Village, as its example and uses Remote Sensing (RS), Geographic Information System (GIS) and Space Syntax technologies to study the evolution of the public space of tourism-oriented traditional rural settlements in four steps. First, the 2003 and 2016 image data of Qi County were acquired using the remote sensing application ERDAS 8.6. Second, the basic maps of Qi County, including its land use map, were vectorized with ArcGIS 9.3 and associated with architectural and site information gathered through field research. Third, the accessibility and connectivity of the settlements' inner space were analyzed using space syntax, and cross-correlations were run with the public space data of 2003 and 2016. Finally, the evolution laws of the settlements' public space were summarized, and the upgrade pattern of traditional public space and the location plan for new public space were studied. Major findings of this paper include the following. First, the location layout of traditional public space is strongly associated with the space syntax calculation results, further confirming the objective value of space syntax in expressing spatial and social relations. Second, the intervention of tourism development has a remarkable impact on the location of public space in traditional rural settlements. Third, traditional public space shows signs of both strengthening and decline, forming a diversified upgrade pattern that meets different tourism functional needs. Finally, space syntax provides an objective basis for the location planning of new public space that meets the needs of tourism service. Tourism development has a significant impact on the evolution of public space in traditional rural settlements. Both types of public space, architecture and sites, have changed in quantity, location, dimension and function after the intervention of tourism development. Function upgrades of traditional public space and the scientific layout of new public space are two important ways of achieving the sustainable development of tourism-oriented traditional rural settlements.
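
To illustrate the space syntax step, the sketch below computes connectivity (degree) and an integration proxy (closeness centrality, which is inversely related to mean depth) on a toy axial graph with networkx; the real analysis would run on the settlement's axial map, and the node names here are invented.

```python
# Simplified space syntax measures on an axial-map graph: nodes are axial
# lines/spaces, edges are their intersections.
import networkx as nx

axial = nx.Graph([
    ("main_street", "temple_square"), ("main_street", "market_lane"),
    ("temple_square", "ancestral_hall"), ("market_lane", "stage_yard"),
    ("stage_yard", "ancestral_hall"), ("main_street", "village_gate"),
])

connectivity = dict(axial.degree())           # how many spaces each space meets
integration = nx.closeness_centrality(axial)  # higher = shallower from all spaces

for space in axial:
    print(f"{space:15s} connectivity={connectivity[space]} "
          f"integration={integration[space]:.2f}")
```

Spaces scoring high on both measures are candidates for the strongly used public spaces; comparing the two epochs' graphs is what reveals strengthening or decline.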

Keywords: public space evolution, Qi county, space syntax, tourism oriented, traditional rural settlements

Procedia PDF Downloads 320
467 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings

Authors: Stefan Peters, Phillip Roetman

Abstract:

The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become a more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local action to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed and implemented a framework for wildlife habitat hotspot and habitat corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality, and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and the ArcGIS Pro ModelBuilder, to establish an automated, reproducible and adjustable geoprocessing workflow adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research was applied to the case study of Burnside, a local council in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user, expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.
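
As one way to make the corridor step concrete, the sketch below combines weighted factor rasters into a habitat-potential surface and derives a corridor as a least-cost path with scikit-image. Least-cost routing is a common stand-in for corridor modelling and is not necessarily the authors' exact method; the weights and rasters are illustrative.

```python
# Habitat potential from a weighted raster overlay, then a least-cost path
# between two hotspots as an approximate wildlife corridor.
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(42)
canopy = rng.random((60, 60))
verge = rng.random((60, 60))
traffic = rng.random((60, 60))

habitat = 0.5 * canopy + 0.3 * verge - 0.2 * traffic  # assumed weights
cost = habitat.max() - habitat + 0.01                 # good habitat = cheap to cross

# least-cost corridor between two habitat hotspots (toy cell coordinates)
path, total_cost = route_through_array(cost, (5, 5), (55, 55),
                                       fully_connected=True)
print(len(path), round(total_cost, 2))  # corridor length in cells, total cost
```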

Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor

Procedia PDF Downloads 89
466 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
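
The core building block behind FNO-style models is the spectral convolution (Fourier) layer, which the implicit variant reuses across iterations to deepen the network without adding parameters. Below is a minimal, self-contained 1D sketch in PyTorch; the channel widths and mode counts are illustrative assumptions, not the authors' IU-FNO configuration.

```python
# A 1D spectral-convolution layer of the kind stacked in FNO-style models:
# transform to Fourier space, multiply retained low modes by learned complex
# weights, transform back.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes  # number of low Fourier modes kept
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):            # x: (batch, in_ch, n)
        x_ft = torch.fft.rfft(x)     # -> (batch, in_ch, n//2 + 1)
        out_ft = torch.zeros(x.shape[0], self.weight.shape[1],
                             x_ft.shape[-1], dtype=torch.cfloat,
                             device=x.device)
        # multiply retained low-frequency modes by learned complex weights
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

layer = SpectralConv1d(in_ch=8, out_ch=8, modes=16)
u = torch.randn(4, 8, 128)   # a toy batch of 1D fields
print(layer(u).shape)        # torch.Size([4, 8, 128])
```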

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 53
465 Magnetic Navigation in Underwater Networks

Authors: Kumar Divyendra

Abstract:

Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward it to the AUVs over optical links when an AUV is in range. This reduces the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer vision-based navigation, have been proposed, but these systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature.
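
A minimal sketch of the hop-distance coordinate scheme described above, assuming a random 3D geometric graph as the network; each node's coordinate vector is its hop count to each surface landmark, and an AUV steps greedily toward the neighbour nearest the target in that coordinate space.

```python
# Hop-distance "coordinates" for landmark-based navigation in a toy UWSN.
import networkx as nx
import numpy as np

G = nx.random_geometric_graph(40, radius=0.4, dim=3, seed=1)
# treat the five shallowest nodes (largest z) as surface landmarks
landmarks = sorted(G, key=lambda n: G.nodes[n]["pos"][2])[-5:]

# BFS hop counts from every landmark; unreachable nodes get a large sentinel
dists = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}
coords = {n: np.array([dists[l].get(n, 99) for l in landmarks]) for n in G}

def greedy_step(node, target):
    # hop to the neighbour whose coordinate vector is nearest the target's
    return min(G.neighbors(node),
               key=lambda v: np.linalg.norm(coords[v] - coords[target]))

start = max(G, key=G.degree)            # a well-connected starting node
print(greedy_step(start, landmarks[0])) # next hop toward the first landmark
```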

Keywords: clustering, deep learning, network backbone, parallel computing

Procedia PDF Downloads 76
464 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they record many interesting dealings of users with the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is hard because of the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data about users and the site which is not being put to good use. Retrieving interesting information from logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation measures, the algorithms self-evaluate in order to feed better parameter values back into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module each cluster is fed to an Associative Rule Learning Module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
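
A minimal sketch of the clustering-with-self-evaluation loop described above, using scikit-learn's DBSCAN and a Gaussian mixture (EM) with the silhouette coefficient; the session vectors and parameter grid are toy assumptions, not the paper's feature pipeline.

```python
# DBSCAN over a parameter grid with silhouette-based self-evaluation, plus an
# EM (Gaussian mixture) pass, on toy web-session count vectors.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# toy sessions: URL-hit count vectors from two distinct behaviour profiles
sessions = np.vstack([rng.poisson(3, (50, 20)),
                      rng.poisson(8, (50, 20))]).astype(float)

best = None
for eps in (9.0, 11.0, 13.0, 15.0, 17.0):   # self-evaluation over a grid
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    if len(set(labels) - {-1}) > 1:         # silhouette needs >= 2 clusters
        score = silhouette_score(sessions, labels)
        if best is None or score > best[0]:
            best = (score, eps)
if best:
    print("best DBSCAN eps=%.0f, silhouette=%.2f" % (best[1], best[0]))

gmm = GaussianMixture(n_components=2, random_state=0).fit(sessions)  # EM step
print("EM cluster sizes:", np.bincount(gmm.predict(sessions)))
```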

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 267
463 Metagenomic Assessment of the Effects of Genetically Modified Crops on Microbial Ecology and Physicochemical Properties of Soil

Authors: Falana Yetunde Olaitan, Ijah U. J. J, Solebo Shakirat O.

Abstract:

Genetically modified (GM) crops are already phenomenally successful and are grown worldwide in more than eighteen countries on more than 67 million hectares. In October 2018, Nigeria approved Bacillus thuringiensis (Bt) cotton and maize, hence the need to carry out environmental risk assessment studies. A total of fifteen 4 L octagonal ceramic pots were filled with 4 kg of soil each and placed on a bench in three rows of five pots: the first-row pots were planted with GM cotton seeds, the second-row pots with non-GM cotton seeds, and the third row served as the control, all in the screen house. Soil samples for metagenomic DNA extraction were collected at random at monthly intervals after planting, at a distance of 2 mm from the plants' roots and at a depth of 10 cm, using a sterile spatula. Soil samples for physicochemical analysis were collected before planting and after harvesting the GM and non-GM crops, as well as from the control soil. The DNA was extracted, quantified and sequenced. Sample 1A (DNA from GM cotton soil at the 1st interval) gave the lowest sequence reads with 0.853M, while sample 2B (DNA from GM cotton soil at the 2nd interval) gave the highest with 5.785M; the others gave between 1.8M and 4.7M. The sample treatments were grouped into four: Group 1 (GM cotton soil from intervals 1 to 3) had between 800,000 and 5,700,000 strains of microbes (SOM), Group 2 (non-GM cotton soil from intervals 1 to 3) had between 1,400,600 and 4,200,000 SOM, Group 3 (control soil) had between 900,000 and 3,600,000 SOM, and Group 4 (initial soil) had between 3,700,000 and 4,000,000 SOM. The microbes observed were predominantly bacteria (including archaea), fungi and dark matter, alongside protists and phages. The predominant bacterial groups were the Terrabacteria (Bacillus funiculus, Bacillus sp.), the Proteobacteria (Microvirga massiliensis, Sphingomonas sp.) and the Archaea (Nitrososphaera sp.), while the fungi were Aspergillus fischeri and Fusarium falciforme. Comparative analysis between groups was done using Jaccard PERMANOVA beta diversity analysis at a P-value of not more than 0.76, and no significant pair was found. The pH values for the initial, GM cotton, non-GM cotton and control soils were 6.28, 6.26, 7.25 and 8.26, and the percentage moisture was 0.63, 0.78, 0.89 and 0.82, respectively, while the percentage nitrogen was 17.79, 1.14, 1.10 and 0.56, respectively. Other parameters included varying concentrations of potassium (0.46, 1,284.47, 1,785.48, 1,252.83 mg/kg) and phosphorus (18.76, 17.76, 16.87, 15.23 mg/kg) for the four treatments, respectively. The soil consisted mainly of silt (32.09 to 34.66%) and clay (58.89 to 60.23%), reflecting a silty-clay soil texture. The results were then tested with ANOVA at a P-value of 0.05, and no pair was found to be significant either. The results suggest that GM crops have no significant effect on the microbial ecology and physicochemical properties of the soil and, in turn, no direct or indirect effects on human health.
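
For readers unfamiliar with the beta-diversity test used, the sketch below runs a Jaccard-distance PERMANOVA with scikit-bio on a made-up presence/absence matrix; the data and group labels are stand-ins for the study's sequence tables.

```python
# Jaccard + PERMANOVA beta-diversity comparison between treatment groups.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(0)
# toy presence/absence matrix: 12 samples x 200 taxa
presence = rng.integers(0, 2, size=(12, 200)).astype(bool)
groups = ["GM"] * 3 + ["nonGM"] * 3 + ["control"] * 3 + ["initial"] * 3
ids = [f"s{i}" for i in range(12)]

dm = DistanceMatrix(squareform(pdist(presence, metric="jaccard")), ids)
print(permanova(dm, grouping=groups, permutations=999))  # pseudo-F and p-value
```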

Keywords: genetically modified crop, microbial ecology, physicochemical properties, metagenomics, DNA, soil

Procedia PDF Downloads 127
462 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a ritual phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydro-generated disasters. This has become a barrier and a challenge in the development process driven by urbanization, high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies, which disrupts natural hazard-protection mechanisms such as drainage channels, wetlands and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies and plans that the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. Urban futures in these low-elevation coastal zones face unprecedented risks and threats. The study focuses on three major pillars of resilience: Recover, Resist and Restore. This process of getting ready to handle the situation bridges the gap between disaster response management and risk reduction, and it requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stages involved mapping the urban water morphology with respect to spatial growth, which gave insight into the water bodies that have gone missing during the process of urbanization. The major finding of the study was that missing links in the traditional water harvesting network were a major reason for this man-made disaster. The research conceptualized a sponge city framework that would guide growth through institutional frameworks at different levels. The next stage focused on understanding the implementation process at various stages to ensure the paradigm shift. The concepts were demonstrated at a neighborhood level, showing where, how and what the functions and benefits of each component are. Design decisions were quantified in terms of rainwater harvesting and surface runoff: how much water is collected, and how it could be collected, stored and reused. The study further recommends Water Mitigation Spaces that would revive the traditional harvesting network.
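
To illustrate the kind of quantification involved, a back-of-the-envelope sketch follows, using the common relation: harvestable volume = runoff coefficient × rainfall depth × catchment area. All figures are illustrative assumptions, not the study's measurements.

```python
# Rough rainwater-harvest quantification for a neighbourhood-level design.

def harvest_volume_m3(area_m2, annual_rainfall_mm, runoff_coefficient):
    """Annual harvestable volume in cubic metres."""
    return runoff_coefficient * (annual_rainfall_mm / 1000.0) * area_m2

# e.g. a 1,000 m2 catchment under an assumed ~1,400 mm/yr of rainfall
roof = harvest_volume_m3(1000, 1400, 0.85)    # hard roof, high runoff
ground = harvest_volume_m3(1000, 1400, 0.30)  # permeable open ground
print(f"roof: {roof:.0f} m3/yr, open ground: {ground:.0f} m3/yr")
```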

Keywords: flooding, man made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 129
461 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because the extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to accessing finance, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
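
A hedged sketch of the fixed-effects/random-effects panel set-up described above, using the linearmodels Python package on toy data; the variable names, proxies and interaction term are illustrative assumptions, and the Hausman-Taylor estimator is omitted here.

```python
# Fixed-effects and random-effects panel regressions of GDP growth on a
# digitalization proxy, a financial-development proxy, and their interaction.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [[f"country{i}" for i in range(20)], range(2000, 2020)],
    names=["country", "year"])
df = pd.DataFrame({
    "gdp_growth": rng.normal(3, 2, len(idx)),
    "ict_imports": rng.normal(5, 1, len(idx)),       # digitalization proxy
    "private_credit": rng.normal(30, 10, len(idx)),  # financial development proxy
}, index=idx)
df["interaction"] = df["ict_imports"] * df["private_credit"]

exog = df[["ict_imports", "private_credit", "interaction"]]
fe = PanelOLS(df["gdp_growth"], exog,
              entity_effects=True, time_effects=True).fit()
re = RandomEffects(df["gdp_growth"], exog).fit()
print(fe.params, re.params, sep="\n")
```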

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 81
460 Architecture for Hearing Impaired: A Study on Conducive Learning Environments for Deaf Children with Reference to Sri Lanka

Authors: Champa Gunawardana, Anishka Hettiarachchi

Abstract:

Conducive architecture for learning environments is an area of interest for many scholars around the world. Loss of the sense of hearing leads to the assumption that deaf students are visual learners. Comprehending the favorable non-auditory attributes of architecture can lead to effective, rich and friendly learning environments for the hearing impaired. The objective of the current qualitative investigation is to explore the nature and parameters of deaf children's sense of place in order to support optimal learning. The investigation was conducted with hearing-impaired children (age: between 8 and 19; gender: 15 male and 15 female) of the Yashodhara Deaf and Blind School at Balangoda, Sri Lanka. A sensory ethnography study was adopted to identify the nature of perception and the parameters of the most preferred and least preferred spaces of the learning environment. The common perceptions behind the most preferred places in the learning environment were found to be calm and quiet, a sense of freedom, volumes characterized by openness and spaciousness, a sense of safety, wide spaces, privacy and belongingness, being less crowded and undisturbed, the availability of natural light and ventilation, a sense of comfort, and the view of green in the surroundings. On the other hand, the least preferred spaces were perceived as dark, gloomy, warm and crowded, lacking freedom, smelling bad, unsafe, and affected by glare. The perception of space by the deaf, considering the hierarchy of sensory modalities involved, was identified as light-colour perception (34%), sight-visual perception (32%), touch-haptic perception (26%), smell-olfactory perception (7%) and sound-auditory perception (1%), respectively. A sense of freedom (32%) and a sense of comfort (23%) were the predominant psychological parameters leading to an optimal sense of place as perceived by the hearing impaired; privacy (16%), rhythm (14%), belonging (9%) and safety (6%) were found to be secondary factors. Open and wide flowing spaces without visual barriers, transparent doors and windows or open portholes to ease communication, comfortable volumes, naturally ventilated spaces, natural lighting or diffused artificial lighting conditions without glare, sloping walkways, wider stairways, and walkways and corridors with ample distance for signing were identified as positive characteristics of the learning environment investigated.

Keywords: deaf, visual learning environment, perception, sensory ethnography

Procedia PDF Downloads 215
459 Risks and Values in Adult Safeguarding: An Examination of How Social Workers Screen Safeguarding Referrals from Residential Homes

Authors: Jeremy Dixon

Abstract:

Safeguarding adults forms a core part of social work practice. The Government in England and Wales has made efforts to standardise practice through the Care Act 2014. The Act states that local authorities have duties to make inquiries in cases where an adult with care or support needs is experiencing, or is at risk of, abuse and is unable to protect themselves from abuse or neglect. Despite the importance given to safeguarding adults within the law, there remains little research about how social workers make such decisions on the ground. This presentation reports on findings from a pilot research study conducted within two social work teams in a local authority in England. The objective of the project was to find out how social workers interpreted the safeguarding duties laid out by the Care Act 2014, with a particular focus on how workers assessed and managed risk. Ethnographic research methods were used throughout the project. This paper focusses specifically on decisions made by workers in the assessment team, reporting on qualitative observation of, and interviews with, five workers within this team. Drawing on governmentality theory, this paper analyses the techniques used by workers to manage risk from a distance. A high proportion of safeguarding referrals came from care workers or managers in residential care homes. Social workers conducting safeguarding assessments were aware that they had a duty to work in partnership with these agencies; however, their duty to safeguard adults also meant that they needed to view them as potential abusers. In making judgments about when it was proportionate to refer for a safeguarding assessment, workers drew on a number of common beliefs about residential care workers, which were then tested in conversations with them. Social workers held the belief that residential homes acted defensively, leading them to report any accident or danger. Social workers therefore encouraged residential workers to consider whether the statutory criteria had been met and to use their own procedures to manage risk. In addition, social workers assessed the workers' motives, specifically whether they were using safeguarding procedures as a shortcut to avoid other assessments or as a means of accessing extra resources. Where potential abuse was identified, social workers encouraged residential homes to use disciplinary policies as a means of isolating and managing risk. The study has implications for understanding risk within social work practice. It shows that whilst social workers use law to govern individuals, these laws are interpreted against cultural values; additionally, workers draw on assumptions about the culture of others.

Keywords: adult safeguarding, governmentality, risk, risk assessment

Procedia PDF Downloads 262