Search results for: regional features
3877 Advancing the Hi-Tech Ecosystem in the Periphery: The Case of the Sea of Galilee Region
Authors: Yael Dubinsky, Orit Hazzan
Abstract:
There is a constant need for hi-tech innovation to be decentralized to peripheral regions. This work describes how we applied design science research (DSR) principles to define what we refer to as the Sea of Galilee (SoG) method. The goal of the SoG method is to harness existing and new technological initiatives in peripheral regions to create a socio-technological network that can initiate and maintain hi-tech activities. The SoG method consists of a set of principles, a stakeholder network, and actual hi-tech business initiatives, including their infrastructure and practices. The three cycles of DSR, the Relevance, Design, and Rigor cycles, lay out a research framework to sharpen the requirements, collect data from case studies, and iteratively refine the SoG method based on the existing knowledge base. We propose that the SoG method can be deployed by regional authorities that wish to be considered smart regions (an extension of the notion of smart cities).
Keywords: design science research, socio-technological initiatives, Sea of Galilee method, periphery stakeholder network, hi-tech initiatives
Procedia PDF Downloads 131
3876 A Task Scheduling Algorithm in Cloud Computing
Authors: Ali Bagherinia
Abstract:
An efficient task scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing has new features, such as flexibility and virtualization. In this paper, we propose a two-level task scheduling method based on load balancing in cloud computing. This task scheduling method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator demonstrate.
Keywords: cloud computing, task scheduling, virtualization, SLA
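The two-level, load-balancing idea described in this abstract can be illustrated with a minimal sketch: level one picks the least-loaded host, level two picks the least-loaded virtual machine on that host. This is a simplified illustration, not the algorithm evaluated in CloudSim, and all class names and load units are invented for the example.

```python
# Minimal sketch of a two-level, load-balancing task scheduler.
# Illustrative only; not the algorithm evaluated in the CloudSim study.

from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualMachine:
    vm_id: str
    load: float = 0.0              # currently assigned work (abstract units)


@dataclass
class Host:
    host_id: str
    vms: List[VirtualMachine] = field(default_factory=list)

    @property
    def load(self) -> float:
        return sum(vm.load for vm in self.vms)


def schedule_task(hosts: List[Host], task_length: float) -> VirtualMachine:
    """Level 1: pick the least-loaded host. Level 2: pick its least-loaded VM."""
    host = min(hosts, key=lambda h: h.load)
    vm = min(host.vms, key=lambda v: v.load)
    vm.load += task_length
    return vm


if __name__ == "__main__":
    cluster = [
        Host("h0", [VirtualMachine("h0-vm0"), VirtualMachine("h0-vm1")]),
        Host("h1", [VirtualMachine("h1-vm0"), VirtualMachine("h1-vm1")]),
    ]
    for length in [10, 30, 20, 5, 25]:
        chosen = schedule_task(cluster, length)
        print(f"task({length}) -> {chosen.vm_id}")
```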
Procedia PDF Downloads 401
3875 Degradation Model for UK Railway Drainage System
Authors: Yiqi Wu, Simon Tait, Andrew Nichols
Abstract:
Management of UK railway drainage assets is challenging due to the large number of historical assets with long asset life cycles. A major concern for asset managers is to maintain the required performance economically and efficiently while complying with the relevant regulation and legislation. As the majority of the drainage assets are buried underground and are often difficult or costly to examine, it is important for asset managers to understand and model the degradation process in order to foresee the upcoming reduction in asset performance and conduct proactive maintenance accordingly. In this research, a Markov chain approach is used to model the deterioration process of rail drainage assets. The study is based on historical condition scores and characteristics of drainage assets across the whole railway network in England, Scotland, and Wales. The model is used to examine the effect of various characteristics on the probabilities of degradation, for example, the regional difference in probabilities of degradation, and how material and shape can influence the deterioration process for chambers, channels, and pipes.
Keywords: deterioration, degradation, Markov models, probability, railway drainage
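A minimal sketch of the Markov chain idea behind such a degradation model is given below; the condition grades and transition probabilities are invented placeholders, not values fitted to the UK railway drainage data.

```python
# Sketch of a discrete-time Markov deterioration model for drainage assets.
# Condition grades run from 1 (best) to 5 (worst); the transition
# probabilities below are illustrative placeholders, not fitted values.

import numpy as np

# P[i, j] = probability of moving from grade i+1 to grade j+1 in one year.
P = np.array([
    [0.90, 0.08, 0.02, 0.00, 0.00],
    [0.00, 0.88, 0.09, 0.03, 0.00],
    [0.00, 0.00, 0.85, 0.11, 0.04],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # worst grade is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # new asset, grade 1

for year in range(1, 51):
    state = state @ P
    if year % 10 == 0:
        print(f"year {year}: {np.round(state, 3)}")
```

In practice, a separate transition matrix would be fitted per asset characteristic (material, shape, region) so their degradation probabilities can be compared, which is the comparison the abstract describes.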
Procedia PDF Downloads 221
3874 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government re-organised the local government arrangements in Auckland, New Zealand, by amalgamating its previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years. This is expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of making Auckland a globally competitive city and achieving ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification. The ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification into diversified arenas. Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, which makes it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of traditionally sprawling cities, this article intends to investigate the efficacy of plan making and implementation directed towards higher density development. This article explores the process of plan development, plan making, and the implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and to consider whether this will facilitate decision-making processes to realize the anticipated intensive urban development.
Keywords: urban intensification, sustainable development, plan making, governance and implementation
Procedia PDF Downloads 556
3873 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance
Authors: Soheila Sadeghi
Abstract:
Infrastructure projects often encounter contract variations that can significantly deviate from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of changes in contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis using R to examine the dataset, including tender budget, contract quantities, rates, claims, and revenue data. Time series analysis is applied to the claims data to track progress and detect variations from the planned schedule. Regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
Keywords: contract variation impact, quantitative analysis, project performance, claims analysis
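As a rough illustration of the regression step described above, the following sketch fits a simple model of schedule delay against variation value. The study itself was carried out in R on the project dataset; this is a Python sketch with synthetic numbers, and the variable names, values, and coefficients are invented.

```python
# Illustrative sketch (Python rather than R, with synthetic data): regressing a
# project performance indicator on variation characteristics, as the abstract describes.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical records: one row per variation (e.g. PV03, PV06, PV07, ...).
variations = pd.DataFrame({
    "variation_value": rng.uniform(5_000, 150_000, size=21),   # invented amounts
    "scope_change":    rng.integers(0, 2, size=21),            # 1 = scope addition
})
# Invented response: schedule delay in days, loosely tied to variation value.
variations["delay_days"] = (
    0.0004 * variations["variation_value"]
    + 5 * variations["scope_change"]
    + rng.normal(0, 5, size=21)
)

X = sm.add_constant(variations[["variation_value", "scope_change"]])
model = sm.OLS(variations["delay_days"], X).fit()
print(model.summary())
```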
Procedia PDF Downloads 40
3872 The Urban Project: Metropolization Tool and Sustainability Vector - Case of Constantine
Authors: Mouhoubi Nedjima, Sassi Boudemagh Souad, Chouabbia Khedidja
Abstract:
Cities, large or small, grow; they seek to gain a place in market competition, which means selling a product that is the city itself. The metropolises, large cities enjoying a legal status and assets that give them dominance over a territory larger than their own boundaries, do not escape this situation. Thus, the search for a promising tool that offers metropolises better development and durability while meeting economic, social, and environmental challenges is timely. The urban project is a new way to build the city; it intervenes in metropolises in two ways, either to manage crises and meet the internal needs of the metropolis, or to create regional attractiveness based on their potential. This communication addresses the issue of the urban project as a tool that has, and should find, a place in the panoply of existing institutional tools. Based on the example of the modernization project of the metropolis of eastern Algeria, Constantine, we examine what the urban project can bring to a city, the extent of its impact, and also the relationship between the actors' visions that makes metropolization a success.
Keywords: urban project, metropolis, institutional tools, Constantine
Procedia PDF Downloads 403
3871 Load-Deflecting Characteristics of a Fabricated Orthodontic Wire with 50.6Ni 49.4Ti Alloy Composition
Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Peerapong Tua-Ngam
Abstract:
Aims: The objective of this study was to determine the load-deflecting characteristics of a fabricated orthodontic wire with an alloy composition of 50.6% (atomic weight) Ni and 49.4% (atomic weight) Ti, and to compare the results with Ormco, a commercially available pre-formed NiTi orthodontic archwire. Materials and Methods: Alloy ingots with an atomic weight ratio of 50.6 Ni: 49.4 Ti were used in this study. Three specimens were cut to wire dimensions of 0.016 inch x 0.022 inch. For comparison, a commercially available pre-formed NiTi archwire, Ormco, with dimensions of 0.016 inch x 0.022 inch was used. Three-point bending tests were performed at a temperature of 36±1 °C using a Universal Testing Machine on the newly fabricated and commercial archwires to assess the characteristics of the load-deflection curve with loading and unloading forces. The loading and unloading features at the deflection points 0.25, 0.50, 0.75, 1.0, 1.25, and 1.5 mm were compared. Descriptive statistics were used to evaluate each variable, and an independent t-test at p < 0.05 was used to analyze the mean differences between the two groups. Results: The load-deflection curve of the 50.6Ni: 49.4Ti wires exhibited the characteristic features of superelasticity. The loading and unloading slopes of the Ormco NiTi archwire curve were more parallel than those of the newly fabricated NiTi wires. The average deflection force of the 50.6Ni: 49.4Ti wire was 304.98 g for loading and 208.08 g for unloading. Similarly, the values were 358.02 g for loading and 253.98 g for unloading of the Ormco NiTi archwire. The interval difference forces between the deflection points were in the range 20.40-121.38 g and 36.72-92.82 g for the loading and unloading curves of the 50.6Ni: 49.4Ti wire, respectively, and 4.08-157.08 g and 14.28-90.78 g for the loading and unloading curves of the commercial wire, respectively. The average deflection force of the 50.6Ni: 49.4Ti wire was less than that of the Ormco NiTi archwire, which could have been due to variations in the wire dimensions. Although a greater force was required for each deflection point of loading and unloading for the 50.6Ni: 49.4Ti wire as compared to the Ormco NiTi archwire, the values were still within the acceptable limits for clinical use in orthodontic treatment. Conclusion: The 50.6Ni: 49.4Ti wires presented the characteristics of a superelastic orthodontic wire. The loading and unloading forces were also suitable for orthodontic tooth movement. These results serve as a suitable foundation for further studies in the development of new orthodontic NiTi archwires.
Keywords: 50.6 Ni 49.4 Ti alloy wire, load deflection curve, loading and unloading force, orthodontic
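The statistical comparison described above can be sketched as follows; the force values are placeholders rather than the measured data, and the snippet only illustrates the independent t-test at p < 0.05.

```python
# Sketch of the independent t-test used to compare mean deflection forces
# between the fabricated 50.6Ni:49.4Ti wire and the commercial archwire.
# The force values below are placeholders, not the measured data.

from scipy import stats

fabricated_loading = [298.5, 310.2, 306.2]     # g, hypothetical triplicate
commercial_loading = [355.0, 360.1, 358.9]     # g, hypothetical triplicate

t_stat, p_value = stats.ttest_ind(fabricated_loading, commercial_loading)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Mean loading forces differ significantly at p < 0.05")
```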
Procedia PDF Downloads 303
3870 Prevalence of Oral Mucosal Lesions in Malaysia: A Teaching Hospital Based Study
Authors: Renjith George Pallivathukal, Preethy Mary Donald
Abstract:
Asymptomatic oral lesions are often ignored by patients and usually will be identified only in advanced stages. Early detection of precancerous lesions is important for a better prognosis. It is also important for the oral health care provider to be aware of the regional prevalence of oral lesions in order to provide early care for the same. We conducted a retrospective study to assess the prevalence of oral lesions based on the information available from patient records in a teaching dental school. Dental records of patients who attended the Department of Oral Medicine and Diagnosis between September 2014 and September 2016 were retrieved and verified for oral lesions. Results: The ages of the patients ranged from 13 to 38 years, with a mean age of 21.8 years. The lesions were classified as white (40.5%), red (23%), ulcerated (10.5%), pigmented (15.2%), and soft tissue enlargements (10.8%). 52% of the patients were unaware of the oral lesions before the dental visit. Overall, the prevalence of lesions in dental patients was lower than national estimates, but the prevalence of some lesions showed variations.
Keywords: oral mucosal lesion, pre-cancer, prevalence, soft tissue lesion
Procedia PDF Downloads 351
3869 Numerical Modeling of Air Pollution with PM-Particles and Dust
Authors: N. Gigauri, A. Surmava, L. Intskirveli, V. Kukhalashvili, S. Mdivani
Abstract:
The subject of our study is the numerical modeling of atmospheric air pollution. The city of Tbilisi, the capital of Georgia, with a population of one and a half million and difficult terrain, is chosen as the object of research. The main sources of pollution in Tbilisi are currently vehicles and construction dust. The concentrations of dust and PM (Particulate Matter) were determined in the air of Tbilisi and in its vicinity, and their monthly maximum, minimum, and average concentrations are estimated. Processes of dust propagation in the atmosphere of the city and its surrounding territory are modelled using a 3D regional model of atmospheric processes and an admixture transfer-diffusion equation. Figures were obtained showing the distribution of the polluted cloud and dust concentrations in different areas of the city, at different heights, and at different time intervals under background stationary westward and eastward winds. It is accepted that the difficult terrain and mountain-bar circulation affect the deformation of the cloud and its spread; time periods are determined when the dust concentration in the city is greater than the MAC (Maximum Allowable Concentration, MAC = 0.5 mg/m³).
Keywords: air pollution, dust, numerical modeling, PM-particles
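A deliberately reduced, one-dimensional sketch of the admixture transfer-diffusion idea is shown below; the actual study uses a 3D regional model with real terrain and wind fields, so the grid, wind speed, diffusivity, and emission rate here are invented for illustration only.

```python
# 1D finite-difference sketch of an admixture transfer-diffusion equation.
# Reduced illustration with invented parameters, not the study's 3D model.

import numpy as np

nx, dx, dt = 200, 100.0, 1.0          # grid cells, cell size [m], time step [s]
u, K = 2.0, 10.0                      # wind speed [m/s], diffusivity [m^2/s]
c = np.zeros(nx)                      # dust concentration [mg/m^3]
source_cell = 20

for step in range(3600):              # one hour of simulated time
    c[source_cell] += 0.01            # continuous emission at the source
    # upwind advection + central-difference diffusion
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = K * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + dif)

print("max concentration [mg/m^3]:", c.max().round(3))
print("cells above MAC (0.5 mg/m^3):", int((c > 0.5).sum()))
```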
Procedia PDF Downloads 140
3868 Opportunities and Optimization of the Our Eyes Initiative as the Strategy for Counter-Terrorism in ASEAN
Authors: Chastiti Mediafira Wulolo, Tri Legionosuko, Suhirwan, Yusuf
Abstract:
Terrorism and radicalization have become a common threat to every nation in this world. As part of the asymmetric warfare threat, terrorism and radicalization need a complex strategy as the problem solver. One such way is by collaborating with the international community. The Our Eyes Initiative (OEI), for example, is a cooperation pact in the field of intelligence information exchanges related to terrorism and radicalization initiated by the Indonesian Ministry of Defence. The pact has been signed by Indonesia, the Philippines, Malaysia, Brunei Darussalam, Thailand, and Singapore. This cooperation mostly engages the military in a central role, but it still requires the involvement of various parties such as the police, intelligence agencies, and other government institutions. This paper uses a qualitative content analysis method to address the opportunity and enhance the optimization of the OEI. As a result, it explains how the OEI takes these opportunities as a strategy for counter-terrorism by building it up as regional cooperation, building the legitimacy of governments, and creating the legal framework for the information-sharing system.
Keywords: our eyes initiative, terrorism, counter-terrorism, ASEAN, cooperation, strategy
Procedia PDF Downloads 182
3867 Assessing Circularity Potentials and Customer Education to Drive Ecologically and Economically Effective Materials Design for Circular Economy - A Case Study
Authors: Mateusz Wielopolski, Asia Guerreschi
Abstract:
Circular Economy, as the counterargument to the ‘make-take-dispose’ linear model, is an approach that includes a variety of schools of thought looking at environmental, economic, and social sustainability. This, in turn, leads to a variety of strategies and often to confusion when it comes to choosing the right one to make a circular transition as effective as possible. Due to the close interplay of circular product design, business model, and social responsibility, companies often struggle to develop strategies that comply with all three triple-bottom-line criteria. Hence, to transition to circularity effectively, product design approaches must become more inclusive. In a case study conducted with the University of Bayreuth and the ISPO, we correlated aspects of material choice in product design, labeling, and technological innovation with customer preferences and education about specific material and technology features. The study revealed those attributes of the consumers’ environmental awareness that directly translate into an increase in purchasing power - primarily connected with individual preferences regarding sports activity and technical knowledge. Based on this outcome, we formulated a product development approach that incorporates the consumers’ individual preferences towards sustainable product features as well as their awareness of materials and technology. It allows deploying targeted customer education campaigns to raise the willingness to pay for sustainability. Next, we implemented the customer preference and education analysis into a circularity assessment tool that takes into account inherent company assets as well as subjective parameters like customer awareness. The outcome is a detailed but not cumbersome scoring system, which provides guidance on material and technology choices for circular product design while considering the business model and the communication strategy towards attentive customers. By including customer knowledge and complying with corresponding labels, companies develop more effective circular design strategies, while simultaneously increasing customers’ trust and loyalty.
Keywords: circularity, sustainability, product design, material choice, education, awareness, willingness to pay
Procedia PDF Downloads 200
3866 A Political-Economic Analysis of Next Generation EU Recovery Fund
Authors: Fernando Martín-Espejo, Christophe Crombez
Abstract:
This paper presents a political-economic analysis of the reforms introduced during the coronavirus crisis at the EU level with a special emphasis on the recovery fund Next Generation EU (NGEU). It also introduces a spatial model to evaluate whether the governmental features of the recovery fund can be framed inside the community method. Particularly, by evaluating the brake clause in the NGEU legislation, this paper analyses theoretically the political and legislative implications of the introduction of flexibility clauses in the EU decision-making process.
Keywords: EU, legislative procedures, spatial model, coronavirus
Procedia PDF Downloads 177
3865 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
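The inverted U-shape test described above can be sketched with a negative binomial regression that includes a linear and a quadratic centrality term; the data-generating process and coefficients below are synthetic and only illustrate the modeling step, not the study's results.

```python
# Sketch of testing an inverted U-shape between a centrality measure and
# citation counts with a negative binomial model (synthetic data for illustration).

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000

centrality = rng.uniform(0, 1, n)                       # e.g. normalized degree centrality
# Invented data-generating process with an interior optimum around 0.6.
mu = np.exp(1.0 + 3.0 * centrality - 2.5 * centrality**2)
citations = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(pd.DataFrame({
    "centrality": centrality,
    "centrality_sq": centrality**2,
}))
nb = sm.GLM(citations, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.params)
# An inverted U is indicated by a positive linear term and a negative quadratic term.
```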
Procedia PDF Downloads 54
3864 The Environmental Impact of Geothermal Energy and Opportunities for Its Utilization in Hungary
Authors: András Medve, Katalin Szabad, István Patkó
Abstract:
According to the International Energy Agency, the previous principles of the energy sector should be reassessed, with renewable energy sources having a significant role. We might witness countries exchanging roles from importer to exporter as they look for the main resources to meet market needs. According to the World Energy Outlook 2013, the duration of high oil prices is exceptionally long in the history of the energy market. Forecasts also point to expected great differences between the regional prices of gas and electric energy. The energy need of the world will grow by a third, two thirds of which will appear in China, India, and South-East Asia, while only 4 per cent will be related to OECD countries. Current trends also forecast growth in the price of energy sources and in the emission of greenhouse gases. As a reflection of these forecasts, alternative energy sources will gain value, of which geothermal energy is one of the cheapest and most economical. Hungary possesses outstanding resources of geothermal energy. The aim of the study is to research the environmental effects of geothermal energy and the opportunities for its exploitation in Hungary, related to the „Horizon 2020” project.
Keywords: sustainable energy, renewable energy, development of geothermic energy in Hungary
Procedia PDF Downloads 603
3863 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods have been proposed in the traditional approaches using LDA, PCA, and EBGM. In recent years, deep learning models have provided a unique platform for the detection of facial expressions and emotions by automatically extracting the features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In fact, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels very soon, which leads the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduced the risk of over-fitting by using a dynamic shape of the input tensor instead of a static one in the SoftMax layer, with a specified desired soft margin. In fact, it acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We select a penalty for those predictions with high divergence from the ground-truth labels, so we shorten correct feature vectors and enlarge false prediction tensors; this means we assign more weight to those classes in conjunction with each other (namely, ‘hard labels to learn’). By doing this, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to help drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank of three widely used facial expression recognition datasets, with 93.30% on FER-2013 (a 16% improvement compared to the first rank after 10 years), reaching 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, and show that it provides top performance compared to that provided by other networks, which require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
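A hedged sketch of a margin-based softmax loss is given below for orientation only. It implements a generic additive margin on the ground-truth logit; the paper's Dynamic Soft-Margin SoftMax, its dynamic input-tensor shaping, and the proposed optimizer are not reproduced here, and the margin value and class count are invented.

```python
# Generic additive-margin softmax cross-entropy in PyTorch (illustrative only).
# Subtracting a margin from the ground-truth logit forces the correct class
# to win by at least `margin`, pushing dissimilar embeddings further apart.

import torch
import torch.nn.functional as F


def margin_softmax_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        margin: float = 0.35) -> torch.Tensor:
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    adjusted = logits - margin * one_hot      # penalize only the true-class logit
    return F.cross_entropy(adjusted, targets)


if __name__ == "__main__":
    logits = torch.randn(8, 7, requires_grad=True)   # 7 emotion classes, batch of 8
    targets = torch.randint(0, 7, (8,))
    loss = margin_softmax_loss(logits, targets)
    loss.backward()
    print(float(loss))
```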
Procedia PDF Downloads 74
3862 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradicting findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and their evenness of distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and pos-form entropy will be calculated as indicators of lexical and syntactical complexity, and ANOVA tests will be conducted to explore if there is any corpora effect. It is hypothesized that both L2 written English and translated English have lower entropy compared to non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
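The word-form (or pos-form) entropy described above is the Shannon entropy of the relative frequencies of the items; a minimal sketch with toy sentences (not the CLOB or self-compiled corpora) is shown below.

```python
# Sketch of the entropy-based complexity measure: Shannon entropy over the
# distribution of word forms (the same code applies to POS tags).

import math
from collections import Counter


def shannon_entropy(tokens):
    """H = -sum(p_i * log2(p_i)) over the relative frequencies of the items."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


native = "the cat sat on the mat while the dog slept by the door".split()
constrained = "the cat sat on the mat the dog sat on the mat".split()

print("word-form entropy, native-like sample:     ", round(shannon_entropy(native), 3))
print("word-form entropy, constrained-like sample:", round(shannon_entropy(constrained), 3))
# Lower entropy = less diversified, more repetitive lexical choices.
```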
Procedia PDF Downloads 94
3861 Photocatalytic Eco-Active Ceramic Slabs to Abate Air Pollution under LED Light
Authors: Claudia L. Bianchi, Giuseppina Cerrato, Federico Galli, Federica Minozzi, Valentino Capucci
Abstract:
At the beginning of industrial production, porcelain gres tiles were considered just a technical material, aesthetically not very beautiful. Today, thanks to new industrial production methods, both the properties and the beauty of these materials completely fit market requests. In particular, the possibility of preparing slabs of large sizes is the new frontier of building materials. Besides these noteworthy architectural features, new surface properties have been introduced in the latest generation of these materials. In particular, deposition of TiO₂ transforms the traditional ceramic into a photocatalytic eco-active material able to reduce polluting molecules present in air and water, to eliminate bacteria, and to reduce surface dirt thanks to the self-cleaning property. The problem with photocatalytic materials is that a UV light source is necessary to activate the oxidation processes on the surface of the material, processes that are inexorably turned off when the material is illuminated by LED lights and, even more so, in darkness. First, a thorough study was necessary to modify the existing plants to deposit the photocatalyst very evenly, and this has been done thanks to the advent of digital printing and the development of a custom-made ink that stabilizes powdered TiO₂ in its formulation. In addition, the commercial TiO₂, which is used for the traditional photocatalytic coating, has been doped with metals in order to activate it also in the visible region and thus in the presence of sunlight or LED light. Thanks to this active coating, ceramic slabs are able to purify air, eliminating odors and VOCs, and can be cleaned with very mild detergents due to the self-cleaning properties given by the TiO₂ present at the ceramic surface. Moreover, the presence of dopant metals (patent WO2016157155) also allows the material to work as an antibacterial in the dark, eliminating one of the negative features of photocatalytic building materials that has so far limited their use on a large scale, considering that we are constantly in contact with bacteria, some of which are dangerous for health. Active tiles are 99.99% effective against all bacteria, from the most common, such as Escherichia coli, to the most dangerous, such as methicillin-resistant Staphylococcus aureus (MRSA). DIGITALIFE project LIFE13 ENV/IT/000140 – award for best project of October 2017.
Keywords: Ag-doped microsized TiO₂, eco-active ceramic, photocatalysis, digital coating
Procedia PDF Downloads 229
3860 Recognition of Tifinagh Characters with Missing Parts Using Neural Network
Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui
Abstract:
In this paper, we present an algorithm for reconstruction from incomplete 2D scans of Tifinagh characters. This algorithm is based on using the correlation between the lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN
Procedia PDF Downloads 334
3859 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such network systems can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect damage information on bridges and recover the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network’s capability to recover from disasters is resilience. To enhance highway network resilience, many studies have developed various repair scheduling methods for the prioritization of bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can be commenced as soon as the damage data of some bridges that are crucial to emergency response are obtained. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur – the blockage of highways due to repair activities can affect inspection routes, which in turn have an impact on emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome the aforementioned deficiencies, this study develops a routing and scheduling model for EBIR while accounting for real-time inspection-repair interactions to maximize highway network resilience. A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should match each other, and network resilience will not be improved once the unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large and complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management. With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and make decisions on proper inspection and repair strategies to improve highway network resilience.
Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty
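As an illustration of the scheduling component only, the following compact sketch evolves a repair order for a handful of bridges with a plain genetic algorithm; the paper's hybrid GA, its heuristic seeding, the stochastic integer program, and the inspection-repair interaction are not reproduced, and the durations and weights are invented.

```python
# Compact sketch of a genetic algorithm for ordering bridge-repair tasks.
# Illustrative only; fitness favours repairing critical bridges early.

import random

random.seed(1)

# Hypothetical repair durations (days) and criticality weights per bridge.
bridges = {"B1": (3, 0.9), "B2": (5, 0.4), "B3": (2, 0.8), "B4": (4, 0.6), "B5": (1, 0.7)}


def fitness(order):
    """Negative weighted completion time: critical bridges should finish early."""
    t, total = 0, 0.0
    for b in order:
        duration, weight = bridges[b]
        t += duration
        total += weight * t
    return -total


def crossover(p1, p2):
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + [b for b in p2 if b not in p1[:cut]]


def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]


population = [random.sample(list(bridges), len(bridges)) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [crossover(random.choice(parents), random.choice(parents)) for _ in range(10)]
    for child in children:
        if random.random() < 0.2:
            mutate(child)
    population = parents + children

best = max(population, key=fitness)
print("best repair order:", best, "weighted delay:", round(-fitness(best), 2))
```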
Procedia PDF Downloads 109
3858 About the Number of Fundamental Physical Interactions
Authors: Andrey Angorsky
Abstract:
In this article, the issue of the possible number of fundamental physical interactions is studied. The theory of similarity based on a dimensionless quantity, the damping ratio, serves as the instrument of analysis. A structure with the features of the Higgs field emerges from a non-commutative expression for this ratio. An experimentally testable supposition about the nature of dark energy is stated.
Keywords: damping ratio, dark energy, dimensionless quantity, fundamental physical interactions, Higgs field, non-commutative expression
Procedia PDF Downloads 140
3857 Google Translate: AI Application
Authors: Shaima Almalhan, Lubna Shukri, Miriam Talal, Safaa Teskieh
Abstract:
Since artificial intelligence is a rapidly evolving topic that has had a significant impact on technical growth and innovation, this paper examines people's awareness, use, and engagement with the Google Translate application. To see how familiar users are with the app and its features, quantitative and qualitative research was conducted. The findings revealed that consumers have a high level of confidence in the application, that they benefit considerably from this sort of innovation, and that it makes communication convenient.
Keywords: artificial intelligence, google translate, speech recognition, language translation, camera translation, speech to text, text to speech
Procedia PDF Downloads 154
3856 Design of Broadband Power Divider for 3G and 4G Applications
Authors: A. M. El-Akhdar, A. M. El-Tager, H. M. El-Hennawy
Abstract:
This paper presents a broadband power divider with an equal power division ratio. Two sections of transmission line transformers based on coupled microstrip lines are applied to obtain broadband performance. In addition, a design methodology is proposed for the novel structure. A prototype is designed and simulated to operate in the band from 2.1 to 3.8 GHz to fulfill the requirements of 3G and 4G applications. The proposed structure features reduced size and fewer resistors than other conventional techniques. Simulation verifies the proposed idea and the design methodology.
Keywords: power dividers, coupled lines, microstrip, 4G applications
Procedia PDF Downloads 477
3855 A Semantic and Concise Structure to Represent Human Actions
Authors: Tobias Strübing, Fatemeh Ziaeetabar
Abstract:
Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need to use a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which works by considering touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates the information of static (e.g. top, bottom) and dynamic spatial relations (e.g. moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a huge matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used in the category of manipulation actions, which eventually involve two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a conjoint activity representation structure. For this purpose, we need to do a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, 2) shrinking the set of possible semantic spatial relations. To achieve these, we computed the importance of each matrix row in a statistical way, to see if it is possible to remove a particular one while all manipulations are still distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the unity of the predefined manipulation actions. Therefore, by performing the above analyses, we created the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure in representing actions, has a tremendous impact on the recognition and prediction of complex actions, as well as on the interactions between humans and robots. It also creates a comprehensive platform to integrate with body limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis
Procedia PDF Downloads 126
3854 The Effectiveness of Spatial Planning and Land Use Management Policies to Promote Tourism Development in the Wild Coast, Eastern Cape
Authors: Siyamthanda Makhwabe
Abstract:
Tourism development and spatial planning within the broader spectrum of the Eastern Cape need to be strategically integrated to make development planning within the province effective. Tourism was severely affected and limited by policies of the previous regime. Tourism development in the Eastern Cape has been identified as one of the underdeveloped sectors that have the potential to improve the province’s local economic development trajectory. The proposed study reviews literature on tourism development in an urban/rural and regional context in the Eastern Cape province. The proposed study will therefore offer an in-depth literature review on issues pertaining to spatial planning, land use management policies, and tourism development within the Eastern Cape using the scoping review method. The intention of the proposed study is to identify synergies between the intertwined municipalities within the Wild Coast region in order to create a tourism belt that would yield benefits from Coffee Bay to East London.
Keywords: development, Eastern Cape, policies, spatial planning, tourism
Procedia PDF Downloads 91
3853 Artificial Intelligence and Development: The Missing Link
Authors: Driss Kettani
Abstract:
ICT4D actors are naturally tempted to include AI in the range of enabling technologies and tools that could support and boost the development process, and to refer to this as AI4D. But doing so assumes that AI complies with the very specific features of the ICT4D context, including, among others, affordability, relevance, openness, and ownership. Clearly, none of these is fulfilled, and the enthusiastic posture that AI4D is a natural part of ICT4D is not grounded and, to a certain extent, does not serve the purpose of technology for development at all. In the context of development, it is important to emphasize and prioritize ICT4D in national digital transformation strategies, instead of borrowing "trendy" waves of the IT industry that are motivated by business considerations, with no specific care/consideration for development.
Keywords: AI, ICT4D, technology for development, position paper
Procedia PDF Downloads 88
3852 Introduction to Techno-Sectoral Innovation System Modeling and Functions Formulating
Authors: S. M. Azad, H. Ghodsi Pour, F. Roshannafasa
Abstract:
In recent years, ‘technology management and policymaking’ has been one of the most important problems in management science. In this field, different generations of innovation and technology management are presented, of which the earliest is the Innovation System (IS) approach. In a general classification, innovation systems are divided into four approaches: technical, sectoral, regional, and national. There are many studies related to each of these approaches in different academic fields. Every approach has some benefits. If two or more approaches are hybridized, their benefits are combined. In addition, according to the sectoral structure of the governance model in Iran, in many sectors such as information technology, the combination of the three other approaches with the sectoral approach is essential. Hence, in this paper, combining two IS approaches (technical and sectoral) and using system dynamics, a generic model is presented for a sample of the software industry. As a complementary point, this article introduces a new hybrid approach called the Techno-Sectoral Innovation System. This TSIS model is accomplished by changing the concept of ‘functions’, which comes from the technological IS literature, and using them in the sectoral system as measurable indicators.
Keywords: innovation system, technology, techno-sectoral system, functional indicators, system dynamics
Procedia PDF Downloads 439
3851 The City Ecological Corridor Construction Based on the Concept of "Sponge City" (Case Study: Lishui)
Authors: Xu Mengyuan, Xu Lei
Abstract:
Behind the rapid development of Chinese cities, the contradiction between frequent urban waterlogging and the shortage of water resources is deepening. In order to solve this problem, the low-impact-development "sponge city" construction mode is introduced into the process of new urbanization construction in China, to make our cities resilient and able to adapt to environmental change and natural disasters. Firstly, this paper analyses the basic causes of urban waterlogging, then introduces the basic connotation and realization approach of the "sponge city". Finally, it studies the project in Lishui Guazhou, focusing on the analysis of the "urban ecological corridor" construction strategy and its positive impact on the city within "sponge city" construction. Meanwhile, we put forward "adaptation to local conditions" and "sustainability" as the construction ideas, make use of ecological construction to lead city development, and explore ecological balance throughout the city to enhance regional value, providing reference and reflection for the development and future of the "sponge city" in China.
Keywords: urban waterlogging, sponge city, urban ecological corridor, sustainable development, China
Procedia PDF Downloads 641
3850 NanoFrazor Lithography for advanced 2D and 3D Nanodevices
Authors: Zhengming Wu
Abstract:
NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact from the probe on a thermally responsive resist generates those high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features below 10 nm resolution was demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only due to the various unique capabilities of NanoFrazor lithography, like the absence of damage from a charged particle beam.
Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits
Procedia PDF Downloads 72
3849 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms and therefore a potential threat. These blooms have been accelerated by eutrophication due to human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to color images from satellites. However, the accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images, which record the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, respectively, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure) as the database to apply the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images. An artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing algorithms and optical algorithms. The model had better performance in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
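A hedged sketch of the CNN feature extractor plus dense regression head described above is given below in PyTorch; the layer sizes, patch size, and weather inputs are invented stand-ins, and the actual GOCI processing chain and trained architecture are not reproduced.

```python
# Illustrative CNN + ANN regression head for algae concentration (invented sizes).

import torch
import torch.nn as nn


class AlgaeEstimator(nn.Module):
    def __init__(self, n_weather_features: int = 3):
        super().__init__()
        # CNN over 3 radiance bands (e.g. 443, 490, 660 nm patches)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # ANN head that also takes weather data (humidity, temperature, pressure)
        self.head = nn.Sequential(
            nn.Linear(32 + n_weather_features, 64), nn.ReLU(),
            nn.Linear(64, 1),                       # estimated algae concentration
        )

    def forward(self, radiance_patch, weather):
        features = self.cnn(radiance_patch).flatten(1)
        return self.head(torch.cat([features, weather], dim=1))


model = AlgaeEstimator()
patch = torch.randn(4, 3, 32, 32)       # batch of 32x32 pixel patches
weather = torch.randn(4, 3)
print(model(patch, weather).shape)       # torch.Size([4, 1])
```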
Procedia PDF Downloads 183
3848 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses to the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30 °C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen’s slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4 - 5°C in the South and 6 - 7°C in the North, summers show the weakest warming during the same period, ranging from about 0.5 - 1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20°C has about halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and cold spells have both increased between two- and four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
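One of the indices and the two trend statistics mentioned above can be sketched directly with NumPy/SciPy; the annual frost-day series below is synthetic, and the Mann-Kendall and Sen's slope implementations are plain textbook versions without tie correction.

```python
# Sketch of one climate index (annual frost days) plus the Mann-Kendall trend
# test and Sen's slope, computed on synthetic data for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = np.arange(1951, 2018)
# Invented annual frost-day counts with a slight downward trend.
frost_days = (200 - 0.3 * (years - 1951) + rng.normal(0, 10, years.size)).round()


def mann_kendall(x):
    """Basic Mann-Kendall test statistic and two-sided p-value (no tie correction)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p


def sens_slope(t, x):
    """Median of all pairwise slopes."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)


z, p = mann_kendall(frost_days)
print(f"Mann-Kendall z = {z:.2f}, p = {p:.4f}, Sen's slope = {sens_slope(years, frost_days):.2f} days/yr")
```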
Procedia PDF Downloads 92