Search results for: building information modeling (BIM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17339


13589 Prioritizing the TQM Enablers and IT Resources in the ICT Industry: An AHP Approach

Authors: Suby Khanam, Faisal Talib, Jamshed Siddiqui

Abstract:

Total Quality Management (TQM) is a managerial approach that improves the competitiveness of an industry; Information Technology (IT) is introduced alongside TQM to handle the technical issues involved in fulfilling customers' requirements, a combination supported by quality experts. The present paper applies the Analytic Hierarchy Process (AHP) methodology to prioritise and rank the hierarchy levels of TQM enablers and IT resources together for their successful implementation in the Information and Communication Technology (ICT) industry. A total of 17 criteria, nine TQM enablers and eight IT resources, were identified, partitioned into three categories, and prioritised with the AHP approach. The findings indicate that the 17 sub-criteria can be grouped into three main categories, namely organizing, tools and techniques, and culture and people. Further, out of the 17 sub-criteria, three (top management commitment and support, total employee involvement, and continuous improvement) received the highest priority, whereas three others (structural equation modelling, culture change, and customer satisfaction) received the lowest priority. The result suggests a hierarchy model for the ICT industry to prioritise the enablers and resources as well as to improve TQM and IT performance. The paper also has managerial implications, suggesting that managers in the ICT industry implement TQM and IT together in their organizations to obtain maximum benefits and to utilize available resources effectively. Conclusions, limitations, and the future scope of the study are presented at the end.
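As a rough illustration of the AHP prioritisation step described above, the sketch below derives priority weights for criteria from a pairwise comparison matrix via the principal eigenvector and checks judgment consistency. The 3x3 example matrix and the category names are illustrative assumptions, not the study's actual expert judgments.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three main categories
# (organizing, tools and techniques, culture and people) -- assumed values.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector gives the AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR); RI = 0.58 is the standard random index for n = 3.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58

for name, w in zip(["organizing", "tools and techniques", "culture and people"], weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {CR:.3f}")  # values below ~0.1 are usually considered acceptable
```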

Keywords: analytic hierarchy process, information technology, information and communication technology, prioritization, total quality management

Procedia PDF Downloads 349
13588 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which depends heavily on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck for long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
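For readers who want a concrete starting point, the skeleton below shows one way to run inference with a pre-trained bidirectional encoder for sequence classification using the Hugging Face transformers API. The checkpoint name, the k-mer tokenisation, and the two-class labelling are assumptions made for illustration; they are not necessarily the exact configuration used in this study.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint name -- substitute the actual DNABERT-style model used.
CHECKPOINT = "dnabert-like-checkpoint"

def to_kmers(seq: str, k: int = 6) -> str:
    """Represent a DNA sequence as space-separated k-mers (DNABERT-style input)."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# Toy example: classify whether a circuit's output gene is expressed (1) or not (0).
sequence = "ATGCGTACGTTAGCATGCGTACGTTAGC"
inputs = tokenizer(to_kmers(sequence), return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))
print(predicted_class)
```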

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 63
13587 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow

Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum

Abstract:

The leakage problem in two-component fluid flow is modeled for a transient, one-dimensional, homogeneous bubbly flow, taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The results obtained are discussed in terms of their physical impact on the flow parameters.

Keywords: fluid transients, pipelines leaks, method of characteristics, leakage problem

Procedia PDF Downloads 480
13586 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement

Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao

Abstract:

Surface roughness is an important index for evaluating surface quality and needs to be measured accurately to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis to select appropriate features. However, this kind of feature analysis treats features independently and cannot fully utilize the information in the data. Besides, blindly reducing features loses a lot of useful information and leads to unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to extract the texture features of machined images effectively. Then, principal component analysis (PCA) is used to fuse all extracted features into a new, lower-dimensional feature set, which reduces the feature dimension while maintaining the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively solve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness.
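As a minimal sketch of the pipeline described above (texture features from the GLCM, fusion with PCA, and regression with an SVM), the following code uses scikit-image and scikit-learn. The image file names, GLCM parameters, and roughness labels are placeholders, and a recent scikit-image release (providing graycomatrix/graycoprops) is assumed.

```python
import numpy as np
from skimage import io, img_as_ubyte
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def glcm_features(gray_u8):
    """Extract a small set of GLCM texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical dataset: one machined-surface image per sample with a measured Ra value.
image_files = ["surface_01.png", "surface_02.png"]   # placeholder file names
ra_values = np.array([0.8, 1.6])                     # placeholder roughness labels (um)

X = np.vstack([glcm_features(img_as_ubyte(rgb2gray(io.imread(f)))) for f in image_files])

# Fuse the correlated texture features into a few principal components, then regress.
pca = PCA(n_components=2)
X_fused = pca.fit_transform(X)
model = SVR(kernel="rbf").fit(X_fused, ra_values)
print(model.predict(X_fused))
```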

Keywords: feature analysis, machine vision, PCA, surface roughness, SVM

Procedia PDF Downloads 213
13585 A Taxonomy of the Informational Content of Virtual Heritage Serious Games

Authors: Laurence C. Hanes, Robert J. Stone

Abstract:

Video games have reached a point of huge commercial success as well as wide familiarity with audiences both young and old. Much attention and research have also been directed towards serious games and their potential learning affordances. It is little surprise that the field of virtual heritage has taken a keen interest in using serious games to present cultural heritage information to users, with applications ranging from museums and cultural heritage institutions, to academia and research, to schools and education. Many researchers have already documented their efforts to develop and distribute virtual heritage serious games. Although attempts have been made to create classifications of the different types of virtual heritage games (somewhat akin to the idea of game genres), no formal taxonomy has yet been produced to define the different types of cultural heritage and historical information that can be presented through these games at a content level, and how that information can be manifested within the game. This study proposes such a taxonomy. First, the informational content is categorized as heritage or historical, then further divided into tangible, intangible, natural, and analytical. Next, the characteristics of the manifestation within the game are covered: the means of manifestation, level of demonstration, tone, and focus are all defined and explained. Finally, the potential learning outcomes of the content are discussed. A demonstration of the taxonomy is then given by describing the informational content and corresponding manifestations within several examples of virtual heritage serious games as well as commercial games. It is anticipated that this taxonomy will help designers of virtual heritage serious games to think about and clearly define the information they are presenting through their games, and how they are presenting it. Another result of the taxonomy is that it will enable us to frame cultural heritage and historical information presented in commercial games with a critical lens, especially where there may not be explicit learning objectives. Finally, the results will also enable us to identify shared informational content and learning objectives between virtual heritage serious games and commercial games.

Keywords: informational content, serious games, taxonomy, virtual heritage

Procedia PDF Downloads 367
13584 Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns

Authors: Dongqi Li, Yunjia Huang, Tomo Inoue, Kohei Inoue

Abstract:

With rapid urbanization, urban cultural heritage is facing the impact and destruction of modernization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas in conservation. Japan, an early adopter of urban color planning, has a systematic approach to it. Hence, this paper selects five merchant towns from the category of important traditional building preservation areas in Japan as the subject of this study, to explore the color structure and emotion of this type of historic area. First, an image semantic segmentation method identifies the buildings, roads, and landscape environments, and their color data are extracted for color composition and emotion analysis to summarize their common features. Second, keywords are extracted from the collected Internet evaluations by natural language processing. The correlation analysis of the color structure and the keywords provides a valuable reference for conservation decisions in these historic town areas. This paper also combines the color structure and the Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and a valuable reference for the inheritance of local traditional culture.
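A small illustration of the color-extraction step is sketched below: given a per-pixel semantic segmentation result, the dominant colors of one class (e.g., buildings) are summarised by k-means clustering. The file names, the label id for the building class, and the number of clusters are assumptions, and the segmentation model itself is outside the scope of this sketch.

```python
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

# Placeholders: an RGB streetscape photo and a per-pixel label map produced by a
# semantic segmentation model (same height/width), with label 1 = buildings.
image = io.imread("merchant_town_street.png")[:, :, :3]
labels = np.load("segmentation_labels.npy")
BUILDING_LABEL = 1

# Collect pixels of the building class and cluster them into 5 dominant colors.
building_pixels = image[labels == BUILDING_LABEL].astype(float)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(building_pixels)

counts = np.bincount(kmeans.labels_, minlength=5)
for center, share in zip(kmeans.cluster_centers_, counts / counts.sum()):
    r, g, b = center.round().astype(int)
    print(f"dominant color rgb({r},{g},{b}) covers {share:.1%} of building pixels")
```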

Keywords: historic districts, color planning, semantic segmentation, natural language processing

Procedia PDF Downloads 89
13583 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, using a suitable sampling design, and information regarding both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is easier and cheaper to collect than the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used at the second phase for parameter estimation, stratification, sample selection, and their combinations, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other purposes. Although the variance is a non-negative quantity, its estimators may not be non-negative. If the estimator of variance is negative, it cannot be used for the estimation of confidence intervals, testing of hypotheses, or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
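As a hedged illustration of the double sampling idea (not of the paper's own estimators), the simulation below draws a large first-phase sample in which only the auxiliary variable is observed, draws a second-phase subsample in which the study variable is observed, and forms a simple ratio-type estimate of the population mean; the population model and sample sizes are invented for the example, and the empirical variance of repeated estimates is reported.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic finite population: auxiliary variable x, study variable y correlated with x.
N = 10_000
x = rng.gamma(shape=4.0, scale=10.0, size=N)
y = 2.0 * x + rng.normal(0.0, 8.0, size=N)

n1, n2 = 1_000, 200   # first-phase and second-phase sample sizes (assumed)

def two_phase_ratio_estimate():
    phase1 = rng.choice(N, size=n1, replace=False)        # observe x only
    phase2 = rng.choice(phase1, size=n2, replace=False)   # observe x and y
    xbar1 = x[phase1].mean()
    xbar2, ybar2 = x[phase2].mean(), y[phase2].mean()
    return (ybar2 / xbar2) * xbar1   # ratio estimator of the population mean of y

estimates = np.array([two_phase_ratio_estimate() for _ in range(2_000)])
print(f"true mean {y.mean():.2f}, estimator mean {estimates.mean():.2f}, "
      f"empirical variance {estimates.var(ddof=1):.3f}")
```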

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 591
13582 Research on Dynamic Practical Byzantine Fault Tolerance Consensus Algorithm

Authors: Cao Xiaopeng, Shi Linkai

Abstract:

The Practical Byzantine Fault Tolerance (PBFT) algorithm cannot add nodes dynamically, which limits its practical application. In order to add nodes dynamically, the Dynamic Practical Byzantine Fault Tolerance algorithm (DPBFT) is proposed. Firstly, a new node sends request information to the other nodes in the network, and the nodes in the network verify its identity and request. Then the nodes in the network reverse-connect to the new node and send the block information of the current network, and the new node updates its information. Finally, the new node participates in the next round of consensus, the view is changed, and the primary node is selected. This paper abstracts the decision of the nodes into an undirected connected graph, and the final consistency of the graph is used to prove that the proposed algorithm can adapt to the network dynamically. Compared with the PBFT algorithm, DPBFT has better fault tolerance and lower network bandwidth consumption.
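The sketch below is a schematic (non-networked) rendering of the node-join flow described above, intended only to make the message sequence concrete; the message formats, identity check, and quorum rule are simplified assumptions rather than the algorithm's actual specification.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    blocks: list = field(default_factory=list)   # simplified blockchain state
    peers: set = field(default_factory=set)

def verify_identity(node: Node) -> bool:
    """Placeholder identity/request check -- a real system would verify credentials."""
    return bool(node.node_id)

def join_network(new_node: Node, network: list[Node]) -> None:
    # 1. The new node sends a join request to every existing node.
    approvals = [peer for peer in network if verify_identity(new_node)]
    if len(approvals) < (2 * len(network)) // 3 + 1:   # assumed quorum rule
        raise RuntimeError("join request rejected")

    # 2. Existing nodes reverse-connect and send current block information.
    for peer in approvals:
        peer.peers.add(new_node.node_id)
        new_node.blocks = list(peer.blocks)            # new node updates its state

    # 3. The new node joins the peer set; the next consensus round runs a view
    #    change so that a primary is (re)selected over the enlarged membership.
    network.append(new_node)
    new_node.peers = {p.node_id for p in network if p is not new_node}
```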

Keywords: practical Byzantine fault tolerance, blockchain, consensus algorithm, consistency analysis

Procedia PDF Downloads 130
13581 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks

Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the emissions represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
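The two-stage idea (a population-based global search seeded with low-discrepancy samples, followed by gradient-based local refinement of the best candidates) can be sketched as below. This is a generic hybrid-optimizer skeleton under an assumed objective and bounds, not the paper's modified ICA itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def objective(w):
    """Placeholder objective, e.g. validation error of a regression model with parameters w."""
    return np.sum((w - 0.3) ** 2) + 0.1 * np.sin(5.0 * w).sum()

dim, n_candidates = 8, 64
lower, upper = -1.0, 1.0
rng = np.random.default_rng(0)

# Global stage: low-discrepancy (Sobol) initialization plus simple population pruning.
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
population = qmc.scale(sampler.random(n_candidates), [lower] * dim, [upper] * dim)
for _ in range(20):                                   # crude stand-in for ICA iterations
    fitness = np.array([objective(p) for p in population])
    elite = population[np.argsort(fitness)[: n_candidates // 2]]   # prune low-fitness half
    noise = rng.normal(0.0, 0.05, size=elite.shape)
    population = np.vstack([elite, np.clip(elite + noise, lower, upper)])

# Local stage: gradient-based refinement around the best global candidates.
best_starts = population[np.argsort([objective(p) for p in population])[:3]]
results = [minimize(objective, x0, method="L-BFGS-B",
                    bounds=[(lower, upper)] * dim) for x0 in best_starts]
best = min(results, key=lambda r: r.fun)
print(best.fun, best.x.round(3))
```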

Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions

Procedia PDF Downloads 82
13580 The Voluntary Audit of Semi-Annual Consolidated Financial Statements Decision and Accounting Conservatism

Authors: Shuofen Hsu, Ya-Yi Chao, Chao-Wei Li

Abstract:

This paper investigates the relationship between the voluntary audit (hereafter, VA) of semi-annual consolidated financial statements and accounting conservatism. In general, there are four kinds of auditors' assurance services, namely audit, review, agreed-upon procedures and compliance engagements, based on the degree of assurance. VA work by auditors may not only deliver higher audit quality but also provide an important signal of more reliable information than review work. In Taiwan, listed companies had to prepare semi-annual consolidated financial statements reviewed by auditors before 2012, but some listed companies voluntarily chose to upgrade the assurance work from a review to an audit. Due to the adoption of International Financial Reporting Standards, listed companies have been required to prepare second-quarter consolidated financial statements reviewed by auditors since 2013. This rule changes some of the assurance work from an audit to a review, and information asymmetry may therefore increase. To control for selection bias, we use a two-stage model to test the relationship between the VA decision and accounting conservatism. Our empirical results indicate that the VA decision and accounting conservatism have a significantly positive relationship in family-controlled firms. That is, family-controlled firms are more likely to opt for a VA and to prepare more conservative consolidated financial statements to reduce information asymmetry, meaning that there is a complementary effect between VA and accounting conservatism for firms with more information asymmetry. On the contrary, we find that the VA decision and accounting conservatism have a significantly negative relationship in firms controlled by professional managers, meaning that there is a substitution effect between VA and accounting conservatism for firms with less information asymmetry. Finally, the accounting conservatism of consolidated financial statements decreased after the adoption of IFRS (International Financial Reporting Standards) in Taiwan, which means that the disclosure and transparency of consolidated financial statements had improved.

Keywords: voluntary audit, accounting conservatism, audit quality, information asymmetry

Procedia PDF Downloads 227
13579 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium for the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain-dependent shear modulus under cyclic loads makes it difficult to estimate the accurate value to use in the computation of foundation stiffness for a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use the appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. A MATLAB code has been written for these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building structure located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soils of different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. The static stiffness, dynamic stiffness modifiers and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions were calculated for the reduced shear modulus using the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It is easy to see from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that when the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably because of the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
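To make the role of the reduction factor concrete, the short sketch below computes a strain-compatible shear modulus from a generic modulus-reduction curve and uses it in a classical frequency-dependent impedance of a rigid surface foundation (static stiffness times dynamic modifiers). The curve points, foundation dimensions, and modifier expressions are illustrative assumptions, not the values or formulas used in the study.

```python
import numpy as np

# Illustrative modulus-reduction curve G/Gmax versus shear strain (assumed points).
strain = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
g_over_gmax = np.array([1.00, 0.95, 0.70, 0.35, 0.12])

def reduced_shear_modulus(gamma, G_max):
    """Interpolate the strain-compatible shear modulus from the reduction curve."""
    return G_max * np.interp(np.log10(gamma), np.log10(strain), g_over_gmax)

G_max = 60e6           # Pa, small-strain shear modulus (assumed)
rho, nu = 1800.0, 0.4  # soil density (kg/m^3) and Poisson's ratio (assumed)
B = 2.0                # m, half-width of an equivalent square footing (assumed)

for gamma in (1e-5, 1e-4, 1e-3):
    G = reduced_shear_modulus(gamma, G_max)
    Vs = np.sqrt(G / rho)                        # shear wave velocity for the reduced modulus
    K_static = 4.5 * G * B / (2.0 - nu)          # generic vertical static stiffness form
    omega = 2.0 * np.pi * 2.0                    # 2 Hz excitation (assumed)
    a0 = omega * B / Vs                          # dimensionless frequency
    K_dynamic = K_static * (1.0 - 0.1 * a0**2) + 1j * omega * (0.8 * K_static * B / Vs)
    print(f"strain {gamma:.0e}: G = {G/1e6:5.1f} MPa, |K| = {abs(K_dynamic)/1e6:7.1f} MN/m")
```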

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 272
13578 How Manufacturing Firm Manages Information Security: Need Pull and Technology Push Perspective

Authors: Geuna Kim, Sanghyun Kim

Abstract:

This study investigates various need pull (NP) and technology push (TP) factors that may influence the information security management (ISM) process, including the organization's internal needs and external pressures, and examines the role of regulatory pressure in ISM development and performance. The 105 sets of data collected in a survey were tested against the research model using structural equation modeling (SEM). The results indicate that the NP and TP factors had positive effects on the ISM process, except for perceived benefits. Regulatory pressure had a positive effect on the relationship between ISM awareness and ISM development and performance.

Keywords: information security management, need pull, technology push, regulatory pressure

Procedia PDF Downloads 299
13577 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems (L-systems), cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make it applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived in order to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
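As a toy illustration of one of the generative procedures listed above, the snippet below implements a Lindenmayer system (L-system): a seed string is rewritten repeatedly by production rules, and the resulting string can then be interpreted geometrically (e.g., as turtle-graphics drawing commands). The particular axiom and rules are an arbitrary textbook example, not the algorithm developed in the paper.

```python
def l_system(axiom: str, rules: dict[str, str], iterations: int) -> str:
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching example: 'F' = draw forward, '+'/'-' = turn,
# '[' / ']' = push / pop the drawing state.
axiom = "F"
rules = {"F": "F[+F]F[-F]F"}

for n in range(3):
    generated = l_system(axiom, rules, n)
    print(f"iteration {n}: {len(generated)} symbols")
```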

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 378
13576 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake

Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of Kathmandu Valley. This paper aims to explore the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected the livelihood of people and the overall economy of Nepal, causing severe damage and destruction in central Nepal, including the nation's capital. A large part of the affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation efforts on a massive scale. About 800 thousand buildings were affected, leaving 8 million people homeless. The challenge of reconstructing up to 800 thousand houses is arduous for Nepal against the background of its turbulent political scenario and weak governance. Although significant actors are involved in the reconstruction process, no appreciable relief has reached the ground, which is reflected in the frustration of the affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study on post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using a qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, and more than a hundred local social workers, local party leaders, and earthquake victims were selected arbitrarily. Information was collected through semi-structured interviews and open-ended questions, focus group discussions, and field notes, with no previous assumptions. The authors also reviewed literature and documents covering academic and practitioner studies on the challenges of post-earthquake reconstruction in developing countries, such as the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake, and the 2010 Haiti earthquake, which involve building typologies and economic, political, geographical, and geological conditions very similar to those of Nepal. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private sector businesses, and online news. This study concludes that inaccessibility, absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, and manpower shortages are the key challenges of the reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study suggests that good governance, integrated information, public participation, and short-term and long-term strategies to tackle technical issues are crucial factors for timely and quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings of this study are ones that need to be recognized by academics, governments, and implementation agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.

Keywords: Gorkha earthquake, reconstruction, challenges, policy

Procedia PDF Downloads 411
13575 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expressions, metabolic pathways and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges regarding data handling and processing. Tools based on bioinformatics have been developed to resolve the upcoming problems of systematizing, standardizing and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been put on data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate in science and technology studies (STS) on standardization, which rather emphasizes the dynamics, contexts and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profile of diseases to establish systems medical approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 342
13574 Stock Price Informativeness and Profit Warnings: Empirical Analysis

Authors: Adel Almasarwah

Abstract:

This study investigates the nature of the association between profit warnings and stock price informativeness in the context of Jordan as an emerging country. The analysis is based on the response of stock price synchronicity to the profit warning percentages published by Jordanian firms listed on the Amman Stock Exchange throughout the period spanning 2005–2016. Profit warning indicators were found to be negatively related to stock price synchronicity in Jordanian firms, meaning that firms with a high portion of profit warnings incorporate more firm-specific information into their stock prices. Robust regression was used rather than OLS as a parametric test to overcome the variance inflation factor (VIF) and heteroscedasticity issues recognised as having occurred when running the OLS regression; this enabled us to obtain stronger results that fall in line with our prediction that higher profit warnings encourage firm investors to collect and process more firm-specific information than common market information.

Keywords: profit warnings, Jordanian firms, stock price informativeness, synchronicity

Procedia PDF Downloads 142
13573 Weapon Collection Initiatives and the Threat of Small Arms and Light Weapons Proliferation in Volatile Areas of North-Eastern Nigeria as a Way Forward for National Security and Development

Authors: Halilu Babaji, Adamu Buba

Abstract:

The proliferation of small arms and light weapons (SALW) and their illicit trafficking in West Africa, and Nigeria in particular, pose a major threat to peace, security and development in the sub-region. The high circulation of these weapons in the region is a product of the interplay of several factors, deriving principally from internal socio-economic and political dynamics compounded by globalization. The process of globalization has compressed both time and space, making it easier for ideas, goods, persons, services, information, products and money to move across borders with fewer restrictions. This has a negative effect on the entire region, making it easier for arms, ammunition, insurgents, criminals and drugs to flow across national boundaries. The failure of public security in most parts of Nigeria has led communities to indulge in different forms of 'self-help' security measures, ranging from vigilante groups to community-owned arms stockpiling. Having lost confidence in the Nigerian state, parties to some of these conflicts have become entangled in a security dilemma. The quest to procure more arms to guarantee personal and community protection from perceived and real enemies is fuelling a 'domestic arms race'. Therefore, as small arms remain, and proliferate, development is impeded. The impact of SALW on economic well-being and national development in Nigeria is highly significant. Therefore, the need to collect the arms in circulation in Nigeria, particularly in the volatile areas of the North-east, is very important. This will hopefully contribute to government efforts in building a free, secure and peaceful society.

Keywords: arms, development, proliferation, security

Procedia PDF Downloads 327
13572 Free and Open Source Software for BIM Workflow of Steel Structure Design

Authors: Danilo Di Donato

Abstract:

The continuous new releases of free and open source software (FOSS) and the high costs of proprietary software, whose monopoly is characterized by closed code and a low level of implementation and customization by end-users, prompt a reflection on the tools that can be chosen and adopted for the design and representation of new steel constructions. The paper aims to show experimentation carried out to verify the actual potential and effective applicability of FOSS support for the BIM modeling of steel structures, particularly considering the goal of a possible workflow that achieves a high level of development (LOD) and allows effective interchange methods between different software. To this end, the examined software packages are those with open source or freeware licenses, in order to evaluate their use in architectural praxis. The test primarily involved experimentation with FreeCAD, the only open source software that allows a complete and integrated BIM workflow, and the results were then compared with those of two proprietary packages, Sketchup and Tekla BIMsight, which are released in free versions that are not usable for commercial purposes. The experiments carried out on open source and freeware software were then compared with the outcomes obtained with two proprietary packages, Sketchup Pro and Tekla Structures, which have special modules particularly addressed to the design of steel structures. This evaluation concerned different comparative criteria, defined on the basis of categories related to the reliability, efficiency, potential, achievable LOD and user-friendliness of the analyzed software packages. In order to verify the actual outcomes of FOSS BIM for steel structure projects, these results were compared with a simulation related to a real case study carried out with proprietary BIM modeling software. The same design theme, the project of a shelter for a public space, was therefore developed using the different software packages. The purpose of the contribution is thus to assess the developments and potentialities inherent in FOSS BIM, in order to estimate their effective applicability to professional practice, their limits, and the new fields of research they suggest.
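Since FreeCAD exposes its BIM objects through a Python API, a minimal scripted example of creating structural steel members is sketched below; it is meant to be run inside FreeCAD's own Python console, and the member dimensions, labels, and export step are assumptions made for illustration rather than settings taken from the paper's workflow.

```python
# Run inside FreeCAD's Python console (the FreeCAD modules are not pip-installable).
import FreeCAD as App
import Arch

doc = App.newDocument("SteelShelter")

# A simple steel column as an Arch Structure object; dimensions in millimetres (assumed).
column = Arch.makeStructure(length=200.0, width=200.0, height=3000.0)
column.Label = "SteelColumn_01"

# A second member placed on top of the column (assumed placement).
beam = Arch.makeStructure(length=150.0, width=150.0, height=4000.0)
beam.Label = "SteelBeam_01"
beam.Placement.Base = App.Vector(0, 0, 3000.0)

doc.recompute()

# The model can then be exported to IFC for interchange with other BIM tools,
# e.g. via File > Export or importIFC.export([column, beam], "shelter.ifc").
```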

Keywords: BIM, steel buildings, FOSS, LOD

Procedia PDF Downloads 175
13571 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with the increase in wafer size and the decrease in the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify the AMC cross-contamination processes. This work investigates the numerical tools required to study the AMC cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. The analytical solution of the one-dimensional problem was developed, and the calibration of the physical constants was performed by minimizing the least-squares distance between the model (the analytical 1D solution) and the experimental data. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions, using a switch from a Dirichlet to a Neumann condition and an interface condition. The methodology is applied, first using optimization methods with the analytical solution to identify the physical constants, and second using the finite element method, including the adsorption kinetics and the switch from the Dirichlet to the Neumann condition.
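The calibration step described above can be illustrated with a minimal sketch: the analytical solution of 1D diffusion into a semi-infinite medium (a complementary-error-function profile) is fitted to measured concentrations by least squares to recover an effective diffusion coefficient. The measurement values, depths, and exposure time below are synthetic placeholders, and the paper's actual 1D solution may include convection and adsorption terms omitted here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erfc

def concentration(x, t, C0, D):
    """1D diffusion into a semi-infinite medium with constant surface concentration C0."""
    return C0 * erfc(x / (2.0 * np.sqrt(D * t)))

# Synthetic "experimental" data: depths (m), exposure time (s), measured concentrations.
x_meas = np.array([0.0, 1e-3, 2e-3, 4e-3, 8e-3])
t_exposure = 3600.0
c_meas = np.array([1.00, 0.62, 0.33, 0.07, 0.01])   # normalized, placeholder values

def residuals(params):
    C0, log10_D = params
    return concentration(x_meas, t_exposure, C0, 10.0**log10_D) - c_meas

fit = least_squares(residuals, x0=[1.0, -9.0])       # initial guess: C0 = 1, D = 1e-9 m^2/s
C0_fit, D_fit = fit.x[0], 10.0**fit.x[1]
print(f"calibrated C0 = {C0_fit:.3f}, D = {D_fit:.2e} m^2/s")
```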

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization

Procedia PDF Downloads 510
13570 Survey on Data Security Issues Through Cloud Computing Amongst SMEs in Nairobi County, Kenya

Authors: Masese Chuma Benard, Martin Onsiro Ronald

Abstract:

Businesses have been using cloud computing more frequently in recent years because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, the potential risks and weaknesses that could be exploited by attackers, and the various tactics and strategies that could be used to lessen these risks. This study examines data security issues in cloud computing amongst SMEs in Nairobi County, Kenya. The study used a sample size of 48 and a mixed-methods research approach. The findings show that the data owner has no control over the cloud vendor's data management procedures, so there is no way to ensure that data is handled legally; this implies a loss of control over the data stored in the cloud. Data and information stored in the cloud may also face a range of availability issues due to internet outages, which can represent a significant risk to data kept in shared clouds. Integrity, availability, and secrecy issues are all identified.

Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)

Procedia PDF Downloads 85
13569 Construction Strategy of Urban Public Space in Driverless Era

Authors: Yang Ye, Hongfei Qiu, Yaqi Li

Abstract:

The planning and construction of traditional cities are car-oriented, which leads to the problems of insufficient urban public space, fragmentation, and low utilization efficiency. With the development of driverless technology, the urban structure will change from the traditional single-core grid structure to a multi-core model. In terms of traffic organization, with the release of land from traffic facilities, public space will become more continuous and more integrated with traffic space. In the context of driverless technology, urban public space reconstruction is characterized by modularization and high efficiency, and its planning and layout follow a pattern of points (service facilities), lines (smart lines), and surfaces (activity centers). The public space of driverless urban roads will provide diversified urban public facilities and services. The intensive urban layout allows commercial public space to serve central activities and display functions both inside buildings (atria) and outside them (building peripheries). In addition to its recreation function, urban green space can also utilize underground parking space to realize the efficient dispatching of shared cars. The roads inside residential communities will be integrated into the urban landscape, providing conditions for community public activity spaces that change over time and improving the efficiency of space utilization. The intervention of driverless technology will change the thinking of traditional urban construction and make it human-oriented. As a result, urban public space will be richer, more connected, and more efficient, and urban spatial justice will be optimized. By summarizing frontier research, this paper discusses the impact of driverless technology on cities, especially urban public space, which can help landscape architects cope with the future development and changes of the industry and provides a reference for related research and practice.

Keywords: driverless, urban public space, construction strategy, urban design

Procedia PDF Downloads 115
13568 Light-Entropy Continuum Theory

Authors: Christopher Restall

Abstract:

field causing attraction between mixed charges of matter during charge exchanges with antimatter. This asymmetry is caused by non-trinary quark amount variation in matter and antimatter during entropy progression. This document explains how a circularity critique exercise assessed scientific knowledge and developed a unified theory from the information collected. The circularity critique creates greater intuitive leaps than an individual would make naturally, and the information collected can be integrated and assessed thoroughly for correctness.

Keywords: unified theory of everything, gravity, quantum gravity, standard model

Procedia PDF Downloads 43
13567 The Design of Information Technology System for Traceability of Thailand’s Tubtimjun Roseapple

Authors: Pimploi Tirastittam, Phutthiwat Waiyawuththanapoom, Sawanath Treesathon

Abstract:

As several countries import agricultural products from Thailand, those countries require Thailand to establish a traceability system. A traceability system is a tool that reduces risk in the supply chain in a very effective way, as it helps the stakeholders in the supply chain to identify the defect point, which reduces the cost of operation in the supply chain. This research aimed to design a traceability system for the Tubtimjun roseapple for export to China, and it is a qualitative study. The data were collected from experts in the Tubtimjun roseapple and fruit exporting industries, and the data were used to design the traceability system. The design of the Tubtimjun roseapple traceability system followed supply chain theory, from the upstream to the downstream of the supply chain, to support the process and conditions of exporting; it included the database design, system architecture, user interface design and information technology of the traceability system.

Keywords: design, information technology system, traceability, tubtimjun roseapple

Procedia PDF Downloads 171
13566 Application of Electrochromic Glazing for Reducing Peak Cooling Loads

Authors: Ranojoy Dutta

Abstract:

HVAC equipment capacity has a direct impact on occupant comfort and the energy consumption of a building. Glazing gains, especially in buildings with a high window area, can be a significant contributor to the total peak load on the HVAC system, leading to over-sized systems that mostly operate at poor part-load efficiency. In addition, radiant temperature, which largely drives occupant comfort in glazed perimeter zones, is often not effectively controlled despite the HVAC being designed to meet the air temperature set-point. This is due to short-wave solar radiation transmitted through windows, which is not sensed by the thermostat until much later, when the thermal mass in the room releases the absorbed solar heat to the indoor air. The implication of this phenomenon is increased cooling energy use alongside poor occupant comfort. EC glazing can largely eliminate direct solar transmission through windows, both reducing the space cooling loads for the building and improving comfort for occupants near the glazing. This paper will review the exact mechanism by which EC glazing reduces the peak load under design day conditions, leading to reduced cooling capacity versus regular high-performance glazing. Since glazing heat transfer only affects the sensible load, system sizing will be evaluated both with and without the availability of a DOAS, to isolate the downsizing potential of the primary cooling equipment when outdoor air is conditioned separately. Given the dynamic nature of glazing gains due to the sun's movement, effective peak load mitigation with EC requires an automated control system that can predict solar movement and radiation levels so that the right tint state, with the appropriate SHGC, is utilized at any given time for a given facade orientation. Such an automated EC product will be evaluated for a prototype commercial office model situated in four distinct climate zones.
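To make the sizing argument concrete, the quick calculation below compares the instantaneous solar gain through a glazed facade for a clear high-performance glazing versus an electrochromic unit in clear and dark tint states, using the simple relation gain = area x SHGC x incident irradiance. The SHGC values, window area, and irradiance are assumed round numbers for illustration only.

```python
# Instantaneous glazing solar gain: Q = A * SHGC * I
window_area_m2 = 120.0          # assumed glazed area of one facade
peak_irradiance_w_m2 = 700.0    # assumed incident solar irradiance at the design hour

glazing_options = {
    "high-performance clear glazing": 0.38,   # assumed SHGC
    "EC glazing, clear state": 0.40,          # assumed SHGC
    "EC glazing, fully tinted state": 0.09,   # assumed SHGC
}

for name, shgc in glazing_options.items():
    gain_kw = window_area_m2 * shgc * peak_irradiance_w_m2 / 1000.0
    print(f"{name:35s} solar gain at peak: {gain_kw:5.1f} kW")
```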

Keywords: electrochromic glazing, peak sizing, thermal comfort, glazing load

Procedia PDF Downloads 132
13565 An Informative Marketing Platform: Methodology and Architecture

Authors: Martina Marinelli, Samanta Vellante, Francesco Pilotti, Daniele Di Valerio, Gaetanino Paolone

Abstract:

Any development in web marketing technology requires changes in information engineering to identify instruments and techniques suitable for the production of software applications for informative marketing. Moreover, for large web solutions, designing an interface that enables human interactions is a complex process that must bridge informative marketing requirements and the developed solution. A user-friendly interface in web marketing applications is crucial for a successful business. The paper introduces mkInfo, a software platform that implements informative marketing. Informative marketing is a new interpretation of marketing which places information at the center of every marketing action. The creative team includes software engineering researchers who have recently authored an article on automatic code generation. The authors have created the mkInfo software platform to generate informative marketing web applications. For each web application, it is possible to automatically implement an opt-in page, a landing page, a sales page, and a thank-you page: one only needs to insert the content. mkInfo implements an autoresponder to send mail according to a predetermined schedule. The mkInfo platform also includes e-commerce for a product or service. A stakeholder can access any opt-in page and get basic information about a product or service. If he wants to know more, he will need to provide an e-mail address to access a landing page that will generate an e-mail sequence providing him with complete information about the product or service. From this point on, the stakeholder becomes a user and is able to purchase the product or related services through the mkInfo platform. This paper suggests a possible definition of informative marketing, illustrates its basic principles, and finally details the mkInfo platform that implements it. This paper also offers some informative marketing models, which are implemented in the mkInfo platform. Informative marketing can be applied to products or services; it is necessary to realize a web application for each product or service. The mkInfo platform enables the product or service producer to send information concerning a specific product or service to all stakeholders. In conclusion, the technical contributions of this paper are: a different interpretation of marketing based on information; a modular architecture for web applications, particularly those with standard features such as information storage, exchange, and delivery; multiple models to implement informative marketing; and a software platform enabling the implementation of such models in a web application. Future research aims to enable stakeholders to provide information about a product or a service, so that the information gathered includes both the producer's and the stakeholders' points of view. The purpose is to create an all-inclusive management system of the knowledge regarding a specific product or service: a system that includes everything about the product or service and is able to address even unexpected questions.

Keywords: informative marketing, opt in page, software platform, web application

Procedia PDF Downloads 127
13564 Evaluation of Symptoms, Laboratory Findings, and Natural History of IgE Mediated Wheat Allergy

Authors: Soudeh Tabashi, Soudabeh Fazeli Dehkordy, Masood Movahedi, Nasrin Behniafard

Abstract:

Introduction: Food allergy has increased over the last three decades. Since wheat is one of the major constituents of daily meals in many regions throughout the world, wheat allergy is one of the most important allergies, ranking among the 8 most common types of food allergy. Our information about the epidemiology and etiology of food allergies is limited. Therefore, in this study we sought to evaluate the symptoms and laboratory findings in children with wheat allergy. Materials and methods: Twenty-three patients aged up to 18 years with a diagnosis of IgE-mediated wheat allergy were enrolled in this study. Using a questionnaire, we collected their information and organized it into four categories: demographic/identification data, signs and symptoms, comorbidities, and laboratory data. Patients were then followed up for 6 months and their lab data were compared. Results: Most of the patients (82%) presented the symptoms of wheat allergy in the first year of their life. The skin and the respiratory system were the most commonly involved organs, with incidences of 86% and 78%, respectively. Most of the patients with wheat allergy were also sensitive to other types of foods, and sensitivity to egg was the most common (47%). In 57% of patients, IgE levels decreased during the 6-month follow-up period. Conclusion: We do not have enough data on the epidemiology of, and response to therapy in, wheat allergy, and to the best of our knowledge no study has addressed this issue in Iran so far. This study is the first source of information about IgE-mediated wheat allergy in Iran, and it can provide an opening for future studies about wheat allergy and its treatments.

Keywords: wheat allergy, food allergy, IgE

Procedia PDF Downloads 194
13563 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks, and Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years machine learning has gained wider application in various fields, as it has been proven to give good results in cases where large amounts of data have to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree method, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method.
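As a minimal sketch of the modelling approach named above, the snippet trains a decision-tree regressor to predict a dwelling's heat consumption from a few candidate explanatory variables; the feature names, the small synthetic dataset, and the tree depth are placeholders rather than the study's actual data or tuned model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 500

# Hypothetical features per dwelling: floor area (m^2), outdoor temperature (degC),
# allocator installed (0/1), and floor position (0 = middle, 1 = top/ground).
X = np.column_stack([
    rng.uniform(40, 120, n),
    rng.uniform(-10, 15, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Synthetic consumption in kWh with an assumed behavioural effect of the allocator.
y = 30 + 1.2 * X[:, 0] - 2.5 * X[:, 1] - 15 * X[:, 2] + 10 * X[:, 3] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```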

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest method

Procedia PDF Downloads 252
13562 Integrated Genetic-A* Graph Search Algorithm Decision Model for Evaluating Cost and Quality of School Renovation Strategies

Authors: Yu-Ching Cheng, Yi-Kai Juan, Daniel Castro

Abstract:

Energy consumption of buildings has been an increasing concern for researchers and practitioners in the last decade. Sustainable building renovation can reduce energy consumption and carbon dioxide emissions; meanwhile, it can also extend the useful life of existing buildings and facilitate environmental sustainability while providing social and economic benefits to society. School buildings are different from other designed spaces, as they are more crowded and host the largest portion of daily activities and occupants. Strategies that focus on reducing energy use while also improving the students' learning environment have therefore become a significant subject in the development of sustainable school buildings. A decision model is developed in this study to solve complicated and large-scale combinatorial, discrete and deterministic problems such as school renovation projects. The task of this model is to automatically search for the most cost-effective (lower cost and higher quality) renovation strategies. In this study, the search for optimal school building renovation solutions is by nature a large-scale zero-one programming problem. A* is suitable for solving deterministic problems due to its stable and effective search process, while genetic algorithms (GA) provide opportunities to acquire global optimal solutions in a short time via their probabilistic search process. These two algorithms are combined in this study to consider the trade-offs between renovation cost and improved quality, and the resulting decision model is able to evaluate current school environmental conditions and suggest an optimal scheme of sustainable school building renovation strategies. Through the adoption of this decision model, school managers can overcome existing limitations and transform school buildings into spaces that are more beneficial to students and friendlier to the environment.
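A compact sketch of the zero-one search component is given below: a simple genetic algorithm selects a subset of candidate renovation measures that maximises a quality score subject to a budget, which is the kind of cost-quality trade-off the model addresses. The measure costs, quality scores, budget, and GA settings are invented for illustration, and the A* refinement stage of the combined model is not shown.

```python
import random

random.seed(0)

# Hypothetical renovation measures: (cost in thousand USD, quality score).
measures = [(120, 8), (45, 4), (200, 15), (80, 6), (60, 5), (150, 11), (30, 2), (90, 7)]
budget = 400

def fitness(bits):
    cost = sum(c for b, (c, _) in zip(bits, measures) if b)
    quality = sum(q for b, (_, q) in zip(bits, measures) if b)
    return quality if cost <= budget else 0          # infeasible plans get zero fitness

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in measures] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # keep the fittest plans
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print("selected measures:", [i for i, b in enumerate(best) if b], "fitness:", fitness(best))
```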

Keywords: decision model, school buildings, sustainable renovation, genetic algorithm, A* search algorithm

Procedia PDF Downloads 120
13561 The Hidden Role of Interest Rate Risks in Carry Trades

Authors: Jingwen Shi, Qi Wu

Abstract:

We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. In this study, our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding: large downside risks of future exchange rate movements are, in fact, priced significantly in the option market on interest rates. The role played by interest rate risk differs structurally from that of currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially the tail components of the two. For this purpose, our major modeling work is to build a new international asset pricing model in which we use an orthogonal setup for the pricing kernels and specify non-Gaussian dynamics in order to capture three sets of option skews accurately and consistently, across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door to studying the forward premium anomaly through implied information from the interest rate derivative market.

Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity

Procedia PDF Downloads 445
13560 Climate Change Impact Due to Timber Product Imports in the UK

Authors: Juan A. Ferriz-Papi, Allan L. Nantel, Talib E. Butt

Abstract:

Buildings are thought to consume about 50% of the total energy in the UK. The use stage of a building's life cycle has the largest energy consumption, although various assessments show that construction can equal several years of maintenance and operation. The selection of materials with lower embodied energy is very important to reduce this consumption. For this reason, timber is a suitable material due to its low embodied energy and its capacity to act as carbon storage. The use of timber in the construction industry is very significant; sawn wood, for example, is one of the top 5 construction materials consumed in the UK according to National Statistics. Embodied energy for building products considers the energy consumed in the extraction and production stages. However, a product produced locally is not equivalent to one produced further afield. Transport is a very relevant factor that profoundly influences the results of embodied energy. The case of timber use in the UK is important because the balance between imports and exports is strongly negative, with industry consuming more imported timber than domestically produced timber. Nearly 80% of the sawn softwood used in construction is imported. The import-export deficit for sawn wood accounted for more than 180 million pounds during the first four-month period of 2016. More than 85% of these imports come from Europe (83% from the EU). The aim of this study is to analyze the climate change impact due to transport of the timber products consumed in the UK. An approximate estimation of the energy consumed and the carbon emissions is calculated considering each timber product's import origin. The results are compared to the total consumption of each product, estimating the impact of transport on the final embodied energy and carbon emissions. The analysis of these results suggests that one big challenge for climate change is the reduction of external dependency, with the associated improvement of internal production of timber products. A study of the different types of timber products produced in the UK and abroad is developed to understand the possibilities for this country to improve sustainability and self-sufficiency. Reuse and recycling possibilities are also considered.
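A back-of-the-envelope version of the transport estimate described above is sketched below: for each import origin, emissions are approximated as tonnes shipped x distance x a modal emission factor. The tonnages, distances, and emission factors are placeholder figures chosen only to show the structure of the calculation, not data from the study.

```python
# Approximate transport CO2 for imported sawn wood: mass (t) x distance (km) x factor (kgCO2/t-km).
imports = [
    # (origin, tonnes per year, distance in km, mode, emission factor kgCO2 per tonne-km)
    ("Sweden",  900_000, 1_500, "sea",  0.015),
    ("Finland", 700_000, 2_000, "sea",  0.015),
    ("Latvia",  400_000, 1_700, "sea",  0.015),
    ("Ireland", 200_000,   300, "road", 0.090),
]

total_t_co2 = 0.0
for origin, tonnes, distance_km, mode, factor in imports:
    t_co2 = tonnes * distance_km * factor / 1000.0   # convert kg to tonnes of CO2
    total_t_co2 += t_co2
    print(f"{origin:8s} ({mode}): {t_co2:10,.0f} t CO2/year")

print(f"total transport emissions: {total_t_co2:,.0f} t CO2/year")
```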

Keywords: embodied energy, climate change, CO2 emissions, timber, transport

Procedia PDF Downloads 345