Search results for: architecture complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3275

2465 A Universal Approach to Categorize Failures in Production

Authors: Konja Knüppel, Gerrit Meyer, Peter Nyhuis

Abstract:

The increasing interconnectedness and complexity of production processes raise the susceptibility of production systems to failure. The ability to respond quickly to failures is therefore increasingly becoming a competitive factor. The research project "Sustainable failure management in manufacturing SMEs" is developing a methodology to identify failures in production, select preventive and reactive measures to correct them, and establish sustainable failure management systems.

Keywords: failure categorization, failure management, logistic performance, production optimization

Procedia PDF Downloads 367
2464 Communication Anxiety in Nigerian Students Studying English as a Foreign Language: Evidence from Colleges of Education Sector

Authors: Yasàlu Haruna

Abstract:

In every transaction, the use of language is central, regardless of form or complexity, if any meaning is to be harvested from it. Students, constituting a population group in the learning landscape of Nigeria, occupy a central position with a propensity to excel or otherwise in the context of communication, especially in the learning process and in social interaction. The nature or quantum of anxiety or confidence in speaking a second language is not only peculiar to societies where the second language is not an official language; to a degree, the linguistic gap created by the adoption and adaptation syndrome manifests as anxiety or lack of confidence, especially where mastery of the spoken language becomes a major challenge. This paper explores the manner in which linguistic complexity and cultural barriers combine to widen the adaptation and adoption gap. In much the same way, typical difficulties of pronunciation, intonation and accent are vital variables that explain the root cause of anxiety. Using a combination of primary and secondary sources of data drawn from questionnaires, key informant interviews and other available data, the paper concludes that the non-integration of the possibility of anxiety into the education delivery framework has left much to be desired in cultivating second language speakers among students of Nigerian Colleges of Education. In addition, cultural barriers and the absence of integration interfaces in the course of learning, within and outside the classroom, further widen the gap. Again, the mastery of a second language by colleagues, mates and conversation partners remains a contributory factor, largely due to the quality of the preparatory school system in many parts of the country.
The paper recommends that national policies and frameworks be reviewed to provide integration windows in which cultural and conversation-partner deficiencies can be remedied through educational events such as debates, quizzes and symposia, and that improvements be pursued by tailoring commercial advertisements towards the adoption of the second language in commerce and major cultural activities.

Keywords: cultural barriers, integration, college of education, adaptation, second language

Procedia PDF Downloads 82
2463 The Tramway in French Cities: Complication of Public Spaces and Complexity of the Design Process

Authors: Elisa Maître

Abstract:

The redeployment of tram networks in French cities has considerably modified public spaces and the way citizens use them. Beyond the image trams have of contributing to sustainable urban development, the question of user safety in these spaces has been little studied. This study analyses the use of public spaces laid out for trams from the standpoint of legibility and safety, and examines to what extent the complexity of the design process, with its many interactions between numerous and varied players, plays a role in the genesis of these problems. The work is mainly based on analysing the links between the uses of these re-designed public spaces (through observations, interviews with users and accident studies) and the design conditions and processes of the projects studied (mainly through interviews with the actors of those projects). The practical analyses were based on three points of view: that of the planner, that of the user (based on observations and interviews) and that of the road safety expert. The cities of Montpellier, Marseille and Nice are the three fields of study on which the demonstration of this thesis is based. On the one hand, the results show that inserting trams complicates the public spaces of French cities: these complications, related to the restructuring of public spaces for the tram, create difficulties of use and safety concerns. On the other hand, in-depth analyses of the fully transcribed interviews led us to develop particular dysfunction scenarios in the design process. These elements call into question the way the legibility and safety of these new forms of public space are taken into account.
An in-depth analysis of the design processes of public spaces with tram systems would then also be a way of better understanding the choices made, the compromises accepted, and the conflicts and constraints at work in the layout of these spaces. The results concerning the impact that spaces laid out for trams have on difficulty of use suggest different possibilities for improving the way safety for all users is taken into account when designing public spaces.

Keywords: public spaces, road layout, users, design process of urban projects

Procedia PDF Downloads 224
2462 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The AE technique is a well-established tool for discriminating between the damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation set-up was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is useful for constructing a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different constituents were tested under various loadings in order to feed and enrich the learning database.
The methodology presented in this work helped refine the damage threshold for the new-generation materials, and the damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and only slightly perturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
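The unsupervised step can be sketched with a minimal k-means on synthetic two-dimensional AE features; the feature choice, cluster parameters and the two hypothetical signal sources below are illustrative, not the paper's data or its actual clustering algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AE features: (peak amplitude [dB], peak frequency [kHz]).
# The two groups are hypothetical stand-ins for two emission sources.
source_a = rng.normal([45.0, 150.0], [3.0, 10.0], size=(100, 2))
source_b = rng.normal([80.0, 450.0], [3.0, 20.0], size=(100, 2))
X = np.vstack([source_a, source_b])

def kmeans(X, k=2, iters=50):
    """Cluster without a priori labels, as in the unsupervised step."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each signal to its nearest centroid
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        # Recompute centroids (keep the old one if a cluster empties)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X)
```

The clustered signals can then be labelled with the help of the other instruments and used as a learning database for a supervised classifier.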

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 152
2461 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today's communication, and improving the speed of encryption and decryption is a long-standing research problem. In this paper, we propose an elliptic curve cryptographic processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms at different stages of the structure. For finite-field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used for the conversion between the affine coordinate system and the Jacobian projective coordinate system. In the parallel scheduling of point operations on the elliptic curve, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we add mutual pre-operations to point addition and point doubling to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms; each 256-bit scalar multiplication operation takes 0.275 ms, 32 times the scalar-multiplication performance of a CPU (dual-core ARM Cortex-A9).
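The Karatsuba-Ofman idea underlying the optimized multiplier can be sketched in software: one full-width product is replaced by three half-width ones. The bit widths and the base case below are illustrative, not the paper's hardware parameters.

```python
def karatsuba(a, b, bits=256):
    """Multiply two non-negative integers with three half-size
    multiplications instead of four."""
    if bits <= 32:                       # base case: native multiply
        return a * b
    half = bits // 2
    mask = (1 << half) - 1
    a_hi, a_lo = a >> half, a & mask     # split a = a_hi*2^half + a_lo
    b_hi, b_lo = b >> half, b & mask
    hi = karatsuba(a_hi, b_hi, half)
    lo = karatsuba(a_lo, b_lo, half)
    # (a_hi+a_lo)(b_hi+b_lo) - hi - lo = a_hi*b_lo + a_lo*b_hi
    mid = karatsuba(a_hi + a_lo, b_hi + b_lo, half + 1) - hi - lo
    return (hi << (2 * half)) + (mid << half) + lo
```

In hardware the recursion is unrolled into a tree of small multipliers, which is what makes the critical-path pipelining mentioned above possible.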

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 89
2460 Analysis of Direct Current Motor in LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

DC motors, long known as the workhorse of industrial systems, were widely used until the invention of the AC induction motor revolutionised industry. Since then, the use of DC machines has declined: despite their reliability, robustness and simplicity, they lost favour because of their losses. A methodology is proposed to model a DC motor through simulation in LabVIEW in order to assess its real-time performance and whether a change in parameters might bring a substantial improvement in losses and reliability.
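Outside LabVIEW, the lumped-parameter model such a simulation rests on can be sketched with forward-Euler integration of the standard DC-motor state equations; all parameter values below are hypothetical, chosen only to illustrate how a parameter change propagates to the steady state.

```python
# Standard armature-controlled DC motor (parameters hypothetical):
#   L di/dt = V - R*i - Ke*w      (armature circuit)
#   J dw/dt = Kt*i - B*w          (mechanical side, no load torque)
V, R, L = 12.0, 1.0, 0.5          # supply [V], resistance [ohm], inductance [H]
Ke, Kt = 0.01, 0.01               # back-EMF and torque constants
J, B = 0.01, 0.001                # inertia [kg m^2], viscous friction

i, w, dt = 0.0, 0.0, 0.001
for _ in range(60000):            # 60 s of simulated time
    di = (V - R * i - Ke * w) / L
    dw = (Kt * i - B * w) / J
    i, w = i + dt * di, w + dt * dw

# Analytic steady-state speed for comparison:
w_ss = Kt * V / (Kt * Ke + R * B)
```

Re-running the loop with modified R, B or Ke shows directly how a parameter change shifts losses and steady-state speed, which is the kind of study the LabVIEW model is intended for.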

Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation

Procedia PDF Downloads 544
2459 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Since it improves the network throughput, spatial diversity can be an efficient signaling scheme; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network can degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the relay-node operations at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 276
2458 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as potential energy-savings. However, a building stock comprises thousands of buildings with different characteristics making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with loss of detail. Thermal characteristics are aggregated while other characteristics, which could affect the energy efficiency of a building, are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes’ ability to accurately emulate the average energy demands of the corresponding buildings they were meant to represent.
This is done for the buildings’ energy demands as a whole as well as for relevant sub-demands. Both are evaluated in relation to the type- and the age of the building. This should provide researchers, who use archetypes in BSEMs, with an indication of the expected accuracy of the conventional archetype model, as well as the accuracy lost in specific parts of the calculation, due to use of the archetype method.
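The accuracy question can be illustrated with a toy stock (all demand figures hypothetical): each archetype carries the mean demand of its (type, age) segment, and the evaluation compares that mean with the individual buildings it replaces.

```python
from collections import defaultdict

# Hypothetical stock: (building type, age band, heat demand [kWh/m2/yr])
stock = [
    ("detached",  "1961-1972", 152.0),
    ("detached",  "1961-1972", 138.0),
    ("detached",  "1999-2006",  85.0),
    ("apartment", "1961-1972", 110.0),
    ("apartment", "1961-1972",  96.0),
    ("apartment", "1999-2006",  62.0),
]

# Archetype demand = average over each (type, age) segment
segments = defaultdict(list)
for btype, age, demand in stock:
    segments[(btype, age)].append(demand)
archetypes = {seg: sum(d) / len(d) for seg, d in segments.items()}

# Error introduced by representing each building by its archetype
errors = [abs(archetypes[(t, a)] - d) for t, a, d in stock]
mean_abs_error = sum(errors) / len(errors)
```

The same comparison, run per sub-demand (e.g. space heating vs. hot water), gives the breakdown of where the archetype method loses accuracy.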

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 149
2457 Mitigating Denial of Service Attacks in Information Centric Networking

Authors: Bander Alzahrani

Abstract:

Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Routing Paradigm (PSIRP), is one of the promising candidates for a future Internet. It has recently been under the spotlight of the research community, which is investigating the possibility of redesigning the current Internet architecture to solve issues such as routing scalability, security, and quality of service. Bloom filter-based forwarding is a source-routing approach used in the PSIRP architecture. This mechanism is vulnerable to brute-force attacks, which may lead to denial-of-service (DoS) attacks. In this work, we present a new forwarding approach that keeps the advantages of Bloom filter-based forwarding while mitigating attacks on the forwarding mechanism. In practice, we introduce a special type of forwarding node, called Edge-FW, to be placed at the edge of the network. The role of these nodes is to add an extra security layer by validating and inspecting packets at the edge of the network against brute-force attacks, checking whether each packet carries a legitimate forwarding identifier (FId). We leverage a Certificateless Aggregate Signature (CLAS) scheme with a small signature size of 64 bits, which is used to sign the FId; the signature thus becomes bound to a specific FId. Therefore, malicious nodes that inject packets with random FIds are easily detected and dropped at the Edge-FW node when the signature verification fails. Our preliminary security analysis suggests that, with the proposed approach, the forwarding plane is able to resist attacks such as DoS with very high probability.
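The Bloom filter-based forwarding that the approach hardens can be sketched as follows; the filter width m, hash count k and link-identifier names are illustrative assumptions, and the CLAS signature step itself is not reproduced.

```python
import hashlib

M = 256  # FId width in bits (illustrative)
K = 5    # hash functions per link identifier (illustrative)

def _bits(link_id, k=K, m=M):
    """k bit positions for a link identifier, derived from SHA-256."""
    digest = hashlib.sha256(link_id.encode()).digest()
    return [int.from_bytes(digest[2*i:2*i+2], "big") % m for i in range(k)]

def build_fid(link_ids):
    """OR together the link identifiers along the delivery path."""
    fid = 0
    for lid in link_ids:
        for b in _bits(lid):
            fid |= 1 << b
    return fid

def forwards(fid, link_id):
    """A node forwards on a link iff all of the link's bits are set."""
    return all(fid >> b & 1 for b in _bits(link_id))

fid = build_fid(["linkA", "linkB"])   # FId carried in the packet header
```

Because any bit pattern with the right bits set passes the membership test, an attacker can brute-force FIds that match some path; binding a signature to the legitimate FId and verifying it at the Edge-FW node is what closes this gap.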

Keywords: bloom filter, certificateless aggregate signature, denial-of-service, information centric network

Procedia PDF Downloads 194
2456 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described, and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology into a radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in digital signal processing for pattern recognition, so efficient computation of the DCT with a transparent design flow is highly desirable.
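CORDIC evaluates the rotations a transform butterfly needs using only shifts, adds and a small angle table; a floating-point sketch of the rotation mode (word lengths and the paper's radix-2 integration are not reproduced):

```python
import math

def cordic_rotate(theta, n=32):
    """Rotate (1, 0) by theta; converges for |theta| < ~1.74 rad."""
    # Angle table arctan(2^-i) and the constant CORDIC gain correction K
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # steer residual angle to zero
        # In hardware the 2^-i factors are plain wire shifts
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K                      # ~ (cos theta, sin theta)
```

Each FFT/DCT twiddle factor thus costs a fixed number of shift-add iterations instead of general multiplications, which is what keeps the structure regular.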

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 471
2455 The Effect of Fibre Orientation on the Mechanical Behaviour of Skeletal Muscle: A Finite Element Study

Authors: Christobel Gondwe, Yongtao Lu, Claudia Mazzà, Xinshan Li

Abstract:

Skeletal muscle plays an important role in the human body by generating voluntary forces and facilitating body motion. However, the mechanical properties and behaviour of skeletal muscle are still not comprehensively understood. As such, various robust engineering techniques have been applied to better elucidate the mechanical behaviour of skeletal muscle. Muscle mechanics are considered to be highly governed by the architecture of the fibre orientations. Therefore, the aim of this study was to investigate the effect of different fibre orientations on the mechanical behaviour of skeletal muscle. In this study, a continuum mechanics approach, finite element (FE) analysis, was applied to the left biceps femoris long head to determine the contractile mechanism of the muscle using Hill's three-element model. The geometry of the muscle was segmented from magnetic resonance images. The muscle was modelled as a quasi-incompressible hyperelastic (Mooney-Rivlin) material. Two types of fibre orientations were implemented: one with an idealised fibre arrangement, i.e. parallel single-direction fibres going from the muscle origin to the insertion site, and the other with a curved fibre arrangement aligned with the muscle shape. The second fibre arrangement was implemented through the finite element method with non-uniform rational B-splines (FEM-NURBS) by means of user material (UMAT) subroutines. The stress-strain behaviour of the muscle was investigated under idealised exercise conditions and will be further analysed under physiological conditions. The results of the two different FE models have been outputted and qualitatively compared.
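For reference, the strain-energy function of the quasi-incompressible Mooney-Rivlin model mentioned above typically takes the form (the constants C10, C01 and D1 are material parameters fitted per tissue; their values are not given in the abstract):

```latex
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}\,(J - 1)^2
```

where \(\bar{I}_1\) and \(\bar{I}_2\) are the first and second invariants of the deviatoric left Cauchy-Green deformation tensor and \(J\) is the volume ratio; the last term penalises volume change, enforcing quasi-incompressibility.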

Keywords: FEM-NURBS, finite element analysis, Mooney-Rivlin hyperelastic, muscle architecture

Procedia PDF Downloads 477
2454 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures

Authors: Filippo Ranalli, Forest Flager, Martin Fischer

Abstract:

This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To accurately assess cost, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied at each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, yielding average cost savings of up to 30% with comparable computational efficiency.

Keywords: cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures

Procedia PDF Downloads 335
2453 Between the House and the City: An Investigation of the Structure of the Family/Society and the Role of the Public Housing in Tokyo and Berlin

Authors: Abudjana Babiker

Abstract:

The middle of the twentieth century witnessed an explosion in public housing. After the Great Depression, capitalist and communist countries alike launched policies and programs to produce public housing in urban areas. Concurrently, modernism, the leading architectural style of the time, strongly supported this production and was a principal instrument in the success of public housing programs, thanks to the modernist manifesto for manufactured architecture as an international style that serves society and, in parallel, connects it to the other design industries, allowing the mass production of architectural elements. After the Second World War, public housing flourished, especially in communist countries. Public housing was conceived at the time as living space, while workplaces served as the place of production and labor. At the end of the twentieth century, Michel Foucault's introduction of biopolitics highlighted the alteration in the relation between production and labor. The house no longer performs simply as the family's sanctuary from production; it opens, as part of the city, into a space of production, producing not only objects but reproducing the family as an integral part of the production mechanism of the city. While public housing kept changing from one country to another after the failure of modernist public housing in the late 1970s, society continued to change along with the socio-economic conditions of each political-economic system, and public housing followed. The family structure in major cities has been changing dramatically; single parenting and long working hours, for instance, have escalated loneliness in major cities such as London, Berlin and Tokyo, and public housing designed for families no longer suits the single lifestyle of individuals.
This paper investigates the performance of both the single/individual lifestyle and the family/society structure in Tokyo and Berlin in relation to the utilization of public housing under the economic policies and the socio-political environment that produced the individuals and the collective. The study is carried out through an examination of the current individual/society relationship and through case studies of the utilization of housing. The major finding is that the individual and the collective revolve around the city: the city acts as a system that magnetizes and blurs the line between the production and reproduction lifestyles. Mass public housing for families is shifting towards a combination of neo-liberal and socialist housing.

Keywords: loneliness, production/reproduction, work-live, public housing

Procedia PDF Downloads 182
2452 Architecture for Hearing Impaired: A Study on Conducive Learning Environments for Deaf Children with Reference to Sri Lanka

Authors: Champa Gunawardana, Anishka Hettiarachchi

Abstract:

Conducive architecture for learning environments is an area of interest for many scholars around the world. Loss of the sense of hearing leads to the assumption that deaf students are visual learners. Comprehending the favorable non-auditory attributes of architecture can lead to effective, rich and friendly learning environments for the hearing impaired. The objective of the current qualitative investigation is to explore the nature and parameters of the sense of place of deaf children that support optimal learning. The investigation was conducted with hearing-impaired children (age: 8-19; gender: 15 male, 15 female) of the Yashodhara deaf and blind school at Balangoda, Sri Lanka. A sensory ethnography study was adopted to identify the nature of perception and the parameters of the most preferred and least preferred spaces of the learning environment. The common perceptions behind the most preferred places in the learning environment were found to be calmness and quietness, a sense of freedom, volumes characterized by openness and spaciousness, a sense of safety, wide spaces, privacy and belongingness, being less crowded and undisturbed, the availability of natural light and ventilation, a sense of comfort, and a view of green in the surroundings. On the other hand, the least preferred spaces were perceived as dark, gloomy, warm and crowded, as lacking freedom, as having bad smells and glare, and as unsafe. The perception of space by the deaf, by the hierarchy of sensory modalities involved, was identified as: light and color perception (34%), sight or visual perception (32%), touch or haptic perception (26%), smell or olfactory perception (7%) and sound or auditory perception (1%). A sense of freedom (32%) and a sense of comfort (23%) were the predominant psychological parameters leading to an optimal sense of place as perceived by the hearing impaired. Privacy (16%), rhythm (14%), belonging (9%) and safety (6%) were found to be secondary factors.
Open, wide, flowing spaces without visual barriers; transparent doors and windows or open portholes to ease communication; comfortable volumes; naturally ventilated spaces; natural lighting or diffused artificial lighting without glare; sloping walkways; and wide stairways, walkways and corridors with ample distance for signing were identified as positive characteristics of the learning environment investigated.

Keywords: deaf, visual learning environment, perception, sensory ethnography

Procedia PDF Downloads 224
2451 Linearization and Process Standardization of Construction Design Engineering Workflows

Authors: T. R. Sreeram, S. Natarajan, C. Jena

Abstract:

Civil engineering construction is a network of tasks involving varying degrees of complexity, and streamlining and standardization are the only way to establish a systematic approach to design. While there are off-the-shelf tools such as AutoCAD that play a role in the realization of a design, the repeatable process in which these tools are deployed is often ignored. The present paper addresses this challenge through a sustainable design process and effective standardization at all stages of the design workflow. This is demonstrated through a case study in the context of construction, and further points for improvement are highlighted.

Keywords: system, lean, value stream, process improvement

Procedia PDF Downloads 118
2450 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal scheduling of decentralized power and heat generators and storage systems, including by linear or mixed-integer linear programming. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installations of one type into a single aggregated unit, described by the same constraints and objective-function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants matches the calculated output of the aggregated unit. Another way to reduce the number of decision variables is to reduce the number of time steps to be calculated. This is useful when a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a higher or lower temporal resolution. Both approaches are examined with respect to the resulting calculation time as well as optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbines) with different numbers of plants serve as references for investigating both procedures with regard to calculation duration and optimality.
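The aggregation idea can be sketched with a toy example: a trivial dispatch rule stands in for the actual MILP solve (the point here is only the reduction in variable count and the second-stage disaggregation), and all plant data are hypothetical.

```python
# n identical CHP units, each with output limits [p_min, p_max].
# Instead of n decision variables per time step, optimise one
# aggregated unit with limits [n*p_min, n*p_max], then disaggregate.
n, p_min, p_max = 10, 2.0, 8.0          # hypothetical plant data [MW]
demand = [35.0, 60.0, 24.0]             # load per time step [MW]

def solve_aggregated(demand):
    """One variable per time step instead of n (stand-in for the MILP):
    meet demand within the aggregated unit's limits."""
    return [min(max(d, n * p_min), n * p_max) for d in demand]

def disaggregate(p_agg):
    """Second-stage optimisation (here: equal split over identical
    units) reproducing the aggregated unit's output."""
    return [[p / n] * n for p in p_agg]

p_agg = solve_aggregated(demand)
schedules = disaggregate(p_agg)
```

With mixed unit types or minimum up/down times, the second stage becomes a genuine (but much smaller) optimisation per unit group rather than an equal split.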

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 169
2449 The Reenactment of Historic Memory and the Ways to Read past Traces through Contemporary Architecture in European Urban Contexts: The Case Study of the Medieval Walls of Naples

Authors: Francesco Scarpati

Abstract:

Because of their long history, ranging from ancient times to the present day, European cities feature many historical layers, whose individual identities are represented by traces surviving in the urban design. However, urban transformations, in particular those produced by the property speculation phenomena of the 20th century, have often compromised the readability of these traces, resulting in a loss of the historical identities of the individual layers. The purpose of this research is therefore a reflection on the theme of the reenactment of historical memory in stratified European contexts and on how contemporary architecture can help to reveal the past signs of cities. The research work starts from an analysis of a series of emblematic examples that have already provided an original solution to the described problem, ranging from the architectural detail scale to the urban and landscape scale. The results of these analyses are then applied to the case study of the city of Naples, an emblematic example of a stratified city of ancient Greek origin, where it is possible to read most of the traces of its transformations. Particular consideration is given to the trace of the medieval walls of the city, which once clearly separated the city from the surrounding fields but is no longer readable today. Finally, solutions and methods of intervention are proposed to ensure that the trace of the walls, read as a boundary, can be revealed through the contemporary project.

Keywords: contemporary project, historic memory, historic urban contexts, medieval walls, Naples, stratified cities, urban traces

Procedia PDF Downloads 263
2448 Methods Employed to Mitigate Wind Damage on Ancient Egyptian Architecture

Authors: Hossam Mohamed Abdelfattah Helal Hegazi

Abstract:

Winds and storms are crucial weathering factors and a primary cause of destruction and erosion for all materials on the Earth's surface. This naturally includes historical structures, and the impact of winds and storms intensifies their deterioration, particularly when high-hardness sand particles are carried along the ground. Ancient Egyptians utilized various methods to prevent wind damage to their architecture throughout the ancient Egyptian periods. One technique was the use of clay or compacted earth as a filling material between opposing walls made of stone, bricks, or mud bricks. Walls made of reeds or woven tree branches were covered with clay to prevent the infiltration of wind and rain and to enhance structural integrity; this method was commonly used in hollow layers. Additionally, Egyptian engineers developed a type of adobe brick with uniformly leveled sides, manufactured from dried clay. They utilized stone barriers, constructed wind traps, and planted trees in rows parallel to the prevailing wind direction. Moreover, they employed receptacles to drain rainwater resulting from wind-driven rain and used mortar to fill gaps in roofs and structures. Finally, proactive measures such as the removal of sand from around historical and archaeological buildings were taken to prevent adverse effects.

Keywords: winds, storms, weathering, destruction, erosion, materials, Earth's surface, historical structures, impact

Procedia PDF Downloads 50
2447 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level); the variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically, and computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the technique most commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed: network topologies such as small-world, distance-based, and scale-free networks may be used to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, including genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties of stocks (the stylized facts), which are used for the calibration and validation of ASMs, are also discussed. We then review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed. Future ASM research may focus on the macro level by analyzing market dynamics, or on the micro level by investigating the wealth distributions of the agents.
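A minimal sketch of an agent-based market of the kind such surveys cover, with two heterogeneous agent types. The agent types, coefficients, and price-update rule below are illustrative assumptions, not taken from any specific study in the survey:

```python
import random

def simulate_market(steps, n_fund, n_chart, fundamental=100.0, seed=42):
    """Toy agent-based market: fundamentalists push the price toward a
    fundamental value, chartists extrapolate the last price change, and
    aggregate excess demand moves the price (plus small noise)."""
    rng = random.Random(seed)
    prices = [fundamental, fundamental]
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        fund_demand = n_fund * (fundamental - p) * 0.01   # mean reversion
        chart_demand = n_chart * (p - prev) * 0.05        # trend following
        noise = rng.gauss(0, 0.1)
        prices.append(p + fund_demand + chart_demand + noise)
    return prices
```

Even this toy micro-level specification produces macro-level price dynamics; richer versions add learning, networks, and wealth tracking to reproduce stylized facts.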

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 350
2446 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle

Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad

Abstract:

Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and likewise in geometry. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square, and the hexagon. By its very nature, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. In order to explain the principles of geometrical interaction between the square and the equal triangle, all types of their linear forces are first illustrated individually, and then in combination. In this analysis, angles are created from the intersection of their directions; all angles are categorized into groups, and the mathematical expressions among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing plan designs.
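Under the assumption that the square contributes the edge directions 0° and 90° and the equilateral triangle the directions 0°, 60°, and 120° (an illustrative choice of orientation, not the paper's own categorization), the set of angles created by intersecting these direction families can be sketched as:

```python
def intersection_angles(dirs_a, dirs_b):
    """Distinct acute/right angles (in degrees) formed when two families
    of line directions, given modulo 180 degrees, intersect."""
    angles = set()
    for a in dirs_a:
        for b in dirs_b:
            d = abs(a - b) % 180
            d = min(d, 180 - d)   # take the acute (or right) angle
            if d > 0:
                angles.add(d)
    return sorted(angles)

square_dirs = [0, 90]          # edge directions of an axis-aligned square
triangle_dirs = [0, 60, 120]   # edge directions of an equilateral triangle
```

With these assumptions the two shapes generate the angle groups 30°, 60°, and 90°, which is the kind of categorization the analysis builds its mathematical expressions on.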

Keywords: angle, equal triangle, square, structural hierarchy

Procedia PDF Downloads 189
2445 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations

Authors: Ram Mohan, Richard Haney, Ajit Kelkar

Abstract:

Current developments in computing have shown the advantage of using one or more Graphic Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPUs for High Performance Computing (HPC) can be categorized as hardware- or software-oriented in nature. Understanding how these factors affect performance is essential for developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of the challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as its computational core; its characteristics and results reflect many other HPC applications via the sparse matrix system used for the solution of linear systems of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
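As a minimal sketch of the sparse-matrix kernel such finite element solvers revolve around, the compressed sparse row (CSR) matrix-vector product looks like the following. This is a pure-Python illustration of the data structure, not the authors' GPU implementation:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix A stored in CSR (compressed sparse row) form.
    data: the nonzero values, row by row;
    indices: the column index of each nonzero;
    indptr: offsets into data/indices where each row starts."""
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y
```

On a GPU this kernel's performance is dominated by memory access patterns (irregular reads of `x` via `indices`), which is one concrete way the hardware and software factors discussed above interact.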

Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance

Procedia PDF Downloads 359
2444 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 organization Insight Enterprises, to standardize machine learning solutions that process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning processes to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing, with resources collectively stored within a data lake. Involvement in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, ensuring effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 104
2443 Regulatory Guidelines to Support the Design of Nanosatellite Projects in Mexican Academic Contexts

Authors: Alvaro Armenta-Ramade, Arturo Serrano-Santoyo, Veronica Rojas-Mendizabal, Roberto Conte-Galvan

Abstract:

The availability and affordability of commercial off-the-shelf products have given major impetus to university projects related to the design, construction, and launching of small satellites on a global scale. Universities in emerging economies as well as in least developed countries have been able to develop prototypes of small satellites (cubesats and cansats) on limited budgets. The experience gained in the development of small satellites builds capacity for designing more complex aerospace systems. This trend has significantly increased the pace and number of aerospace university projects around the world. In the case of Mexico, projects funded by different agencies have been very effective in accelerating capacity building and technology transfer initiatives in the aerospace ecosystem. However, many of these initiatives have centered their efforts on technology development with minimal or no consideration of key regulatory issues related to frequency assignment, management, and licensing, as well as launching requirements and space debris mitigation measures. These regulatory concerns are fundamental to accomplishing successful missions that take into account the complete value chain of an aerospace project. The purpose of this paper is to develop a regulatory framework to support the efforts of educational institutions working on the development of small satellites in Mexico. We base our framework on recommendations from the International Telecommunications Union (ITU), the United Nations Office for Outer Space Affairs (UNOOSA), and other major actors of the Mexican regulatory ecosystem. In order to develop an integrated and cohesive framework, we draw on complexity science to identify the agents, their roles, and their interactions. Our goal is to create a guiding instrument, available both in print and online, that can also be used in other regions of the world.

Keywords: capacity building, complexity science, cubesats, space regulations, small satellites

Procedia PDF Downloads 252
2442 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems

Authors: L. Kiefer, C. Richter, G. Reinhart

Abstract:

The rising complexity in production systems, due to an increasing number of variants up to customer-innovated products, leads to requirements that hierarchical control systems are unable to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. This approach focuses on attribute-based identification in order to avoid per-object identification costs: instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is identified directly by its attributes. To facilitate this, it is recommended to include identification and the corresponding sensors within handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort of integrating identification processes into handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorizing these sensor systems as well as handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether a combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. An analysis of a list of more than a thousand possible attributes has shown in first investigations that five main characteristics (weight, form, colour, and the amount and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools, for example a sensor pool, are presented. These tools include the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
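The matching step can be sketched as follows. The catalogue entries, attribute names, and timing values below are hypothetical illustrations, not the paper's actual data:

```python
# Hypothetical catalogue; the real methodology categorises sensor and
# handling systems before matching them in a compatibility matrix.
SENSORS = {
    "scale":      {"attributes": {"weight"},         "ident_time_s": 0.5},
    "camera":     {"attributes": {"form", "colour"}, "ident_time_s": 0.2},
    "3d_scanner": {"attributes": {"form", "amount"}, "ident_time_s": 2.0},
}
HANDLING = {
    "conveyor":  {"compatible_sensors": {"scale", "camera"},      "cycle_time_s": 1.0},
    "robot_arm": {"compatible_sensors": {"camera", "3d_scanner"}, "cycle_time_s": 3.0},
}

def feasible_combinations(required_attrs):
    """Sensor/handling pairs that (a) are compatible, (b) cover the
    required attributes, and (c) identify within the handling cycle,
    so handling and identification can run in parallel."""
    result = []
    for h_name, h in HANDLING.items():
        for s_name in h["compatible_sensors"]:
            s = SENSORS[s_name]
            if required_attrs <= s["attributes"] and s["ident_time_s"] <= h["cycle_time_s"]:
                result.append((h_name, s_name))
    return sorted(result)
```

A query like `feasible_combinations({"form", "colour"})` plays the role of the preconfigured process pool: it narrows the selection to combinations that a layperson could then pick from.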

Keywords: agent systems, autonomous control, handling systems, identification

Procedia PDF Downloads 171
2441 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which handles item selection and ability estimation using statistical methods: maximum-information selection (or selection from the posterior) and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators, respectively. This study aims to combine the classical and Bayesian approaches to IRT to create a dataset that is fed to a neural network, which automates the process of ability estimation; the result is then compared to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On building the model and comparing, the neural-network-based model is found to perform 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity: the IRT model has to re-estimate the ability every time it receives a request, whereas an already-trained regressor can produce a prediction in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set, using a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, yielding models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only reducing time complexity but also incorporating newer and better datasets, which would eventually lead to higher-quality testing.
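A minimal sketch of the IRT building blocks described above, assuming the two-parameter logistic (2PL) model and a simple grid-search ML estimator. The paper itself uses PyMC and scikit-learn; this is only an illustration of the quantities involved:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response for
    ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items, grid=None):
    """Maximum-likelihood ability estimate via grid search over theta.
    responses: list of 0/1 scores; items: list of (a, b) parameters."""
    if grid is None:
        grid = [t / 100.0 for t in range(-400, 401)]
    def log_lik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p if r else 1.0 - p)
        return ll
    return max(grid, key=log_lik)
```

This per-request re-estimation is exactly the cost a trained neural network regressor avoids: it replaces the likelihood search with a single forward pass.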

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 169
2440 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network

Authors: Tzu-Yang Lin, Chuan-Ching Sue

Abstract:

Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU), subject to a single-class delay constraint, in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, an analytical model can be derived to determine the optimal sleep time of the ONUs in every cycle without violating the maximum class delay constraint. Bandwidth allocation considering this optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the maximum class delay constraint is not violated, packets with a more relaxed delay constraint are treated as though they had the most stringent delay constraint and may be sent early. This means that the ONU wastes energy in active mode by sending packets in advance that did not need to be sent at the current time. Accordingly, we propose a QoS-aware GBA using a novel intra-ONU scheduling scheme that controls which packets are sent according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but carry different packet delay constraints, the intra-ONU scheduling can be modified to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose a switchable ONU architecture in which the ONU switches architecture according to the sleep time length, further improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes, or with different delay constraints, do not violate their respective delay constraints and consume less power than the original GBA.
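The idea of sending packets according to their individual delay constraints can be sketched as an earliest-deadline-first selection over the current active window. This is a hypothetical simplification for illustration, not the paper's actual intra-ONU scheduler:

```python
def schedule_transmissions(packets, now, window):
    """Sketch of deadline-aware intra-ONU scheduling: send only packets
    whose deadline falls inside the current active window, earliest
    deadline first; defer the rest so the ONU can sleep longer.
    packets: list of (packet_id, deadline) tuples."""
    due = [p for p in packets if p[1] <= now + window]
    deferred = [p for p in packets if p[1] > now + window]
    return sorted(due, key=lambda p: p[1]), deferred
```

Deferring the relaxed-deadline packets is precisely what prevents the energy waste of the original GBA, where such packets would have been transmitted early under the most stringent constraint.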

Keywords: passive optical networks (PON), optical network unit (ONU), energy efficiency, delay constraint

Procedia PDF Downloads 276
2439 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex and mysterious parts of the body, harbors vast potential for exploration. Unraveling these enigmas, especially within neural perception and cognition, is the goal of neural decoding. Harnessing advancements in generative AI, particularly in visual computing, this work seeks to elucidate how the brain comprehends the visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from Functional Magnetic Resonance Imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior-quality outputs. This addresses the limitations of prior methods such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region; this extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides enough evidence to identify the most influential regions of the brain responsible for cognition and perception.

Keywords: BLIP, fMRI, latent diffusion model, neural perception

Procedia PDF Downloads 62
2438 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with the high accuracy required in multiple applications. In the education domain, diagrams can have varied characteristics, e.g., line-based geometric diagrams, chemical bonds, and mathematical formulas. Two broad categories of approaches try to solve similar problems: traditional computer-vision-based approaches and deep learning approaches. The traditional computer-vision-based approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures and suffer from a performance-accuracy tradeoff. This paper proposes a U-Net-based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
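The post-processing implied by the last sentence, turning a segmentation mask into individual (possibly irregular) diagram regions, can be sketched with a plain connected-components pass. This is an illustrative simplification, not the paper's implementation:

```python
def extract_regions(mask):
    """Extract connected foreground regions (candidate diagrams) from a
    binary segmentation mask via iterative flood fill. Returns one set of
    (row, col) pixels per region, so irregular shapes are preserved
    rather than forced into rectangular boxes."""
    rows, cols = len(mask), len(mask[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, region = [(r, c)], set()
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(region)
    return regions
```

Keeping the pixel sets (rather than bounding boxes) is what allows a segmentation approach to recover diagrams of irregular shape, in contrast to detector-style outputs from YOLO or Faster R-CNN.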

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 128
2437 Sustainability through Resilience: How Emergency Responders Cope with Stressors

Authors: Sophie Kroeling, Agnetha Schuchardt

Abstract:

Striving for sustainability brings many challenges for different fields of interest, e.g., security or health concerns. In Germany, civil protection is predominantly carried out by emergency responders, who perform essential tasks of civil protection. Based on theoretical concepts from different psychological stress theories, this contribution focuses on the question of how the resilience of emergency responders can be improved. The goal is to identify resources and successful coping strategies that help to prevent and reduce negative outcomes during or after stressful events. The paper presents results from a qualitative analysis of semi-structured interviews with 20 emergency responders. These results provide insights into the complexity of coping processes (e.g., controlling the situation, downplaying perceived personal threats through humor) and show the diversity of stressors (such as the complexity of the disastrous situation, intrusive press and media, or lack of social support within the organization). Self-efficacy expectation was a very important resource for coping with stressful situations. The results served as a starting point for a quantitative survey (conducted in March 2017), the development of education and training tools for emergency responders, and the improvement of critical incident stress management processes. First results from the quantitative study, with more than 700 participants, show, for example, that emergency responders use social coping both within their private social network and within their aid organization, and that both are correlated with resilience. Moreover, missing information, bureaucratic problems, and social conflicts within the organization are events that the majority of the participants considered very onerous. Further results from regression analysis will be presented. The proposed paper will combine findings from the qualitative study with the quantitative results, illustrating figures and correlations with respective statements from the interviews. Finally, suggestions for improving emergency responders' resilience are given, and it is discussed how this can contribute to civil security and, furthermore, to sustainable development.

Keywords: civil security, emergency responders, stress, resilience, resources

Procedia PDF Downloads 140
2436 Energy-Saving Methods and Principles of Energy-Efficient Concept Design in the Northern Hemisphere

Authors: Yulia A. Kononova, Znang X. Ning

Abstract:

Nowadays, architectural development is getting faster and faster. Nevertheless, modern architecture often fails to meet the requirements that could help our planet get better. People spend an enormous amount of energy every day of their lives, and because of this uncontrolled energy usage, energy production has to increase. As energy production demands a lot of fuel sources, it causes many problems, such as climate change, environmental pollution, animal extinction, and the depletion of energy sources. Nevertheless, humanity now has every opportunity to change this situation, and architecture is one of the most popular fields in which new methods of saving energy, or even creating it, can be applied. Today there are kinds of buildings that can meet these demands; one of them is the energy-efficient building, which can save or even produce energy by combining several energy-saving principles. The main aim of this research is to provide information that helps to apply energy-saving methods while designing an environmentally friendly building. The research methodology requires gathering relevant information from literature, building guideline documents, and previous research works in order to analyze it and distill it into material that can be applied to energy-efficient building design. As a result, it should be noted that applying all the energy-saving methods to a building design project results in ultra-low-energy buildings that require little energy for space heating or cooling. In conclusion, developing passive house design methods can decrease the need for energy production, which is an important issue that has to be solved in order to conserve the planet's resources and decrease environmental pollution.
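One quantitative element behind these methods is the steady-state transmission heat loss Q = U · A · ΔT, which shows why superinsulation (a low U-value) is central to passive house design. The relation is standard building physics; the specific U-values below are illustrative assumptions:

```python
def transmission_heat_loss(u_value, area_m2, delta_t):
    """Steady-state transmission heat loss through a building element,
    Q = U * A * dT, in watts. u_value in W/(m^2 K), delta_t in K."""
    return u_value * area_m2 * delta_t

# A conventional wall (assumed U = 0.5 W/m^2K) versus a superinsulated
# passive-house wall (assumed U = 0.1 W/m^2K), for 100 m^2 of wall area
# and a 30 K indoor-outdoor temperature difference:
conventional = transmission_heat_loss(0.5, 100, 30)
passive = transmission_heat_loss(0.1, 100, 30)
```

Under these assumptions the superinsulated wall loses a fifth of the heat of the conventional one, which is the mechanism behind the "little energy for space heating" claim above.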

Keywords: accumulation, energy-efficient building, storage, superinsulation, passive house

Procedia PDF Downloads 259