Search results for: biomimetic architecture
1196 Sustainable Concepts Applied in the Pre-Columbian Andean Architecture in Southern Ecuador
Authors: Diego Espinoza-Piedra, David Duran
Abstract:
All architectural and land use processes are framed in a cultural, social and geographical context. The present study analyzes the Andean culture before the Spanish conquest in southern Ecuador, in the province of Azuay. This area has been inhabited for more than 10,000 years. The Canari and the Inca cultures occupied Azuay shortly before the arrival of the Spanish conquerors. The Inca culture was settled in the Andes Mountains. The Canari culture was established in the south of Ecuador, in the present-day provinces of Azuay and Canar. Unlike its history and archeology, to the best of our knowledge, the architecture of this area has not yet been studied, because of the scarcity of architectural structures. Consequently, the present research reviewed land use and culture for architectural interpretations. The two main architectural objects in these cultures were dwellings and public buildings. In the first case, housing was conceived as temporary: it had to stand only as long as its inhabitants lived. Therefore, houses were built when a couple got married. The whole community carried out the construction through the so-called ‘minga’, or collective work. The construction materials were tree branches, reeds, agave, earth, and straw, so that when the owners aged and then died, the house was easily dismantled and demolished. Its materials became part of the land for agriculture. This cycle was repeated indefinitely. In the second case, the buildings which we may call public have been subject to erroneous interpretations. They have been defined as temples, but according to our conclusions, they were places for temporary accommodation, storage of objects and products, and in some special cases, even astronomical observatories. These public buildings were settled along the important road system called ‘Capac-Nam’, currently declared a World Cultural Heritage site by UNESCO. The buildings had different scales and stood at regular distances.
Also, they were established in special or strategic places, which constituted a system of observatories. These observatories allowed the determination of the cycles or calendars (solar or lunar) necessary for agricultural production, as well as of other natural phenomena. The few physical structures that survive, limited in both number and state of conservation, are mostly at the level of foundations or fragments of walls. Therefore, this study was carried out after the identification of the history and culture of the inhabitants of this Andean region. Keywords: Andean, pre-Columbian architecture, Southern Ecuador, sustainable
Procedia PDF Downloads 127
1195 Place-Making Theory behind Claremont Court
Authors: Sandra Costa-Santos, Nadia Bertolino, Stephen Hicks, Vanessa May, Camilla Lewis
Abstract:
This paper aims to elaborate the architectural theory on place-making that supported the Claremont Court housing scheme (Edinburgh, United Kingdom). Claremont Court (1959-62) is a large post-war mixed-development housing scheme designed by Basil Spence, which included ‘place-making’ as one of its founding principles. Although some stylistic readings of the housing scheme have been published, the theory on place-making that allegedly ruled the design has yet to be clarified. Architecture allows us to mark or make a place within space in order to dwell. Under the framework of contemporary philosophical theories of place, this paper aims to explore the relationship between place and dwelling through a cross-disciplinary reading of Claremont Court, with a view to developing an architectural theory on place-making. Since dwelling represents the way we are immersed in our world in an existential manner, this theme is relevant not just for architecture but also for philosophy and sociology. The research in this work is interpretive-historic in nature. It examines documentary evidence of the original architectural design, together with relevant literature in sociology, history, and architecture, through the lens of theories of place. First, the paper explores how the dwelling types originally included in Claremont Court supported ideas of dwelling or meanings of home. Then, it traces shared space and social ties in order to study the symbolic boundaries that allow the creation of a collective identity or sense of belonging. Finally, the relation between the housing scheme and the supporting theory is identified. The findings of this research reveal Scottish architect Basil Spence’s exploration of the meaning of home, as he changed his approach to mass housing while acting as President of the Royal Institute of British Architects (1958-60).
When the British Government was engaged in various ambitious building programmes, he sought to draw architecture into a wider socio-political debate as president of the RIBA, hence moving towards a more ambitious and innovative socio-architectural approach. Rather than trying to address the ‘genius loci’ with an architectural proposition, as has been stated, the research shows that the place-making theory behind the housing scheme was supported by notions of community based on shared space and dispositions. The design of the housing scheme was steered by a desire to foster social relations and collective identities, rather than by the idea of keeping the spirit of the place. This research is part of a cross-disciplinary project funded by the Arts and Humanities Research Council. The findings present Claremont Court as a signifier of Basil Spence’s attempt to address the post-war political debate on housing in the United Kingdom. They highlight the architect’s theoretical agenda and challenge current, purely stylistic readings of Claremont Court, which fail to acknowledge its social relevance. Keywords: architectural theory, dwelling, place-making, post-war housing
Procedia PDF Downloads 265
1194 Normalized Enterprises Architectures: Portugal's Public Procurement System Application
Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso
Abstract:
The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules with the purpose of enabling evolution in information systems, as well as ensuring that they are ready for change. To make that possible, this work’s solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. This application is achieved through the adaptation of the elements of this theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory’s encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures, thus fulfilling the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities. The aim is for every economic operator to have access to all public tenders, which are published on any of the 6 existing platforms, independently of where they are registered. To make this possible, we applied our solution to the construction of two different architectures, which are capable of fulfilling the requirements of the Portuguese government. One of those architectures, TO-BE A, has a message broker that performs the communication between the platforms. The other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also represent the AS-IS architecture, which demonstrates the current behavior of the Public Procurement System.
Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures regarding the fulfillment of the rules and theorems of the Normalized Systems Theory, together with some quality metrics. Keywords: ArchiMate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms
Procedia PDF Downloads 356
1193 Modified Model-Based Systems Engineering Driven Approach for Defining Complex Energy Systems
Authors: Akshay S. Dalvi, Hazim El-Mounayri
Abstract:
The internal and external interactions between the complex structural and behavioral characteristics of a complex energy system result in unpredictable emergent behaviors. These emergent behaviors are not well understood, especially when modeled using the traditional top-down systems engineering approach. The intrinsic nature of current complex energy systems calls for an elegant solution that provides an integrated framework in Model-Based Systems Engineering (MBSE). This paper presents an MBSE-driven approach to define and handle the complexity that arises due to emergent behaviors. The approach provides guidelines for developing a system architecture that helps in predicting the complexity index of the system at different levels of abstraction. A framework that integrates indefinite and definite modeling aspects is developed to determine the complexity that arises during the development phase of the system. This framework provides a workflow for modeling complex systems using the Systems Modeling Language (SysML) that captures the system’s requirements, behavior, structure, and analytical aspects at both the problem definition and solution levels. A system architecture for a district cooling plant is presented, which demonstrates the ability to predict the complexity index. The result suggests that complex energy systems like a district cooling plant can be defined in an elegant manner using this unconventional, modified MBSE-driven approach, which helps in estimating development time and cost. Keywords: district cooling plant, energy systems, framework, MBSE
Procedia PDF Downloads 128
1192 A Study on How to Develop the Usage Metering Functions of BIM (Building Information Modeling) Software under Cloud Computing Environment
Authors: Kim Byung-Kon, Kim Young-Jin
Abstract:
As project opportunities for the architecture, engineering and construction (AEC) industry have grown more complex and larger, the utilization of BIM (Building Information Modeling) technologies for 3D design and simulation practices has been increasing significantly; typical applications of BIM technologies include clash detection and design alternatives based on 3D planning, which have been extended to construction management in the AEC industry for virtual design and construction. As of now, commercial BIM software has been operated under a single-user environment, which is why the initial costs for its introduction are very high. Cloud computing, one of the most promising next-generation Internet technologies, enables simple Internet devices to use the services and resources provided with BIM software. Recently in Korea, studies linking BIM and cloud computing technologies have been directed toward saving the costs of building BIM-related infrastructure and providing various BIM services for small- and medium-sized enterprises (SMEs). This study addressed how to develop the usage metering functions of BIM software under a cloud computing architecture in order to archive and use BIM data and create an optimal revenue structure so that BIM services may grow spontaneously, considering the demand for cloud resources. To this end, the author surveyed relevant cases and then analyzed needs and requirements from the AEC industry. Based on the results and findings of the foregoing survey and analysis, the author proposes how to optimally develop the usage metering functions of cloud BIM software. Keywords: construction IT, BIM (Building Information Modeling), cloud computing, BIM-based cloud computing, 3D design, cloud BIM
Procedia PDF Downloads 506
1191 Energy Efficient Buildings in Tehran by Reviewing High-Tech Methods and Vernacular Architecture Principles
Authors: Shima Naderi, Abbas Abbaszadeh Shahri
Abstract:
Energy resources are accessible and affordable in Iran; thus, surplus access to fossil fuels, combined with a high level of economic growth, leads to serious environmental crises such as pollutants and greenhouse gases in the atmosphere, an increase in average temperatures, and a lack of water resources, especially in Tehran, the capital city of Iran. As the building sector consumes a huge portion of energy, taking action towards alternative sources of energy, as well as conserving non-renewable energy resources and applying architectural energy-saving methods, is the fundamental basis for achieving sustainability goals. This study tries to explore the implementation of both high technologies and traditional approaches for the reduction of energy demands in the buildings of Tehran, and to introduce some factors and instructions for achieving this purpose. Green and energy-efficient buildings such as ZEBs (zero energy buildings) make it possible to preserve natural resources for the next generations by reducing pollution and increasing ecosystem self-recovery. However, ZEBs are not widespread in Iran because of their low economic efficiency; they are not viable for a private entrepreneur without governmental support. Therefore, executing architectural energy efficiency can be a better option. It is necessary to experience a substructure expansion with respect to the traditional residential building style. Renewable energies and passive design, which are a substantial part of the history of architecture in Iran, can be regenerated and employed as an essential part of designing energy-efficient buildings. Keywords: architectural energy efficiency, passive design, renewable energies, zero energy buildings
Procedia PDF Downloads 357
1190 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems
Authors: Moustafa Osman Mohammed
Abstract:
Photocurrent generation is driving ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots (QDs) provide a great enhancement of solar cell efficiency through the use of quantum confinement to tune absorbance across the solar spectrum and to enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to improve system efficiency to greater than 50% under regular electron excitation intensities. In solar cell devices, an intermediate band is formed by the electron levels in quantum dot systems. The spatial architecture explores how a solar cell can be integrated to produce not only a high open-circuit voltage (> 1.7 eV) but also large short-circuit currents due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm, while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The assembly of the electronic model is flexible enough to demonstrate the atomic and molecular structure and material properties, so as to tune the energy bandgaps of the barrier and quantum dot to their respective optimum values. In terms of energy conversion, the efficiency and cost of the electronic structure are unified to outperform a pair of multi-junction solar cells, as obtained in a rigorous test to quantify the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the edges of the energy bandgap between the barrier material and the quantum dot systems according to the design limits of the medium. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited due to non-radiative recombination processes in QD solar cells.
The orientation of the voltage recovery system is compared theoretically with the experimental Voc variation against the upper limit obtained from one-diode modeling of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for improving Voc is estimated at greater than 1 V by using smaller QDs through QD solar cell recovery systems, as confined to other micro- and nano-operation states. Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling
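For reference, the one-diode model mentioned above relates Voc to the short-circuit and saturation current densities; the following is a standard textbook form of that relation (the symbols are the usual photovoltaic notation, not taken from this abstract):

```latex
% One-diode model: open-circuit voltage upper limit
% J_sc: short-circuit current density, J_0: reverse saturation current density
% n: diode ideality factor, k: Boltzmann constant, T: temperature, q: elementary charge
V_{oc} = \frac{n k T}{q} \,\ln\!\left( \frac{J_{sc}}{J_0} + 1 \right)
```

Non-radiative recombination raises J_0, which is why it fundamentally limits the achievable Voc.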
Procedia PDF Downloads 154
1189 Employing a System of Systems Approach in the Maritime RobotX Challenge: Incorporating Information Technology Students in the Development of an Autonomous Catamaran
Authors: Adam Jenkins
Abstract:
The Maritime RobotX Challenge provides a platform for postgraduate students conducting research in autonomous robotic systems to participate in an international competition. Although targeted at postgraduate students, the problem domain lends itself to a wide range of levels of student expertise. In 2022, undergraduate Information Technology students from the University of South Australia undertook the challenge, utilizing a System of Systems approach to the project's architecture. Each student group produced an independent solution to an identified task, which was then implemented on a single-board computer (SBC). A central control system then engaged each solution when appropriate, allowing the encapsulated SBC systems to manage each task as it was encountered. This approach facilitated collaboration among the multiple independent student teams over an 18-month period, and the fundamentally system-agnostic architecture allowed for both the variance in student solutions and the limitations caused by the global electronics shortage. By adopting this approach, the Information Technology teams were able to work independently yet produce an effective solution, leveraging their expertise to develop and construct an autonomous catamaran capable of meeting the competition's demanding requirements while producing a high level of engagement. The System of Systems approach is recommended to other universities interested in competing at this level and engaging students in a real-world problem. Keywords: case study, robotics, education, programming, system of systems, multi-disciplinary collaboration
Procedia PDF Downloads 75
1188 Ranking of Optimal Materials for Building Walls from the Perspective of Cost and Waste of Electricity and Gas Energy Using the AHP-TOPSIS Technique: Study Example: Sari City
Authors: Seyedomid Fatemi
Abstract:
The walls of the building, as the main intermediary between the outside and the inside of the building, play an important role in controlling environmental conditions and ensuring the comfort of the residents, thus reducing the heating and cooling loads. Therefore, the use of suitable materials is considered one of the simplest and most effective ways to reduce the heating and cooling loads of the building, which will also save energy. Accordingly, in order to achieve the goal of the research, 'Ranking of optimal materials for building walls', optimal materials for building walls in a temperate and humid climate (case example: Sari city) have been investigated from the perspective of embodied energy, waste of electricity and gas energy, cost, and reuse, in order to achieve sustainable architecture. In this regard, using information obtained from the Sari Municipality, design components were presented by experts using the Delphi method. Considering the criteria from the experts' opinions (cost and reuse), the embodied energy of the materials, and the electricity and gas waste of the different wall materials, and with the help of the AHP weighting technique followed by the TOPSIS technique, the best types of materials were proposed for use in the walls of buildings in Sari city, in the following order: 1) 3-D panel, 2) ICF, 3) cement block with pumice, 4) Wallcrete block, 5) clay block, 6) autoclaved aerated concrete (AAC), 7) foam cement block, 8) Aquapanel, and 9) reinforced concrete wall. Keywords: optimum materials, building walls, moderate and humid climate, sustainable architecture, AHP-TOPSIS technique
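The AHP-then-TOPSIS pipeline described above can be sketched in a few lines: AHP turns pairwise criterion judgments into weights, and TOPSIS ranks alternatives by closeness to an ideal solution. The pairwise matrix and decision matrix below are illustrative placeholders, not the study's data, and the geometric-mean weight estimate is one common approximation of the principal eigenvector:

```python
import numpy as np

def ahp_weights(pairwise):
    # geometric-mean approximation of the principal eigenvector
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def topsis_scores(decision, weights, benefit):
    # vector-normalize each criterion column, then apply the AHP weights
    v = decision / np.linalg.norm(decision, axis=0) * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # higher = closer to the ideal

# hypothetical data: 3 wall materials x 3 cost-type criteria
# (embodied energy, energy waste, cost) -- lower is better for all three
pairwise = np.array([[1.0, 2.0, 3.0],
                     [0.5, 1.0, 2.0],
                     [1 / 3, 0.5, 1.0]])
w = ahp_weights(pairwise)
decision = np.array([[120.0, 30.0, 50.0],
                     [200.0, 45.0, 40.0],
                     [150.0, 25.0, 70.0]])
benefit = np.array([False, False, False])  # all criteria are costs
scores = topsis_scores(decision, w, benefit)
ranking = np.argsort(-scores)  # best material first
```

With real pairwise judgments and wall-material measurements in place of the placeholders, `ranking` would reproduce an ordered list like the nine-material result reported above.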
Procedia PDF Downloads 75
1187 Cocoon Characterization of Sericigenous Insects in North-East India and Prospects
Authors: Tarali Kalita, Karabi Dutta
Abstract:
The North Eastern Region of India, with diverse climatic conditions and a wide range of ecological habitats, makes an ideal natural abode for a good number of silk-producing insects. The cocoon is the economically important life stage from which silk of economic importance is obtained. In recent years, silk-based biomaterials have gained considerable attention, and their performance depends on the structure and properties of the silkworm cocoons as well as the silk yarn. The present investigation deals with the morphological study of cocoons, including cocoon color, cocoon size, shell weight and shell ratio, of eleven different species of silk insects collected from different regions of North East India. Scanning electron microscopy and X-ray photoelectron spectroscopy (XPS) were performed to determine the arrangement of silk threads in the cocoons and the atomic elemental composition, respectively. Further, the collected cocoons were degummed and reeled/spun on a reeling machine or spinning wheel to determine the filament length, linear density and tensile strength using a universal testing machine. The study showed significant variation in terms of cocoon color, cocoon shape, cocoon weight and filament packaging. XPS analysis revealed the presence of the elements (mass %) C, N, O, Si and Ca in varying amounts. The wild cocoons showed the presence of calcium oxalate crystals, which make the cocoons hard and require further treatment before reeling. In the present investigation, the highest percentages of strain (%) and toughness (g/den) were observed in Antheraea assamensis, which implies a more compact molecular packing in muga silk. It is expected that this study will be the basis for further biomimetic studies to design and manufacture artificial fiber composites with novel morphologies and associated material properties. Keywords: cocoon characterization, north-east India, prospects, silk characterization
Procedia PDF Downloads 88
1186 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (fast Fourier transform (FFT), inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing has a wide implementation on general-purpose processors. Our interest was focused on the use of FPGAs (field-programmable gate arrays) in order to minimize the computational complexity of the single-processor architecture, accelerate the processing on FPGA, and meet real-time and energy-efficiency requirements. General-purpose processors are not efficient for signal processing. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than in the Odroid XU4 implementation. The FPGA-based system realizes the processing algorithms with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged cells. We have thus achieved good experimental results in real time and with energy efficiency. Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
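Before porting to an FPGA, the FFT/iFFT stage of such a pipeline is typically validated in software. The sketch below is a minimal NumPy reference, not the paper's implementation: the sampling rate, the decaying-tone echo model, and all constants are assumptions for illustration. It extracts the dominant spectral component of a synthetic backscattered echo and checks the FFT/iFFT round trip:

```python
import numpy as np

fs = 1_000_000  # sample rate in Hz (illustrative)
n = 1024
t = np.arange(n) / fs

# hypothetical backscattered echo: decaying 100 kHz tone plus weak noise
rng = np.random.default_rng(0)
echo = np.exp(-t * 2e4) * np.sin(2 * np.pi * 100e3 * t) \
       + 0.01 * rng.standard_normal(n)

# forward FFT: locate the dominant frequency of the echo
spectrum = np.fft.rfft(echo)
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_hz = freqs[np.argmax(np.abs(spectrum))]

# round trip: the iFFT should reconstruct the time-domain signal
recon = np.fft.irfft(spectrum, n)
max_err = np.max(np.abs(recon - echo))
```

A fixed-point FPGA port of the same chain would then be checked against this floating-point reference, which is one way an absolute error on the order of 10⁻³ can be quantified.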
Procedia PDF Downloads 75
1185 RNA-Seq Based Transcriptomic Analysis of Wheat Cultivars for Unveiling of Genomic Variations and Isolation of Drought Tolerant Genes for Genome Editing
Authors: Ghulam Muhammad Ali
Abstract:
Unveiling of the genes involved in drought tolerance and root architecture using transcriptomic analyses has remained fragmented, limiting further improvement of wheat through genome editing. The purpose of this research endeavor was to unveil the variations in different genes implicated in drought tolerance and root architecture in wheat through RNA-seq data analysis. In this study, 8-day-old seedlings of 6 wheat cultivars, namely Batis, Blue Silver, Local White, UZ888, Chakwal 50 and the synthetic wheat S22, were subjected to transcriptomic analysis of root and shoot genes. A total of 12 RNA samples was sequenced by Illumina. Using updated wheat transcripts from Ensembl and IWGC references with 54,175 gene models, we found that 49,621 out of 54,175 (91.5%) genes are expressed at an RPKM of 0.1 or more (in at least 1 sample). The number of genes expressed was higher in Local White than in Batis. Differentially expressed genes (DEGs) were more numerous in Chakwal 50. Expression-based clustering indicated a conserved function of DRO1 and RPK1 between Arabidopsis and wheat. A dendrogram showed that Local White is sister to Chakwal 50, while Batis is closely related to Blue Silver. This study highlights transcriptomic sequence variations in different cultivars that showed mutations in genes associated with drought, which may directly contribute to drought tolerance. The DRO1 and RPK1 genes were isolated for genome editing. These genes are being edited in wheat through CRISPR-Cas9 for yield enhancement. Keywords: transcriptomic, wheat, genome editing, drought, CRISPR-Cas9, yield enhancement
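The 0.1 RPKM expression threshold used above (reads per kilobase of transcript per million mapped reads) is a simple normalization; a minimal sketch follows. The gene lengths and read counts are hypothetical, not from the wheat dataset, and for brevity the "million mapped reads" total is taken over the listed genes only:

```python
def rpkm(counts, lengths_bp):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    total_millions = sum(counts) / 1e6  # library size (here: listed genes only)
    return [c / ((l / 1000) * total_millions)
            for c, l in zip(counts, lengths_bp)]

# hypothetical raw counts for 4 gene models in 2 samples
lengths = [2000, 1500, 3000, 500]      # transcript lengths in bp
sample_a = [400, 0, 900, 10]
sample_b = [350, 5, 0, 0]

rpkm_a = rpkm(sample_a, lengths)
rpkm_b = rpkm(sample_b, lengths)

# a gene counts as "expressed" at RPKM >= 0.1 in at least one sample
expressed = sum(1 for va, vb in zip(rpkm_a, rpkm_b)
                if va >= 0.1 or vb >= 0.1)
```

Applying the same per-gene test across all 12 samples and 54,175 gene models yields the kind of expressed-gene tally (49,621 of 54,175) reported above.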
Procedia PDF Downloads 145
1184 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification
Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar
Abstract:
Recognition of emotional information is essential in any form of communication. The growth of HCI (human-computer interaction) in recent times underlines the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data are used for emotion recognition. Text, being the least expressive among the multimodal resources, poses various challenges, such as the need for contextual information and the sequential nature of language construction. In this research work, we propose a neural architecture to recognize no fewer than 8 emotions from textual data sources derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are anger, disgust, fear, guilt, joy, sadness, shame, and surprise. Textual data from multiple datasets, such as the ISEAR, GoEmotions and Affect datasets, were used to create the emotions dataset. Data sample overlaps and conflicts were handled with careful preprocessing. Our results show a significant improvement with the proposed modeling architecture, with as much as a 10-point improvement in recognizing some emotions. Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings
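The attention-plus-one-vs-all idea above can be sketched shape-wise in NumPy. This is not the paper's trained model: the weights are random, the dimensions are tiny, and the input stands in for BiLSTM hidden states; it only shows how multi-head scaled dot-product attention pools a sequence and how 8 independent sigmoid heads give one-vs-all scores:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    # x: (seq_len, d_model), e.g., concatenated BiLSTM hidden states
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) * 0.1
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))  # (seq_len, seq_len)
        heads.append(attn @ v)                     # (seq_len, d_head)
    return np.concatenate(heads, axis=1)           # (seq_len, d_model)

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "disgust", "fear", "guilt",
            "joy", "sadness", "shame", "surprise"]
x = rng.standard_normal((12, 64))  # 12 tokens, 64-dim BiLSTM output (mock)
attended = multi_head_attention(x, num_heads=4, rng=rng)
pooled = attended.mean(axis=0)

# one-vs-all: an independent sigmoid score per emotion (untrained weights)
W = rng.standard_normal((len(EMOTIONS), 64)) * 0.1
scores = 1.0 / (1.0 + np.exp(-(W @ pooled)))
```

In the trained setting, each of the 8 binary heads is fit against its own emotion versus the rest, which is what makes the classification one-vs-all rather than a single softmax.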
Procedia PDF Downloads 173
1183 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change the design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information on small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input at all. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process.
This research also challenges the idea that the role of algorithmic design is associated only with efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first, synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic. Keywords: architecture, central pavilions, classicism, machine learning
Procedia PDF Downloads 138
1182 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building
Authors: Yazan Al-Kofahi, Jamal Alqawasmi
Abstract:
In this study, a systematic literature review (SLR) was conducted, with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML) and deep learning (DL) models are used in sustainable architecture applications and issues, including thermal comfort satisfaction, energy efficiency, cost prediction and many other issues. For this reason, the search strategy was initiated using different databases, including Scopus, Springer and Google Scholar. The inclusion criteria were based on two search strings related to DL, ML and sustainable architecture. Moreover, the timeframe for the inclusion of papers was open, even though most of the included papers were published in the previous four years. As a paper filtration strategy, conferences and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted, and a sample of 59 papers was selected as the final set included in the analysis. The data extraction phase consisted of extracting the needed data from these papers, which were then analyzed and correlated. The results of this SLR show that there are many applications of ML and DL in sustainable buildings and that this topic is currently trendy. It was found that most of the papers focused their discussions on addressing environmental sustainability issues and factors using machine learning predictive models, with a particular emphasis on the use of decision tree algorithms. Moreover, it was found that the Random Forest regressor demonstrates strong performance across all feature selection groups in terms of building cost prediction as a machine-learning predictive model. Keywords: machine learning, deep learning, artificial intelligence, sustainable building
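The building-cost-prediction use of a Random Forest regressor noted above can be sketched with scikit-learn. The features (floor area, number of floors, an insulation score) and the synthetic cost relation below are invented for illustration and do not come from any of the reviewed papers:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# hypothetical building features: floor area (m^2), floors, insulation score
X = rng.uniform([50, 1, 0], [500, 10, 1], size=(200, 3))
# synthetic cost: grows with area and floors, drops with insulation, plus noise
y = 800 * X[:, 0] + 20000 * X[:, 1] - 50000 * X[:, 2] \
    + rng.normal(0, 5000, 200)

# train on 150 buildings, hold out 50 for evaluation
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])        # R^2 on the held-out set
pred = model.predict([[200, 3, 0.5]])     # cost estimate for one building
```

The ensemble of decision trees also exposes `model.feature_importances_`, which is the kind of per-feature signal the reviewed papers use when comparing feature selection groups.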
Procedia PDF Downloads 651181 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)
Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang
Abstract:
Data encryption is the foundation of today’s communication. On this basis, how to improve the speed of data encryption and decryption is a problem that scholars continually work on. In this paper, we propose an elliptic curve crypto processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms at different stages of the structure. For finite field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. In the parallel scheduling of point operations on elliptic curves, we propose a three-level parallel structure of point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we add mutual pre-operations to the point addition and point doubling operations to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms, and each 256-bit scalar multiplication operation took 0.275 ms. The performance for handling scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication
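The scalar multiplication that the processor accelerates can be illustrated with a minimal double-and-add loop in affine coordinates. This sketch uses a small textbook curve, not the SM2 parameters, Jacobian coordinates, or the parallel hardware datapath of the paper.

```python
# Illustrative double-and-add scalar multiplication in affine coordinates.
# Toy curve y^2 = x^3 + 2x + 2 mod 17 with generator (5, 1) of order 19;
# None represents the point at infinity.
P_MOD, A = 17, 2

def point_add(P, Q):
    """Add two points on the curve (handles doubling and inverses)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                               # P + (-P) = infinity
    if P == Q:                                    # point doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                         # point addition
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: the operation the processor speeds up."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)                       # double every iteration
        if bit == "1":
            R = point_add(R, P)                   # conditional add
    return R
```

The paper's contribution lies below this level: pipelined Karatsuba-Ofman multiplication, fast SM2 reduction, and three-way parallel scheduling of the doublings and additions this loop performs serially.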
Procedia PDF Downloads 961180 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management
Authors: Kenneth Harper
Abstract:
The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions like IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups
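The "cryptographic hashing for robust data lineage" idea can be sketched as a hash chain: each record's digest covers the previous digest, so altering any record invalidates everything after it. This is a minimal illustration, not the platform's actual scheme; in the described architecture these digests would be anchored on-chain (Cosmos SDK / Substrate) with payloads on IPFS/Filecoin.

```python
# Minimal hash-chain sketch of data lineage: tampering with any record
# changes every subsequent digest, so verification fails.
import hashlib, json

def chain_hash(prev_hash, record):
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_lineage(records):
    h, lineage = "genesis", []
    for rec in records:
        h = chain_hash(h, rec)
        lineage.append(h)
    return lineage

def verify(records, lineage):
    """Recompute the chain and compare against the stored digests."""
    return build_lineage(records) == lineage
```

A consumer of the platform would recompute the chain over retrieved records and compare it to the on-chain digests to detect tampering.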
Procedia PDF Downloads 241179 Mitigating Denial of Service Attacks in Information Centric Networking
Authors: Bander Alzahrani
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Routing Paradigm (PSIRP), is one of the promising candidates for a future Internet and has recently been under the spotlight of the research community, which is investigating the possibility of redesigning the current Internet architecture to solve many issues such as routing scalability, security, and quality of service. Bloom filter-based forwarding is a source-routing approach used in the PSIRP architecture. This mechanism is vulnerable to brute-force attacks, which may lead to denial-of-service (DoS) attacks. In this work, we present a new forwarding approach that keeps the advantages of Bloom filter-based forwarding while mitigating attacks on the forwarding mechanism. In practice, we introduce a special type of forwarding node called Edge-FW, placed at the edge of the network. The role of these nodes is to add an extra security layer by validating and inspecting packets at the edge of the network against brute-force attacks, checking whether a packet contains a legitimate forwarding identifier (FId) or not. We leverage a Certificateless Aggregate Signature (CLAS) scheme with a small size of 64 bits, which is used to sign the FId; the signature thus becomes bound to a specific FId. Therefore, malicious nodes that inject packets with random FIds are easily detected and dropped at the Edge-FW node when the signature verification fails. Our preliminary security analysis suggests that, with the proposed approach, the forwarding plane is able to resist attacks such as DoS with very high probability.Keywords: bloom filter, certificateless aggregate signature, denial-of-service, information centric network
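The Bloom filter-based forwarding being defended can be sketched in a few lines: each link has a fixed bitmask (its link ID), the packet's FId is the OR of the link IDs along its path, and a node forwards on every link whose ID is fully contained in the FId. This is an illustrative model of the general mechanism, not the paper's implementation, and the CLAS verification at the Edge-FW node is left out.

```python
# Sketch of PSIRP-style in-packet Bloom filter forwarding. A brute-force
# attacker guesses dense FIds that "contain" many link IDs by chance,
# which is what the Edge-FW signature check is designed to catch.
def make_fid(link_ids):
    """Build a forwarding identifier by OR-ing the path's link IDs."""
    fid = 0
    for lid in link_ids:
        fid |= lid
    return fid

def forward_links(fid, local_links):
    """Return the local links the packet should be forwarded on."""
    return [lid for lid in local_links if lid & fid == lid]
```

The false-positive behaviour of this membership test is exactly what makes random FIds dangerous: with enough set bits, an injected FId matches links it was never authorized for, so binding each FId to a 64-bit CLAS signature lets the edge drop forgeries before they spread.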
Procedia PDF Downloads 1961178 The Effect of Fibre Orientation on the Mechanical Behaviour of Skeletal Muscle: A Finite Element Study
Authors: Christobel Gondwe, Yongtao Lu, Claudia Mazzà, Xinshan Li
Abstract:
Skeletal muscle plays an important role in the human body by generating voluntary forces and facilitating body motion. However, the mechanical properties and behaviour of skeletal muscle are still not comprehensively understood. As such, various robust engineering techniques have been applied to better elucidate the mechanical behaviour of skeletal muscle. Muscle mechanics are considered to be highly governed by the architecture of the fibre orientations. Therefore, the aim of this study was to investigate the effect of different fibre orientations on the mechanical behaviour of skeletal muscle. In this study, a continuum mechanics approach, finite element (FE) analysis, was applied to the left biceps femoris long head to determine the contractile mechanism of the muscle using Hill’s three-element model. The geometry of the muscle was segmented from magnetic resonance images. The muscle was modelled as a quasi-incompressible hyperelastic (Mooney-Rivlin) material. Two types of fibre orientations were implemented: one with an idealised fibre arrangement, i.e. parallel single-direction fibres running from the muscle origin to the insertion sites, and the other with a curved fibre arrangement aligned with the muscle shape. The second fibre arrangement was implemented through the finite element method with non-uniform rational B-splines (FEM-NURBS) by means of user material (UMAT) subroutines. The stress-strain behaviour of the muscle was investigated under idealised exercise conditions and will be further analysed under physiological conditions. The results of the two different FE models were outputted and qualitatively compared.Keywords: FEM-NURBS, finite element analysis, Mooney-Rivlin hyperelastic, muscle architecture
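The Mooney-Rivlin material mentioned above has a strain energy of the form W = C10(I1 - 3) + C01(I2 - 3). For incompressible uniaxial stretch λ, the invariants reduce to I1 = λ² + 2/λ and I2 = 2λ + 1/λ². The sketch below evaluates this energy; the material constants are illustrative placeholders, not fitted muscle values from the paper.

```python
# Hedged sketch: Mooney-Rivlin strain energy for an incompressible material
# under uniaxial stretch lam. C10 and C01 are illustrative constants only.
def mooney_rivlin_energy(lam, c10=1.0, c01=0.5):
    i1 = lam**2 + 2.0 / lam          # first invariant, uniaxial incompressible
    i2 = 2.0 * lam + 1.0 / lam**2    # second invariant
    return c10 * (i1 - 3.0) + c01 * (i2 - 3.0)
```

At λ = 1 (undeformed) both invariants equal 3 and the energy is zero, which is a quick sanity check that an FE implementation (e.g. inside a UMAT) would also satisfy.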
Procedia PDF Downloads 4771177 Re-Envisioning Modernity: Transformations of Postwar Suburban Landscapes
Authors: Shannon Clayton
Abstract:
In an effort to explore the potential transformation of North American postwar suburbs, this M.Arch thesis actively engages in the ongoing critique of modernism from the mid 20th century to the present. Contemporary urban design practice has emerged out of the reaction to orthodox modernism. Typically, new suburban development falls into one of two strategies: an attempt to replicate a pre-war fabric that never existed, or a reliance on high density to create instant urbanism. In both cases, the critical role of architecture has been grossly undervalued. Ironically, it is the denial of suburbia’s inherent modernity that has served to prevent genuine place-making. As history demonstrates, modernism is not antithetical to architecture and place. In the postwar years, a critical discussion emerged amongst architects which sought to evolve modernism beyond functionalism. This was demonstrated through critical discussions on image, experience, and monumentality, as well as increased interest in civic space and investigations into mat urbanism and the megastructure. The undercurrent within these explorations was a belief that the scale and complexity of modern development could become an opportunity to create urbanism, rather than squander it. This critical discourse has continued through architectural work in the Netherlands and Denmark since the early 1990s, where an emphasis on visual variety, human scale, and public interaction has been given high priority. This thesis applies principles from this ongoing dialogue and identifies hidden potential within existing North American suburban networks. As a result, the project re-evaluates the legacy of the master plan from a contemporary perspective.Keywords: urbanism, modernism, suburbia, place-making
Procedia PDF Downloads 2511176 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures
Authors: Filippo Ranalli, Forest Flager, Martin Fischer
Abstract:
This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks to find a set of discrete member sizes that result in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To accurately assess cost, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied to each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, yielding an average cost savings of up to 30% with comparable computational efficiency.Keywords: cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures
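The bi-level MDF structure can be sketched as a nested loop: an outer search over candidate topologies, with an inner discrete sizing search that picks the cheapest catalogue section per member subject to a utilization limit. This is a toy illustration with made-up sections, demands, and costs; the real method also handles serviceability checks, connection costs, and member continuity rules.

```python
# Toy sketch of the bi-level loop: topology optimization outside,
# discrete member sizing nested inside. All numbers are illustrative.
SECTIONS = [  # (capacity, unit installed cost: material + fab + erection)
    (50.0, 10.0), (100.0, 18.0), (200.0, 30.0),
]

def size_members(member_demands):
    """Inner loop: cheapest section per member with utilization <= 1.0."""
    total = 0.0
    for demand in member_demands:
        feasible = [cost for cap, cost in SECTIONS if demand / cap <= 1.0]
        if not feasible:
            return float("inf")          # topology infeasible for this demand
        total += min(feasible)
    return total

def optimize(topologies):
    """Outer loop: keep the topology with the lowest total installed cost."""
    return min(topologies, key=lambda t: size_members(t["demands"]))
```

The key point the sketch preserves is that cost, not weight, drives both levels, which is why a topology with fewer but heavier members can still win.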
Procedia PDF Downloads 3411175 Between the House and the City: An Investigation of the Structure of the Family/Society and the Role of the Public Housing in Tokyo and Berlin
Authors: Abudjana Babiker
Abstract:
The middle of the twentieth century witnessed an explosion in public housing. After the Great Depression, both capitalist and communist countries launched policies and programs to produce public housing in urban areas. Concurrently, modernity was the leading architectural style of the time and strongly supported this production; it was the principal instrument for the success of the public housing program, owing to the modernist manifesto for manufactured architecture as an international style that serves society and, in parallel, connects it to the other design industries, which allowed for the mass production of architectural elements. After the Second World War, public housing flourished, especially in communist countries. Public housing was conceived at the time as living space, while workplaces performed as the place for production and labor. At the end of the twentieth century, Michel Foucault's introduction of biopolitics highlighted the alteration in the inter-function of production and labor. The house does not strictly perform as a sanctuary from production for the family; it opens the house to be, as part of the city, a space for production, not only to produce objects but to reproduce the family as an integral part of the production mechanism in the city. While public housing kept altering from one country to another after the failure of modernist public housing in the late 1970s, society continued changing in parallel with the socio-economic conditions of each political-economic system, and public housing thus followed. The family structure in major cities has been changing dramatically; single parenting and long working hours, for instance, have been escalating loneliness in major cities such as London, Berlin, and Tokyo, and public housing designed for families no longer suits the single lifestyle of individuals.
This paper investigates the performance of both the single/individual lifestyle and the family/society structure in Tokyo and Berlin in relation to the utilization of public housing under the economic policies and socio-political environment that produced the individuals and the collective. The study is carried out through an examination of the undercurrent individual/society relationship and case studies that examine the performance and utilization of the housing. The major finding is that the individual and the collective revolve around the city; the city acts as a system that magnetizes and blurs the line between production and reproduction lifestyles. Mass public housing for families is shifting toward a combination of neo-liberal and socialist housing.Keywords: loneliness, production reproduction, work live, public housing
Procedia PDF Downloads 1851174 Architecture for Hearing Impaired: A Study on Conducive Learning Environments for Deaf Children with Reference to Sri Lanka
Authors: Champa Gunawardana, Anishka Hettiarachchi
Abstract:
Conducive architecture for learning environments is an area of interest for many scholars around the world. Loss of the sense of hearing leads to the assumption that deaf students are visual learners. Understanding the favorable non-auditory attributes of architecture can lead to effective, rich and friendly learning environments for the hearing impaired. The objective of the current qualitative investigation is to explore the nature and parameters of the sense of place of deaf children in order to support optimal learning. The investigation was conducted with hearing-impaired children (age 8 to 19; 15 male and 15 female) of the Yashodhara deaf and blind school at Balangoda, Sri Lanka. A sensory ethnography study was adopted to identify the nature of perception and the parameters of the most preferred and least preferred spaces of the learning environment. The common perceptions behind the most preferred places in the learning environment were found to be calmness and quiet, a sense of freedom, volumes characterized by openness and spaciousness, a sense of safety, wide spaces, privacy and belongingness, being less crowded and undisturbed, the availability of natural light and ventilation, a sense of comfort, and the view of green in the surroundings. On the other hand, the least preferred spaces were perceived as dark, gloomy, warm, crowded, lacking freedom, bad-smelling, unsafe, and having glare. The perception of space by the deaf, considering the hierarchy of sensory modalities involved, was identified as: light and color perception (34%), sight and visual perception (32%), touch and haptic perception (26%), smell and olfactory perception (7%), and sound and auditory perception (1%). A sense of freedom (32%) and a sense of comfort (23%) were the predominant psychological parameters leading to an optimal sense of place as perceived by the hearing impaired. Privacy (16%), rhythm (14%), belonging (9%) and safety (6%) were found to be secondary factors.
Open and wide flowing spaces without visual barriers, transparent doors and windows or open port holes to ease their communication, comfortable volumes, naturally ventilated spaces, natural lighting or diffused artificial lighting conditions without glare, sloping walkways, wider stairways, walkways and corridors with ample distance for signing were identified as positive characteristics of the learning environment investigated.Keywords: deaf, visual learning environment, perception, sensory ethnography
Procedia PDF Downloads 2291173 The Reenactment of Historic Memory and the Ways to Read past Traces through Contemporary Architecture in European Urban Contexts: The Case Study of the Medieval Walls of Naples
Authors: Francesco Scarpati
Abstract:
Because of their long history, ranging from ancient times to the present day, European cities feature many historical layers, whose single identities are represented by traces surviving in the urban design. However, urban transformations, in particular those produced by the property speculation of the 20th century, have often compromised the readability of these traces, resulting in a loss of the historical identities of the individual layers. The purpose of this research is therefore a reflection on the theme of the reenactment of historical memory in stratified European contexts and on how contemporary architecture can help reveal the past signs of cities. The research work starts from an analysis of a series of emblematic examples that have already provided an original solution to the described problem, ranging from the architectural detail scale to the urban and landscape scale. The results of these analyses are then applied to the case study of the city of Naples, as an emblematic example of a stratified city with an ancient Greek origin; a city where it is possible to read most of the traces of its transformations. Particular consideration is given to the trace of the medieval walls of the city, which once clearly divided the city from the surrounding fields and is no longer readable today. Finally, solutions and methods of intervention are proposed to ensure that the trace of the walls, read as a boundary, can be revealed through the contemporary project.Keywords: contemporary project, historic memory, historic urban contexts, medieval walls, Naples, stratified cities, urban traces
Procedia PDF Downloads 2631172 Methods Employed to Mitigate Wind Damage on Ancient Egyptian Architecture
Authors: Hossam Mohamed Abdelfattah Helal Hegazi
Abstract:
Winds and storms are crucial weathering factors, representing primary causes of destruction and erosion for all materials on the Earth's surface. This naturally includes historical structures, and the impact of winds and storms intensifies their deterioration, particularly when they carry high-hardness sand particles during their passage across the ground. Ancient Egyptians utilized various methods to prevent wind damage to their architecture throughout the ancient Egyptian periods. One technique employed was the use of clay or compacted earth as a filling material between opposing walls made of stone, bricks, or mud bricks. Walls made of reeds or woven tree branches were covered with clay to prevent the infiltration of wind and rain, enhancing structural integrity; this method was commonly used in hollow layers. Additionally, Egyptian engineers innovated a type of adobe brick with uniformly leveled sides, manufactured from dried clay. They utilized stone barriers, constructed wind traps, and planted trees in rows parallel to the prevailing wind direction. Moreover, they employed receptacles to drain the rainwater resulting from wind-loaded rain and used mortar to fill gaps in roofs and structures. Furthermore, proactive measures such as the removal of sand from around historical and archaeological buildings were taken to prevent adverse effects.Keywords: winds, storms, weathering, destruction, erosion, materials, Earth's surface, historical structures, impact
Procedia PDF Downloads 611171 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle
Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad
Abstract:
Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and in geometry likewise. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square, and the hexagon. By its very nature, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. In order to explain the geometric interaction principles between the square and the equal triangle, the analysis first illustrates all types of their linear forces individually and then those between them. In this analysis, a number of angles are created from the intersections of their directions. All angles are categorized into groups, and the mathematical expressions among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing plan designs.Keywords: angle, equal triangle, square, structural hierarchy
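The angle-cataloguing step described above can be illustrated numerically: the square generates line directions at multiples of 90 degrees, the equal triangle at multiples of 60 degrees, and their intersections create a small set of characteristic angles. This is a simplified sketch of the idea, not the paper's full structural-hierarchy analysis.

```python
# Sketch: collect the angles created where square-generated directions
# (multiples of 90 deg) intersect triangle-generated ones (multiples of 60).
def line_angle(d1, d2):
    """Angle between two undirected line directions, folded into [0, 90]."""
    d = abs(d1 - d2) % 180
    return min(d, 180 - d)

square_dirs = [0, 90]        # the square's two line directions
triangle_dirs = [0, 60, 120] # the equal triangle's three line directions
angles = sorted({line_angle(s, t) for s in square_dirs for t in triangle_dirs})
```

The resulting angle set (0, 30, 60 and 90 degrees) is the kind of categorized group the analysis works with; because the motif repeats, conclusions drawn from this small portion extend to the whole pattern.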
Procedia PDF Downloads 1931170 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphics Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPUs for High Performance Computing (HPC) can be categorized as hardware- and software-oriented in nature. Understanding how these factors affect performance is essential to developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of the challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as its computational core, the characteristics and results of which reflect many other HPC applications via the sparse matrix system used for the solution of linear systems of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
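The sparse matrix system at the core of such solvers is dominated by kernels like the sparse matrix-vector product. The sketch below shows the standard CSR (compressed sparse row) formulation in plain Python as an illustration of the data structure; it is not the paper's code, and on a GPU each row (or group of rows) would map to a thread or warp, which is exactly where the hardware/software factors discussed above interact.

```python
# Illustrative CSR sparse matrix-vector product y = A @ x.
# indptr[row]..indptr[row+1] delimits row's nonzeros in indices/data.
def csr_spmv(indptr, indices, data, x):
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y
```

Irregular row lengths in unstructured finite element meshes make the inner loop's memory access pattern the performance bottleneck on GPUs, a software characteristic that interacts directly with hardware memory bandwidth limits.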
Procedia PDF Downloads 3611169 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world’s top five gold and rare-earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500-listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud technologies and edge computing, these technological developments enable standardized machine learning processes to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing and collectively stored within a data lake. Involvement in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe for the purposes of improved worker safety and production efficiency through big data applications.Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 1101168 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network
Authors: Tzu-Yang Lin, Chuan-Ching Sue
Abstract:
Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) under a single-class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, an analytical model can be derived to determine the optimal sleep time of the ONUs in every cycle without violating the maximum class delay constraint. Bandwidth allocation considering such optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the different class delay constraints do not violate the maximum class delay constraint, packets with a more relaxed delay constraint are treated as those with the most stringent delay constraint and may be sent early. This means that the ONU wastes energy in active mode sending packets in advance that did not need to be sent at the current time. Accordingly, we propose a QoS-aware GBA using a novel intra-ONU scheduling scheme to control the packets to be sent according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but carry different packet delay constraints, the intra-ONU scheduling can be modified to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose a switchable ONU architecture in which the ONU can switch architectures according to the sleep time length, thus improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes or with different delay constraints do not violate their respective delay constraints while consuming less power than the original GBA.Keywords: Passive Optical Networks, PONs, Optical Network Unit, ONU, energy efficiency, delay constraint
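The core trade-off above can be sketched with a toy model: an ONU may sleep only as long as the tightest remaining delay budget among its queued packets allows, minus the time needed to wake up. This is a deliberately simplified illustration of the GBA idea, not the paper's analytical model; the wake-up overhead value is an assumption.

```python
# Toy model of the sleep-time decision: sleep as long as the most stringent
# queued delay budget permits. The QoS-aware variant uses each packet's own
# budget here instead of conservatively assuming the worst-case class.
def sleep_time(queued_budgets_ms, wakeup_overhead_ms=2.0):
    """Longest sleep (ms) that still meets every queued packet's deadline."""
    if not queued_budgets_ms:
        return float("inf")          # nothing queued: sleep until next poll
    return max(0.0, min(queued_budgets_ms) - wakeup_overhead_ms)
```

Under the original GBA, every budget in the list would be replaced by the most stringent class constraint, shortening the computed sleep and wasting active-mode energy on packets that could have waited; the QoS-aware scheduling keeps the per-packet budgets and so sleeps longer without missing deadlines.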
Procedia PDF Downloads 2831167 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning
Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath
Abstract:
The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for exploration. Unraveling these enigmas, especially within neural perception and cognition, is the realm of neural decoding. This work harnesses advancements in generative AI, particularly in visual computing, to elucidate how the brain comprehends the visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from functional magnetic resonance imaging (fMRI) data, which is processed through pre-trained deep learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior-quality outputs. This addresses the limitations of prior methods, such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides evidence for identifying the most influential regions of the brain responsible for cognition and perception.Keywords: BLIP, fMRI, latent diffusion model, neural perception.
Procedia PDF Downloads 66