Search results for: turbulence models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7060

4570 Advanced Technologies and Algorithms for Efficient Portfolio Selection

Authors: Konstantinos Liagkouras, Konstantinos Metaxiotis

Abstract:

In this paper we present a classification of the various technologies applied to the portfolio selection problem, according to the discipline and methodological framework followed. We provide a concise presentation of the emerging categories and attempt to identify which methods are considered obsolete and which lie at the heart of the current debate. In addition, we provide a comparative study of the different technologies applied to efficient portfolio construction, and we suggest potential paths for future work that lie at the intersection of the presented techniques.

Keywords: portfolio selection, optimization techniques, financial models, stochastic, heuristics

Procedia PDF Downloads 432
4569 The TarMed Reform of 2014: A Causal Analysis of the Effects on the Behavior of Swiss Physicians

Authors: Camila Plaza, Stefan Felder

Abstract:

In October 2014, the TARMED reform was implemented in Switzerland. In an effort to even out the financial standing of general practitioners (including pediatricians) relative to that of specialists in the outpatient sector, the reform tackled two aspects. First, GPs would be able to bill an additional 9 CHF per patient, once per consult per day; this is referred to as the surcharge position. Second, it reduced the fees for certain technical services targeted to specialists (e.g., imaging and surgical technical procedures). Given the fee-for-service reimbursement system in Switzerland, we predict that physicians reacted to the economic incentives of the reform by increasing the consults per patient and decreasing the average amount of time per consult. Within this framework, our treatment group is formed by GPs and our control group by those specialists who were not affected by the reform. Using monthly insurance claims panel data aggregated at the physician praxis level (provided by SASIS AG) for the period January 2013-December 2015, we run difference-in-differences panel data models with physician and time fixed effects in order to test for the causal effects of the reform. We account for seasonality and control for physician characteristics such as age, gender, specialty, and experience. Furthermore, we run the models on subgroups of physicians within our sample so as to account for heterogeneity and treatment intensities. Preliminary results support our hypothesis: we find evidence of an increase in consults per patient and a decrease in time per consult. Robustness checks do not significantly alter the results for consults per patient; however, we do find a smaller effect of the reform on time per consult. Thus, the results of this paper could provide policymakers with a better understanding of physician behavior and its sensitivity to the financial incentives of reforms (both past and future) under the current reimbursement system.
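The two-group, two-period logic behind the difference-in-differences design can be sketched in a minimal form; the group means below are hypothetical placeholders, not the SASIS AG claims data:

```python
# Illustrative 2x2 difference-in-differences estimator. The group means are
# invented numbers, not the paper's results.
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean consults per patient, before/after the 2014 reform:
gp_pre, gp_post = 1.40, 1.55        # GPs (treatment group)
spec_pre, spec_post = 1.20, 1.25    # unaffected specialists (control group)

effect = did_estimate(gp_pre, gp_post, spec_pre, spec_post)
print(round(effect, 2))  # -> 0.1
```

The full panel models add physician and time fixed effects and covariates, but the identifying comparison is exactly this double difference.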

Keywords: difference in differences, financial incentives, health reform, physician behavior

Procedia PDF Downloads 128
4568 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with chemometric regression analysis of the quality parameters and radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN) and wild thyme (WT) tea kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), free radical scavenging ability against the DPPH and hydroxyl radicals (RSA-DPPH, % and RSA-OH, %), and pH values measured every hour from the start until the end of fermentation. The aim of the regression analysis was to establish chemometric models that predict the radical scavenging ability (RSA-DPPH, % and RSA-OH, %) of the samples by correlating it with the MF, TUFA, MUFA and PUFA content and the pH value at the beginning, middle and end of the fermentation process, which lasted between 11 and 17 hours, until a pH of 4.5 was reached. The analysis was carried out by applying univariate linear regression (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by the min-max normalization method. The obtained models were characterized by very limited predictive power (poor cross-validation parameters) and weak statistical characteristics. Based on the conducted analysis, it can be concluded that the radical scavenging ability cannot be precisely predicted from the MF, TUFA, MUFA and PUFA content and pH values alone; other quality parameters should therefore be considered and included in further modeling. This study is based upon work from the project "Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina", 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
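A minimal sketch of the preprocessing and univariate regression (ULR) steps described above, min-max normalization followed by ordinary least squares; the milk-fat and RSA values are invented for illustration, not the study's kefir data:

```python
# Sketch of min-max normalization + univariate linear regression (ULR).
# All numbers are hypothetical, not the study's measurements.
def min_max(xs):
    """Scale a list of values to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def ulr(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

mf = [3.2, 3.5, 3.8, 4.1]       # hypothetical milk fat content (%)
rsa = [41.0, 44.0, 47.0, 50.0]  # hypothetical RSA-DPPH (%)
slope, intercept = ulr(min_max(mf), rsa)
print(slope, intercept)
```

MLR extends the same least-squares idea to several predictors (MF, TUFA, MUFA, PUFA, pH) at once.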

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 142
4567 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods

Authors: Li-hsing Shih, Tzu-hsun Yen

Abstract:

Because of the long-term commitment to net-zero carbon emissions, utility companies are including more renewable energy supply, which generates electricity subject to time and weather restrictions. This leads to time-of-use electricity pricing that reflects the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate the time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Unlike most previous research, this study schedules users' activities rather than household appliances, in order to measure users' comfort/satisfaction more directly. The relation between each activity and the use of different appliances can be defined by users. The comfort level is highest when the time and duration of an activity completely meet the user's expectation, and decreases when they do not. A questionnaire survey was conducted to collect data for establishing regression models that describe users' comfort levels when the execution time and duration of activities differ from user expectations. Six regression models, representing the comfort levels for six types of activities, were established from the survey responses. A computer program was developed to evaluate the electricity cost and comfort level of each feasible schedule and then find the non-dominated schedules. The epsilon-constraint method is used to select the optimal schedule from among the non-dominated schedules. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program. Using the program, users can obtain the optimal schedule of daily energy consumption by inputting the intended time and duration of activities and the given time-of-use electricity prices.
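The bi-criteria selection described above, first keeping the non-dominated schedules and then applying the epsilon-constraint method, can be sketched as follows; the schedules and their cost/comfort values are hypothetical, not the paper's case study:

```python
# Sketch of Pareto filtering + epsilon-constraint selection over candidate
# schedules. All schedules and values are hypothetical.
schedules = {          # name: (electricity cost, comfort level)
    "A": (10.0, 0.90),
    "B": (8.0, 0.80),
    "C": (12.0, 0.95),
    "D": (9.0, 0.70),  # dominated by B (costs more, less comfort)
}

def dominates(p, q):
    """p dominates q if p is no worse on both criteria (min cost, max comfort)."""
    return (p[0] <= q[0] and p[1] >= q[1]) and p != q

pareto = {k: v for k, v in schedules.items()
          if not any(dominates(w, v) for w in schedules.values())}

def eps_constraint(front, epsilon):
    """Epsilon-constraint: minimise cost subject to comfort >= epsilon."""
    feasible = {k: v for k, v in front.items() if v[1] >= epsilon}
    return min(feasible, key=lambda k: feasible[k][0])

print(sorted(pareto))                # -> ['A', 'B', 'C']
print(eps_constraint(pareto, 0.85))  # -> A
```

In the study, the feasible schedules come from users' activity times and durations, and cost/comfort are computed from the time-of-use prices and the six regression models.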

Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling

Procedia PDF Downloads 60
4566 Surface Tension and Bulk Density of Ammonium Nitrate Solutions: A Molecular Dynamics Study

Authors: Sara Mosallanejad, Bogdan Z. Dlugogorski, Jeff Gore, Mohammednoor Altarawneh

Abstract:

Ammonium nitrate (NH₄NO₃, AN) is commonly used as the main component of AN emulsion and fuel oil (ANFO) explosives, which are used extensively in civilian and mining operations for underground development and tunneling applications. The emulsion formulation and the wettability of AN prills, which affect the physical stability and detonation of ANFO, depend strongly on the surface tension, density, and viscosity of the liquid used. Therefore, for engineering applications of this material, determining the density and surface tension of concentrated aqueous AN solutions is essential. Molecular dynamics (MD) simulations have been used to investigate the density and surface tension of highly concentrated ammonium nitrate solutions, up to the solubility limit in water. The simulations employed non-polarisable models for water and ions, and the electronic continuum correction (ECC) model was used to apply polarisation implicitly to the non-polarisable model by scaling the charges of the ions. The calculated densities and surface tensions of the solutions were compared to available experimental values. Our MD simulations show that the non-polarisable model with full-charge ions overestimates the experimental results, while the reduced-charge model for the ions fits the experimental data very well. With the non-polarisable force fields, ions in the solutions are repelled from the interface. However, when the charges of the ions in the original model are scaled in line with the scaling factor of the ECC model, the ions create a double ionic layer near the interface: anions migrate toward the interface while cations stay in the bulk of the solution. Similar ion orientations near the interface were observed when polarisable models were used in simulations. In conclusion, applying the ECC correction to the non-polarisable force field yields the density and surface tension of the AN solutions with high accuracy in comparison to the experimental measurements.
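The ECC charge rescaling mentioned above amounts to dividing the formal ionic charges by the square root of the electronic (high-frequency) dielectric constant of water, roughly 1.78, which gives the widely used scaling factor of about 0.75. A minimal sketch:

```python
# Sketch of the ECC charge rescaling applied to non-polarisable ion models.
import math

EPS_ELECTRONIC = 1.78  # electronic dielectric constant of water (approx.)

def ecc_scale(charge, eps_el=EPS_ELECTRONIC):
    """Scale a formal ionic charge by 1/sqrt(eps_el) (ECC prescription)."""
    return charge / math.sqrt(eps_el)

for name, q in [("NH4+", +1.0), ("NO3-", -1.0)]:
    print(name, round(ecc_scale(q), 3))  # charges of roughly +/-0.75 e
```

The scaled charges mimic the electronic screening that a fully polarisable force field would provide explicitly, which is why the reduced-charge model reproduces the interfacial ion layering.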

Keywords: ammonium nitrate, electronic continuum correction, non-polarisable force field, surface tension

Procedia PDF Downloads 232
4565 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs built on the transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, an inventive model may yet lurk behind OpenAI's seemingly endless responses; some unforeseen reasoning may emerge from the interconnection of neural networks. Just as a Soviet researcher in the 1940s asked whether inventions share common factors, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means of solving societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions: the design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the TRIZ 40 inventive principles. Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, namely that ChatGPT uses TRIZ, we follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Problem solving remains the main focus of our endeavours.

Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 74
4564 Balancing Resources and Demands in Activation Work with Young Adults: Exploring Potentials of the Job Demands-Resources Theory

Authors: Gurli Olsen, Ida Bruheim Jensen

Abstract:

Internationally, many young adults not in education, employment, or training (NEET) remain in temporary solutions such as labour market measures or other forms of welfare arrangements. These trends have been associated with ineffective labour market measures, an underfunded theoretical foundation for activation work, limited competence among social workers and labour market employees in using ordinary workplaces as job inclusion measures, and an overemphasis on young adults' personal limitations such as health challenges and lack of motivation. Two competing models have been prominent in activation work: Place-Then-Train and Train-Then-Place. A traditional strategy for labour market measures has been to first guide NEETs into sheltered work and training and then into the regular labour market (train then place). Measures such as Supported Employment (SE) and Individual Placement and Support (IPS) advocate rapid entry into paid work in the regular labour market, with close supervision and training from social workers, employers, and others (place then train). Neither of these models demonstrates unquestionable results. In this web of working life measures, young adults (NEETs) experience a lack of confidence in their own capabilities and coping strategies vis-à-vis labour market and educational demands. Drawing on young adults' own experiences, we argue that the Job Demands-Resources (JD-R) theory can contribute to the theoretical and practical dimensions of activation work. This presentation will focus on what the JD-R theory entails and how it can be fruitful in activation work with NEETs. The overarching rationale of the JD-R theory is that an enduring balance between demands (e.g., deadlines, working hours) and resources (e.g., social support, enjoyable work tasks) is important for job performance, for people in any job and potentially in other meaningful activities. Extensive research has demonstrated that a balance between demands and resources increases motivation and decreases stress. Nevertheless, we have not identified literature on the JD-R theory in activation work with young adults.

Keywords: activation work, job demands-resources theory, social work, theory development

Procedia PDF Downloads 79
4563 Antioxidant Potential of Pomegranate Rind Extract Attenuates Pain, Inflammation and Bone Damage in Experimental Rats

Authors: Ritu Karwasra, Surender Singh

Abstract:

Inflammation is an important physiological response of the body's self-defense system: a highly regulated protective response that helps eliminate the initial cause of cell injury, protects the organism from harmful stimuli, and initiates the process of tissue repair. The present study was designed to evaluate the ameliorative effect of pomegranate rind extract on pain and inflammation. Hydroalcoholic standardized rind extract of pomegranate at doses of 50, 100 and 200 mg/kg, and indomethacin (3 mg/kg), were tested against Eddy's hot plate-induced thermal algesia, carrageenan-induced (acute) inflammation, and Complete Freund's Adjuvant-induced (chronic) inflammation models in Wistar rats. Parameters analyzed were inhibition of paw edema, joint diameter, levels of GSH, TBARS, SOD and TNF-α, radiographic imaging, tissue histology, and synovial expression of the pro-inflammatory cytokine receptor TNF-R1. Radiological and light microscopical analyses were carried out to assess bone damage in the CFA-induced chronic inflammation model. The findings revealed that pomegranate rind extract at a dose of 200 mg/kg caused a significant (p<0.05) reduction in paw swelling in both inflammatory models. Nociceptive threshold was also significantly (p<0.05) improved. Immunohistochemical analysis showed elevated TNF-R1 levels in the CFA-induced group, whereas a reduction in TNF-R1 was observed with pomegranate (200 mg/kg). Hence, pomegranate produced a dose-dependent reduction in inflammation and pain, along with a reduction in oxidative stress markers and improvement in tissue histology, and the effect was comparable to that of indomethacin. It can be concluded that pomegranate is a potential therapeutic agent in the pathogenesis of inflammation and pain, and punicalagin, the major constituent of the rind extract, might be responsible for the activity.

Keywords: carrageenan, inflammation, nociceptive-threshold, pomegranate, histopathology

Procedia PDF Downloads 219
4562 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
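The threshold-based accuracy metric quoted above (the fraction of predictions within +/- 10 runs of the actual score) can be sketched as follows; the scores below are invented, not actual IPL data:

```python
# Sketch of the "accuracy within +/- 10 runs" evaluation metric.
# The actual/predicted scores are hypothetical, not real match data.
def accuracy_within(actual, predicted, threshold=10):
    """Fraction of predictions within `threshold` runs of the actual score."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) <= threshold)
    return hits / len(actual)

actual    = [165, 182, 140, 201, 158]
predicted = [160, 195, 147, 199, 170]
print(accuracy_within(actual, predicted))  # -> 0.6 (3 of 5 within 10 runs)
```

Each candidate model (SVM, XGBoost, regression variants, KNN, random forest) can be scored with the same function on held-out matches to reproduce the kind of comparison reported above.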

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 54
4561 Impact of Air Flow Structure on Distinct Shape of Differential Pressure Devices

Authors: A. Bertašienė

Abstract:

Energy harvesting from any structure poses a challenge. The different structures of air/wind flows in industrial, environmental, and residential applications call for detailed investigation of the real flow. Many application fields can hardly be described in detail due to the lack of up-to-date statistical data analysis. In situ measurements require substantial investment, so simulation methods are used to carry out structural analysis of the flows. Different configurations of the testing environment give an overview of how important the structure of the flow field in a limited area is to the efficiency of system operation and energy output. Several configurations of modeled working sections of an air flow test facility were implemented in the ANSYS CFD environment to compare, experimentally and numerically, the development stages and forms of the air flow that affect the efficiency of devices and processes. The effective form and magnitude of these flows under different geometries determine how instruments and devices that measure fluid flow parameters should be arranged for the effective operation of any system and for defining emission flows. Different fluid flow regimes were examined to show the impact of fluctuations on the development of the whole flow volume in a specific environment. The obtained results raise the question of how similar these simulated flow fields are to those in real applications. The experimental results show some discrepancies from the simulations, due to the models being applied to the initial, not fully developed, stage of the flow and to the difficulty the models have in covering transitional regimes. Recommendations are essential for energy harvesting systems in both indoor and outdoor cases. Further investigations will shift to experimental analysis of the flow under laboratory conditions using state-of-the-art techniques such as flow visualization, and later to in situ situations, which is a complicated, costly, and time-consuming study.

Keywords: fluid flow, initial region, tube coefficient, distinct shape

Procedia PDF Downloads 337
4560 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. Solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have therefore been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to modelling the drying kinetics of cantaloupe in a forced-convection solar dryer. Mathematical models of the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Ten quasi-theoretical and empirical drying models were then applied to find the best drying curve equation using the Levenberg-Marquardt nonlinear optimization method. The best-fitting mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of the deviations (χ²) and root mean square error (RMSE) criteria. The best-fitting model was used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
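As an illustration of the model-fitting step, the Page model (MR = exp(-k tⁿ)), one common member of the thin-layer drying model family, can be fitted by log-linearisation and scored with R² and RMSE. The drying data below are synthetic, not the cantaloupe measurements, and the paper itself used Levenberg-Marquardt nonlinear optimization rather than this simplified linearised fit:

```python
# Sketch: fit the Page thin-layer model MR = exp(-k * t**n) to synthetic
# moisture-ratio data via log-linearisation, then compute R2 and RMSE.
import math

t  = [1, 2, 4, 8]                            # drying time (h), synthetic
mr = [math.exp(-0.2 * x ** 1.1) for x in t]  # synthetic moisture ratio

# ln(-ln MR) = ln k + n ln t  ->  ordinary least squares in (ln t, ln(-ln MR))
X = [math.log(x) for x in t]
Y = [math.log(-math.log(m)) for m in mr]
mx, my = sum(X) / len(X), sum(Y) / len(Y)
n = sum((a - mx) * (b - my) for a, b in zip(X, Y)) / sum((a - mx) ** 2 for a in X)
k = math.exp(my - n * mx)

pred = [math.exp(-k * x ** n) for x in t]
rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(mr, pred)) / len(mr))
ss_res = sum((m - p) ** 2 for m, p in zip(mr, pred))
ss_tot = sum((m - sum(mr) / len(mr)) ** 2 for m in mr)
r2 = 1 - ss_res / ss_tot
print(round(k, 3), round(n, 3), round(r2, 4))  # recovers k=0.2, n=1.1
```

With real (noisy) data, each of the ten candidate models would be fitted this way (or by Levenberg-Marquardt) and the one with the highest R² and lowest χ² and RMSE selected.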

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 127
4559 Producing Graphical User Interface from Activity Diagrams

Authors: Ebitisam K. Elberkawi, Mohamed M. Elammari

Abstract:

The Graphical User Interface (GUI) is as essential to programming as any other characteristic or feature, because GUI components provide the fundamental interaction between the user and the program. Thus, more attention must be given to the GUI during the building and development of systems, and greater attention to the user, who is the cornerstone of interaction with the GUI. This paper introduces an approach for designing GUIs from one of the models of business workflows that describes the workflow behavior of a system: the activity diagram (AD).

Keywords: activity diagram, graphical user interface, GUI components, program

Procedia PDF Downloads 464
4558 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico di Milano and the Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has undergone an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching and, by extension, about the very principles of architectural education. This paper presents the results of research on the following problem: the relation between the forms of representation and the architectural design teaching-learning processes. The research took as its object the educational models of two schools, the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP), and was guided by three main objectives: to characterize the educational model followed in both schools, focusing on the representative component and its role; to interpret the relation between forms of representation and the architectural design teaching-learning processes; and to consider the possibilities of valorising them. Methodologically, the research followed a qualitative embedded multiple-case study design. The object, i.e., the educational model, was approached in both the POLIMI and FAUP cases considering its context and three embedded units of analysis: the educational purposes, principles, and practices. To guide data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC relates the three embedded units of analysis to the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal the importance of the representative component in the educational models of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP's model it plays a transversal role, according to an idea of 'general training through hand drawing'. In fact, the difference between the models with respect to representation can be partially understood through the level of importance each gives to hand drawing. Regarding the teaching of architectural design, the two cases differ in their relation to the representative component: while in POLIMI the forms of representation serve an essentially instrumental purpose, in FAUP they tend to be considered also for their methodological dimension. The possibilities for valorising these models seem to reside precisely in the relation between forms of representation and architectural design teaching. The knowledge base developed in this research is expected to make three main contributions: to support the maintenance of the educational models of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and, through the critical and objective framing of the problem underlying the forms of representation and their relation to architectural design teaching, to contribute to the broader discussion of contemporary challenges in architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 122
4557 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors

Authors: Susana Aragoneses Garrido

Abstract:

Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that Human Factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (involving external or internal aspects of the vehicle), which is considered a human factor, is a serious and emerging risk to road safety. Consequently, further analysis of this issue is essential due to its significance for today's society. The objectives of this investigation are the detection and assessment of HF in order to provide solutions, including better vehicle design, that might mitigate road accidents. The methodology of the project is divided into several phases. First, a statistical analysis of public databases from Spain and the UK is provided. Second, the data are classified in order to analyse the major causes involved in road accidents. Third, a simulation between different paths and vehicles is presented, and the causes related to HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using the Rapid Upper Limb Assessment (RULA). Additionally, the Siemens PLM JACK tool is used to evaluate the Human Factor causes and to support the redesign of the vehicles. Finally, improvements in car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results from the statistical analysis, the simulations, and the evaluations confirm that accidents are an important issue in today's society, especially accidents caused by HF such as distractions. The results explore the reduction of external and internal HF through a global risk analysis of vehicle accidents. Moreover, the evaluation of the different car models using the RULA method and the Siemens PLM JACK tool demonstrates the importance of properly adjusting the driver's seat in order to avoid harmful postures and therefore distractions. For this reason, a car redesign is proposed so that the driver can acquire the optimum position, consequently reducing human factors in road accidents.

Keywords: vehicle analysis, assessment, ergonomics, car redesign

Procedia PDF Downloads 335
4556 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
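The offline/online split of the reduced-basis idea can be sketched with a toy Galerkin projection: the full system K u = f is projected onto a small basis V, the reduced system is solved cheaply online, and the result is lifted back. The 3-DOF stiffness matrix and the basis below are illustrative numbers, not the paper's truss model; in practice the basis would be built offline from solution snapshots:

```python
# Toy reduced-basis (Galerkin) projection of a 3-DOF system K u = f onto a
# 2-vector basis V. All matrices here are illustrative, not the paper's model.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

K = [[2.0, -1.0, 0.0],      # full-order stiffness matrix (toy)
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
f = [[1.0], [0.0], [1.0]]   # load vector
V = [[1.0, 0.0],            # reduced basis, built offline (e.g. from snapshots)
     [0.0, 1.0],
     [1.0, 0.0]]

Kr = matmul(transpose(V), matmul(K, V))   # 2x2 reduced stiffness (offline)
fr = matmul(transpose(V), f)              # reduced load

# Online stage: solve the tiny 2x2 reduced system directly.
(a, b), (c, d) = Kr
det = a * d - b * c
ur = [[(d * fr[0][0] - b * fr[1][0]) / det],
      [(a * fr[1][0] - c * fr[0][0]) / det]]
u_approx = matmul(V, ur)                  # lift back to the full space
print(u_approx)  # -> [[1.0], [1.0], [1.0]]
```

For this toy problem the exact solution happens to lie in the span of V, so the reduced solution reproduces it exactly; in general the projection trades a controlled approximation error for a much smaller online system.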

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 99
4555 Silver Nanoparticles Synthesized in Plant Extract Against Acute Hepatopancreatic Necrosis of Shrimp: Estimated by Multiple Models

Authors: Luz del Carmen Rubí Félix Peña, Jose Adan Felix-Ortiz, Ely Sara Lopez-Alvarez, Wenceslao Valenzuela-Quiñonez

Abstract:

On a global scale, Mexico is the sixth largest producer of farmed white shrimp (Penaeus vannamei). The activity has suffered significant economic losses due to acute hepatopancreatic necrosis (AHPND) caused by a strain of Vibrio parahaemolyticus. For control, the first option is the application of antibiotics in feed, causing changes in the environment and in bacterial communities, which has produced greater virulence and resistance of pathogenic bacteria. An alternative treatment is silver nanoparticles (AgNPs) generated by green synthesis, which have shown antibacterial capacity by destroying the cell membrane or denaturing the cell. However, the doses at which these are effective are still unknown. The aim is to calculate the minimum inhibitory concentration (MIC) of biosynthesized AgNPs against a strain of V. parahaemolyticus using the Gompertz, Richards, and Logistic models, by testing different formulations of AgNPs synthesized from Euphorbia prostrata (Ep) extracts against the V. parahaemolyticus strain causing AHPND in white shrimp. Aqueous and ethanol extracts were obtained, and the concentrations of phenols and flavonoids were quantified. In the antibiograms, AgNPs were formulated in ethanol extracts of Ep (20 and 30%). The inhibition halos in the well diffusion test were 18±1.7 and 17.67±2.1 mm against V. parahaemolyticus. A broth microdilution was performed with the inhibitory agents (aqueous and ethanolic extracts and AgNPs) and 20 μL of the V. parahaemolyticus inoculum. The MIC was 6.2-9.3 μg/mL for the AgNPs and 49-73 mg/mL for the ethanol extract. The Akaike information criterion (AIC) was used for model selection: the Gompertz model was the best descriptor of the ethanol-extract data (AIC = 204.8 at 10%, 45.5 at 20%, and 204.8 at 30%), while the Richards model was best for the AgNPs in ethanol extract (AIC = -9.3 at 10%, -17.5 at 20 and 30%).
The MICs calculated for the Ep extracts with the modified Gompertz model were 20 mg/mL (10% and 20% extract) and 40 mg/mL (30%), while with the Richards model the MICs for the synthesized AgNPs were 5 μg/mL (10% and 20%) and 8 μg/mL (30%). The Excel Solver tool was used for the model calculations and the inhibition curves against V. parahaemolyticus.
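The model-selection step described above can be illustrated in a few lines. This is a hedged sketch, assuming a particular "modified Gompertz" form and a logistic alternative fitted to synthetic inhibition data by grid search; the data, grids, and model forms are illustrative, not the authors' formulation or measurements.

```python
import numpy as np

# Comparing sigmoid dose-response models with the Akaike information
# criterion (AIC): the lower-AIC model is preferred, as in the abstract.

def gompertz(c, A, B, M):
    # fractional growth vs. log10 concentration c (asymmetric sigmoid)
    return A * np.exp(-np.exp(B * (c - M)))

def logistic(c, A, B, M):
    # symmetric sigmoid alternative
    return A / (1.0 + np.exp(B * (c - M)))

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# synthetic inhibition data: growth fraction vs. log10 dose (ug/mL)
rng = np.random.default_rng(1)
c = np.linspace(-1.0, 2.0, 25)
y = gompertz(c, 1.0, 4.0, 0.8) + 0.02 * rng.standard_normal(25)

def grid_fit(model):
    # crude least-squares fit over a parameter grid (keeps the sketch
    # dependency-free; a real fit would use a nonlinear optimizer)
    best_rss, best_p = np.inf, None
    for A in np.linspace(0.8, 1.2, 9):
        for B in np.linspace(1.0, 8.0, 29):
            for M in np.linspace(0.0, 1.5, 31):
                rss = np.sum((y - model(c, A, B, M)) ** 2)
                if rss < best_rss:
                    best_rss, best_p = rss, (A, B, M)
    return best_p

aics = {}
for name, model in [("Gompertz", gompertz), ("logistic", logistic)]:
    p = grid_fit(model)
    aics[name] = aic(y, model(c, *p), k=3)
    print(name, "AIC =", round(aics[name], 1))
```

The winning model's fitted curve is then used to read off the MIC as the dose at which predicted growth falls below a chosen threshold.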

Keywords: green synthesis, Euphorbia prostrata, phenols, flavonoids, bactericide

Procedia PDF Downloads 106
4554 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone

Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg

Abstract:

Achieving zero-energy targets in existing buildings is known to be a difficult task, requiring important cuts in building energy consumption that in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall energy consumption. Between the building's consumption-optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art, typical mid- and high-rise fully air-conditioned office buildings, through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug-load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully air-conditioned buildings of no more than about seven floors, and only by compromising on energy-intense facilities (such as full air conditioning, unlimited power supply, standard user behaviour, etc.). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with an aggressive involvement and motivation of the users to synchronize with the new system's energy-savings goal.
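The floor-count limit described above follows from a simple energy balance: on-site generation is roughly fixed by the roof area, while consumption scales with the number of floors. The following toy calculation illustrates the mechanism only; every number (EUI, PV yield, panel density, floor plate) is an invented assumption, not a value from the study.

```python
# Back-of-envelope net-zero feasibility: fixed roof generation vs.
# per-floor demand. All inputs are illustrative assumptions.

eui = 100.0          # kWh/m2/yr energy use intensity after optimization
pv_yield = 1400.0    # kWh per kWp per year (assumed tropical yield)
pv_density = 0.15    # kWp per m2 of roof (assumed panel density)
roof_area = 1000.0   # m2, taken equal to one floor plate

generation = roof_area * pv_density * pv_yield    # kWh/yr, fixed by roof
per_floor_demand = roof_area * eui                # kWh/yr per floor

max_floors = generation / per_floor_demand
print(f"net-zero break-even at about {max_floors:.1f} floors")
```

With less conservative assumptions (lower EUI, facade PV) the break-even rises; the study's detailed simulations are what place the practical limit for optimized full-AC buildings.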

Keywords: energy simulation, office building, tropical climate, zero energy buildings

Procedia PDF Downloads 184
4553 Primary School Students’ Modeling Processes: Crime Problem

Authors: Neslihan Sahin Celik, Ali Eraslan

Abstract:

Following the PISA (Programme for International Student Assessment) survey, which tests how well students can apply the knowledge and skills they have learned at school to real-life challenges, the new and redesigned mathematics education programs in many countries emphasize the necessity for students to face complex and multifaceted problem situations and gain experience with them, allowing them to develop new skills and mathematical thinking that prepare them for life after school. At this point, mathematical models and modeling approaches can be utilized in the analysis of complex problems representing real-life situations in which students can actively participate. In particular, model-eliciting activities, which bring about situations that allow students to create solutions to problems and which involve mathematical modeling, must be used right from the primary school years, allowing students to face such complex, real-life situations from early childhood. A qualitative study was conducted in a university foundation primary school in the centre of a large province in the 2013-2014 academic year. The participants were 4th grade students in a primary school. After a four-week preliminary study applied to a fourth-grade classroom, three students were selected for the focus group using the criterion sampling technique. The focus group of three students was videotaped as they worked on the Crime Problem. The conversation of the group was transcribed, examined together with the students' written work, and then analyzed through the lens of Blum and Ferri's modeling process cycle. The results showed that primary fourth-grade students can successfully work on a model-eliciting problem, although they encounter some difficulties in the modeling process. In particular, they developed new ideas based on different assumptions, identified the patterns among variables, and established a variety of models. On the other hand, they had trouble focusing on the problem and occasionally had breaks in the process.

Keywords: primary school, modeling, mathematical modeling, crime problem

Procedia PDF Downloads 405
4552 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation also makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock can come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the corresponding buildings they are meant to represent.
This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.
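The kind of accuracy check described above can be sketched with synthetic data. The segment names and demand statistics below are invented assumptions, not the study's building stock.

```python
import numpy as np

# Illustrative sketch: quantifying how well a single archetype (here,
# the segment average) emulates the individual buildings it represents.

rng = np.random.default_rng(2)

# synthetic stock: heat demand (kWh/m2/yr) for two type/age segments
segments = {
    "apartment, pre-1960": 160 + 40 * rng.standard_normal(500),
    "apartment, post-1990": 70 + 15 * rng.standard_normal(500),
}

for name, demands in segments.items():
    archetype = demands.mean()        # archetype emulates segment mean
    # spread the archetype cannot capture at the individual level
    rmse = np.sqrt(np.mean((demands - archetype) ** 2))
    print(f"{name}: archetype {archetype:.0f}, per-building RMSE {rmse:.0f}")
```

By construction the archetype matches the segment's average demand, so aggregate errors stay small while per-building errors remain; repeating the comparison per sub-demand (heating, hot water, etc.) follows the same pattern.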

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 154
4551 Assessing Students’ Readiness for an Open and Distance Learning Higher Education Environment

Authors: Upasana G. Singh, Meera Gungea

Abstract:

Learning is no longer confined to the traditional classroom and teacher-student interaction. Many universities offer courses through the Open and Distance Learning (ODL) mode, attracting a diversity of learners in terms of age, gender, and profession, to name a few. The ODL mode has surfaced as one of the most sought-after modes of learning, allowing learners to invest in their educational growth without hampering their personal and professional commitments. This mode of learning, however, requires that those who choose to adopt it be prepared to undertake studies through such a medium. The purpose of this research is to assess whether students who join universities offering courses through the ODL mode are ready to embark on and study within such a framework. This study will help unveil the challenges students face in such an environment and thus contribute to developing a framework to ease adoption of and integration into the ODL environment. Prior to the implementation of e-learning, a readiness assessment is essential for any institution that wants to adopt any form of e-learning. Various e-learning readiness assessment models have been developed over the years. However, this study is based on a conceptual 'hybrid model' for e-learning readiness assessment. This hybrid model consists of four main parameters: 1) technological readiness, 2) culture readiness, 3) content readiness, and 4) demographic factors, with four sub-areas, namely technology, innovation, people, and self-development. The model also includes the attitudes of users towards the adoption of e-learning as an important aspect of assessing e-learning readiness. For this study, some factors and sub-factors of the hybrid model have been considered and adapted, together with the 'Attitude' component.
A questionnaire was designed based on the model, with the target population being students enrolled in undergraduate and postgraduate courses at the Open University of Mauritius. Preliminary findings indicate that most (68%) learners have average knowledge of the ODL form of learning, despite many (72%) having no previous experience with ODL. Despite learning through ODL, 74% of learners preferred hard-copy learning material, and 48% found it difficult to read learning material on electronic devices.

Keywords: open learning, distance learning, student readiness, hybrid model

Procedia PDF Downloads 109
4550 Study of the Relationship between the Civil Engineering Parameters and the Floating of a Buoy Model Made from Expanded Polystyrene-Mortar

Authors: Panarat Saengpanya

Abstract:

This study had five objectives: the study of housing types in water environments, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between the civil engineering parameters and the floating of the buoy. The buoy specimens were made from Expanded Polystyrene (EPS) covered by a 5 mm thickness of mortar, with equal thickness on each side. Specimens were 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships is ASTM C109, to provide comparative results. The study found three types of housing in water environments: stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the mortar strength was found to be 72 times that of EPS. One advantage of a composite is that two or more materials can be combined to take advantage of the good characteristics of each material. The strength of the buoy is influenced by the mortar, while its floating is influenced by the EPS. Results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain; the strength then reduced while the strain continued. It was observed that the failure strength reduced with increasing total volume of the specimens. For buoy specimens with the same area, an increase in failure strength was found when the height was increased. The results showed the relationships between five parameters: the floating level, the bearing capacity, the volume, the height, and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength.
The total volume and the unit weight were related to the bearing capacity of the buoy.
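The link between unit weight and floating level noted above is Archimedes' principle, which can be sketched directly. The buoy size and densities below are typical illustrative values, not the study's measurements.

```python
# Floating level (draft) of an EPS core with a mortar shell from
# Archimedes' principle. All inputs are illustrative assumptions.

rho_water = 1000.0    # kg/m3
rho_eps = 15.0        # kg/m3, typical EPS
rho_mortar = 2100.0   # kg/m3, typical mortar

side = 0.5            # m, cube-shaped buoy (assumed size)
t = 0.005             # m, 5 mm mortar shell as in the abstract

v_total = side ** 3
v_core = (side - 2 * t) ** 3          # EPS core volume
v_shell = v_total - v_core            # mortar shell volume

mass = rho_eps * v_core + rho_mortar * v_shell    # kg
unit_weight = mass / v_total                      # kg/m3
draft = mass / (rho_water * side ** 2)            # submerged depth, m

print(f"unit weight {unit_weight:.0f} kg/m3, draft {draft * 1000:.0f} mm")
```

The buoy floats only while its unit weight stays below that of water; with the same 5 mm shell on a 0.05 m cube, the shell dominates the volume and the average density exceeds 1000 kg/m3, which is one reason scale matters for the floating behaviour.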

Keywords: floating house, buoy, floating structure, EPS

Procedia PDF Downloads 146
4549 A Sustainable Supplier Selection and Order Allocation Based on Manufacturing Processes and Product Tolerances: A Multi-Criteria Decision Making and Multi-Objective Optimization Approach

Authors: Ravi Patel, Krishna K. Krishnan

Abstract:

In global supply chains, appropriate and sustainable suppliers play a vital role in supply chain development and feasibility. In a large organization with a huge number of suppliers, it is necessary to classify suppliers based on their past history of quality and delivery for each product category. Since the performance of any organization depends widely on its suppliers, well-evaluated selection criteria and decision-making models lead to improved supplier assessment and development. In this paper, the SCOR® performance evaluation approach and ISO standards are used to determine selection criteria for better supplier assessment, using a hybrid model of the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS). AHP is used to determine the global weights of the criteria, which help TOPSIS compute supplier scores using triangular fuzzy set theory. Both qualitative and quantitative criteria are taken into consideration in the proposed model. In addition, a multi-product and multi-time-period model is selected for order allocation. The optimization model integrates multi-objective integer linear programming (MOILP) for order allocation with the hybrid approach for supplier selection. The proposed MOILP model optimizes order allocation based on manufacturing processes and product tolerances as per the manufacturer's requirements for quality products. The integrated model and solution approach are tested to find optimized solutions for different scenarios. The detailed analysis shows the superiority of the proposed model over solutions that considered individual decision-making models.
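The AHP-to-TOPSIS chain described above can be sketched in its crisp (non-fuzzy) form for brevity; the pairwise judgments and decision matrix below are invented, and the fuzzy triangular-number extension the paper uses is omitted.

```python
import numpy as np

# Hedged sketch: AHP criteria weights feeding a TOPSIS supplier ranking.

# --- AHP: weights from a pairwise comparison matrix ---
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # e.g. quality vs delivery vs cost
w = np.ones(3)
for _ in range(50):                      # power iteration -> principal
    w = P @ w                            # eigenvector = AHP weights
    w /= w.sum()

# --- TOPSIS: rank suppliers on the weighted, normalized matrix ---
X = np.array([[0.9, 0.7, 0.6],           # rows: suppliers, cols: criteria
              [0.8, 0.9, 0.8],
              [0.6, 0.8, 0.9]])          # all treated as benefit criteria
V = w * X / np.linalg.norm(X, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
score = d_neg / (d_pos + d_neg)          # relative closeness to ideal
print("supplier scores:", np.round(score, 3))
print("best supplier:", int(np.argmax(score)))
```

The resulting scores are what the order-allocation MOILP would consume as supplier preferences.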

Keywords: AHP, fuzzy set theory, multi-criteria decision making, multi-objective integer linear programming, TOPSIS

Procedia PDF Downloads 171
4548 Examples of Parameterization of Stabilizing Controllers with One-Side Coprime Factorization

Authors: Kazuyoshi Mori

Abstract:

Examples of parameterization of stabilizing controllers that require only one of the right-/left-coprime factorizations are presented. One parameterization method requires a one-sided coprime factorization; the other requires no coprime factorization. The methods are based on the factorization approach, so that the method used in this paper can be applied to a number of models.
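For context, the underlying two-sided result that such one-sided methods refine can be stated compactly in the SISO case. This is the standard textbook Youla-type parameterization, given here as background under the assumption of a right-coprime factorization, not as the paper's one-sided construction:

```latex
P = \frac{N}{D}, \qquad XN + YD = 1 \quad (N, D, X, Y \text{ stable}),
\qquad C(Q) = \frac{X + QD}{Y - QN}, \quad Q \text{ stable}.
```

Every such C(Q) stabilizes P, since the closed-loop characteristic N(X+QD) + D(Y-QN) = XN + YD = 1 remains stable for every stable Q; the methods in this paper aim at comparable parameterizations without computing both factorizations.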

Keywords: parametrization, coprime factorization, factorization approach, linear systems

Procedia PDF Downloads 373
4547 Predictor Factors in Predictive Model of Soccer Talent Identification among Male Players Aged 14 to 17 Years

Authors: Muhamad Hafiz Ismail, Ahmad H., Nelfianty M. R.

Abstract:

This longitudinal study was conducted to identify predictive factors of soccer talent among male players aged 14 to 17 years. Convenience sampling was used, involving elite (n=20) and sub-elite (n=20) male soccer players. Descriptive statistics were reported as frequencies and percentages. Inferential statistical analysis was used to report reliability, independent-samples t-tests, paired-samples t-tests, and multiple regression analysis. Generally, there were differences in the means of height, muscular strength, muscular endurance, cardiovascular endurance, task orientation, cognitive anxiety, self-confidence, juggling skills, short pass skills, long pass skills, dribbling skills, and shooting skills between the 20 elite players and the sub-elite players. Accordingly, there was a significant difference between pre- and post-test for thirteen variables: height, weight, fat percentage, muscle strength, muscle endurance, cardiovascular endurance, flexibility, BMI, task orientation, juggling skills, short pass skills, long pass skills, and dribbling skills. Based on the first predictive factor (physical), second predictive factor (fitness), third predictive factor (psychological), and fourth predictive factor (soccer skills) contributing to soccer talent, four multiple regression models were produced. The first predictive factor (physical) contributed 53.5 percent, supported by height and fat percentage, to soccer talent. The second predictive factor (fitness) contributed 63.2 percent and the third predictive factor (psychological) contributed 66.4 percent of soccer talent. The fourth predictive factor (skills) contributed 59.0 percent of soccer talent. The four multiple regression models could be used as a guide for talent scouting of future soccer players.
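A "predictive factor" model of the kind reported above is a multiple regression whose R² is read as the percentage contribution. The following hedged sketch uses invented synthetic data and coefficients purely for illustration, not the study's measurements.

```python
import numpy as np

# Multiple regression of a talent score on "physical" predictors, with
# R^2 as the factor's contribution. All data below are synthetic.

rng = np.random.default_rng(3)
n = 40
height = 170 + 8 * rng.standard_normal(n)        # cm
fat = 18 + 5 * rng.standard_normal(n)            # % body fat
talent = 0.5 * height - 0.8 * fat + 6 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), height, fat])   # intercept + predictors
beta, *_ = np.linalg.lstsq(X, talent, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((talent - pred) ** 2) / np.sum((talent - talent.mean()) ** 2)
print(f"physical-factor model contributes R^2 = {r2:.0%}")
```

Each of the four factor models in the study corresponds to one such regression with its own predictor block.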

Keywords: soccer talent identification, fitness and physical test, soccer skills test, psychological test

Procedia PDF Downloads 157
4546 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study aims to evaluate the efficacy of the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) for modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications. However, it has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady-flow hydraulic model, using HEC-RAS, for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA's Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave equations (DW), the Shallow Water equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave results were found to be comparable to those of the two Shallow Water equation sets while requiring less computational power. The combined 1D/2D approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the greater length of the time frame modeled and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow areas.
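The R² validation metric used above is straightforward to compute against a gauge record. The sketch below uses synthetic sinusoidal stand-ins for the observed and simulated water-surface series; neither is HEC-RAS output or NOAA data.

```python
import numpy as np

# Scoring modeled water surface elevation against a tide gauge with R^2.

t = np.arange(0.0, 14 * 24.0, 1.0)            # hourly samples, 14 days
omega_m2 = 2 * np.pi / 12.42                  # M2 tide, rad per hour

rng = np.random.default_rng(4)
observed = 0.6 * np.sin(omega_m2 * t) + 0.05 * rng.standard_normal(t.size)
simulated = 0.58 * np.sin(omega_m2 * t + 0.05)   # small amp/phase error

ss_res = np.sum((observed - simulated) ** 2)     # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

Small amplitude and phase errors still give a high R², which is why the metric is averaged over many gauges and mesh cells when judging a whole model.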

Keywords: Delaware bay, estuarine hydraulics, Galveston bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 199
4545 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance

Authors: V. Kumar, K. Jha

Abstract:

The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance. It is evident that modifications of the vortex finder improve the performance of the cyclone separator significantly. This study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform-diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with body diameter 0.29 m and vortex finder diameter 0.1305 m are calculated. The commercial software ANSYS Fluent v14.0 is used to simulate the flow field in a uniform-diameter cyclone and in six cyclones modified with CD vortex finders. The Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and the discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with that of the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter, and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are also analysed. Results indicate that a convergent-divergent vortex finder is capable of a lower pressure drop than that achieved with a uniform-diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter, and the length of the diverging part have a significant impact on cyclone separator performance.
Increasing the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. Increasing the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform-diameter vortex finders to achieve better cyclone separator performance.
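The dimensionless pressure drop used above is the Euler number, i.e. the pressure drop scaled by the inlet dynamic pressure. The sketch below shows the definition and a percentage comparison; the pressure drops, density, and inlet velocity are illustrative values, not the paper's CFD results.

```python
# Euler number comparison between two vortex-finder designs.
# All numeric inputs are illustrative assumptions.

def euler_number(dp, rho, v):
    """Eu = pressure drop / dynamic pressure at the inlet."""
    return dp / (0.5 * rho * v ** 2)

rho_air = 1.2        # kg/m3
v_in = 15.0          # m/s, assumed inlet velocity

eu_base = euler_number(1050.0, rho_air, v_in)   # uniform vortex finder
eu_cd = euler_number(930.0, rho_air, v_in)      # CD vortex finder

change = 100 * (eu_cd - eu_base) / eu_base
print(f"Eu base {eu_base:.2f}, CD {eu_cd:.2f} ({change:+.1f}%)")
```

Because Eu normalizes by the dynamic pressure, it lets geometrically different designs be compared at matched inlet conditions.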

Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model

Procedia PDF Downloads 173
4544 Scheduling Building Projects: The Chronographical Modeling Concept

Authors: Adel Francis

Abstract:

Most scheduling methods and software apply critical path logic. This logic schedules activities, applies constraints between these activities, and tries to optimize and level the allocated resources. The extensive use of this logic produces complex and error-prone networks that are hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the available resources, and resources cannot be used beyond the capacity of the workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site-occupancy rates, and ensure a suitable rotation of the workforce through the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, association and organizational models, which help to better illustrate the schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and different layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, having the ability to alternate from one visual approach to another through the manipulation of graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information.
In this way, graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying the site management while simultaneously utilizing the visual space as efficiently as possible.
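The critical-path logic that the abstract takes as its point of departure reduces to a forward and backward pass over the activity network. The sketch below runs both passes on a tiny invented network; the activities and durations are illustrative only.

```python
# Minimal CPM sketch: forward/backward pass and critical activities.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# forward pass: earliest start (es) and earliest finish (ef)
es, ef = {}, {}
for a in ["A", "B", "C", "D"]:                  # topological order
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

# backward pass: latest finish (lf) and latest start (ls)
project_end = max(ef.values())
succs = {a: [b for b in preds if a in preds[b]] for a in preds}
lf, ls = {}, {}
for a in ["D", "C", "B", "A"]:                  # reverse order
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

critical = [a for a in durations if es[a] == ls[a]]   # zero total float
print("project duration:", project_end, "critical path:", critical)
```

What CPM does not carry, and what chronographic modelling adds, is where each activity occupies the site while it runs.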

Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling

Procedia PDF Downloads 155
4543 Evaluation of Reliability Flood Control System Based on Uncertainty of Flood Discharge, Case Study Wulan River, Central Java, Indonesia

Authors: Anik Sarminingsih, Krishna V. Pradana

Abstract:

The failure of a flood control system can be caused by various factors, such as not considering the uncertainty of the design flood, causing the capacity of the flood control system to be exceeded. The presence of uncertainty is recognized as a serious issue in hydrological studies. Uncertainty in hydrological analysis is influenced by many factors, from the reading of water-elevation data and rainfall data to the selection of the method of analysis, etc. In hydrological modeling, the selection of models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in knowing the reliability of the flood control system, and the reliability of the river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java. This river floods almost every year despite several flood-control efforts such as levees, a floodway, and a diversion. The flood-affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of the discharge observations from the Klambu weir, for which time-series data are available from 1951-2013. The frequency analysis is performed using several frequency distribution models, such as the Gumbel, Normal, Log-Normal, Pearson Type III, and Log-Pearson distributions. Based on the standard deviations, the results of the models overlap, so the maximum flood discharge for lower return periods may be worth more than the average discharge for larger return periods. The next step is a hydraulic analysis to evaluate the reliability of the river capacity based on the flood discharges resulting from the several methods. The design flood discharge of the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
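One of the frequency analyses named above, the Gumbel fit, can be sketched with the method of moments. The annual-maximum sample below is synthetic, not the Klambu weir record, and the parameter estimates are illustrative of the procedure only.

```python
import numpy as np

# Gumbel flood-frequency analysis by the method of moments, reading
# off design discharges at several return periods.

rng = np.random.default_rng(5)
annual_max = 800.0 + 250.0 * rng.gumbel(size=63)   # m3/s, 63 "years"

std = annual_max.std(ddof=1)
beta = std * np.sqrt(6) / np.pi                    # Gumbel scale
mu = annual_max.mean() - 0.5772 * beta             # Gumbel location

qs = []
for T in (2, 10, 100):                             # return periods, yr
    q = mu - beta * np.log(-np.log(1.0 - 1.0 / T)) # quantile at F = 1-1/T
    qs.append(q)
    print(f"T = {T:>3} yr: Q = {q:.0f} m3/s")
```

Repeating the same quantile calculation for each candidate distribution is what produces the overlapping design-flood estimates the abstract compares against bankfull capacity.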

Keywords: design flood, hydrological model, reliability, uncertainty, Wulan river

Procedia PDF Downloads 294
4542 Implementation of Statistical Parameters to Form Entropic Mathematical Models

Authors: Gurcharan Singh Buttar

Abstract:

It has been observed that although statistics and information theory are independent in nature, they can be combined to create applications in multidisciplinary mathematics. In the field of statistics, statistical parameters (measures) play an essential role with reference to the population (distribution) under investigation, while information measures are crucial in the study of the ambiguity, diversity, and unpredictability present in an array of phenomena. The following communication is a link between the two, and it is demonstrated that the well-known conventional statistical measures can be used as measures of information.
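The statistics/information-theory link can be illustrated concretely: Shannon entropy, like variance, grows with the dispersion of a distribution. The distributions below are illustrative examples, not the paper's constructions.

```python
import numpy as np

# Shannon entropy of a discrete distribution as a dispersion-like
# measure: a peaked distribution carries less entropy than a uniform one.

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                         # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))       # bits

peaked = [0.90, 0.05, 0.03, 0.02]        # low dispersion
uniform = [0.25, 0.25, 0.25, 0.25]       # maximum dispersion on 4 outcomes

print(f"peaked:  H = {entropy(peaked):.3f} bits")
print(f"uniform: H = {entropy(uniform):.3f} bits")   # log2(4) = 2 bits
```

The uniform distribution attains the maximum entropy log2(n) for n outcomes, mirroring the way variance is maximized by spreading probability mass.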

Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency

Procedia PDF Downloads 156
4541 Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design

Authors: Ling Liyun

Abstract:

In the field of contemporary architecture, complex forms of architectural works continue to emerge around the world, along with new terminology: digital architecture, parametric design, algorithmic generation, building information modeling, CNC construction, and so on. Architects have gradually mastered new skills of mathematical logic in formal exploration, virtual simulation, and the coordination of the entire design and construction process. Digital construction technology affords a greater degree of control over construction and ensures its accuracy, creating a series of new construction techniques. As a result, the use of digital technology is an improvement on, and an expansion of, the practice of the digital architectural design revolution. We worked by reading and analyzing information about the development process of digital architecture, a large number of cases, and architectural design and construction as a whole process. Current developments are thus introduced and discussed in our paper, such as architectural discourse, design theory, digital design models and techniques, material selection, and artificial-intelligence space design. Our paper also examines three representative cases of digital design and construction experiments at length, to expound high informatization, highly reliable intelligence, and high technique in constructing a humane space that copes with the rapid development of urbanization. We conclude that opportunities and challenges exist in the shift of architectural paradigms, in the cooperation methods, theories, models, technologies, and techniques currently employed in digital design research and digital praxis. We also find that the innovative use of space can gradually change the way people learn, talk, and control information.
Over the past two decades, digital technology has radically broken the technological constraints of industrial products and moved beyond the dominance of any particular architectural style (era doctrine). People should not have to adapt to the machine; rather, the machine should be made to work for its users.

Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction

Procedia PDF Downloads 136