Search results for: turbulence models
4540 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve high quality in assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend the tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this purpose, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently. Keywords: automotive production, machine learning, process optimization, smart tolerancing
Procedia PDF Downloads 117
4539 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, however, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparison between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach can not only approximate the distribution of temperature, salinity, and depth directly without prior knowledge of an assumed regression family, but can also restrict the complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea. Keywords: water mass, Gaussian mixture model, data visualization, system framework
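As a rough illustration of the distance described in this abstract (a sketch, not the authors' code; the weights, means, and covariances below are hypothetical), the following Python snippet computes the Bhattacharyya distance between multivariate Gaussian components and combines the pairwise distances of two mixtures by their weights:

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate normal distributions."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(w_a, means_a, covs_a, w_b, means_b, covs_b):
    """Weighted sum of pairwise Bhattacharyya distances between two GMMs."""
    return sum(wa * wb * bhattacharyya(ma, ca, mb, cb)
               for wa, ma, ca in zip(w_a, means_a, covs_a)
               for wb, mb, cb in zip(w_b, means_b, covs_b))

# Two hypothetical water-mass models, each with two (T, S, depth) components.
w1, means1 = [0.6, 0.4], [np.array([20.0, 34.5, 50.0]), np.array([8.0, 34.2, 400.0])]
covs1 = [np.diag([1.0, 0.05, 200.0]), np.diag([0.5, 0.02, 900.0])]
w2, means2 = [0.5, 0.5], [np.array([22.0, 34.8, 40.0]), np.array([9.0, 34.3, 350.0])]
covs2 = [np.diag([1.2, 0.04, 150.0]), np.diag([0.6, 0.03, 800.0])]

print(f"GMM distance: {gmm_distance(w1, means1, covs1, w2, means2, covs2):.3f}")
```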
Procedia PDF Downloads 145
4538 Functioning of a Temporarily Single Parent Family System Due to Migration from the Perspective of Adolescents with Cerebral Palsy
Authors: A. Gagat-Matuła
Abstract:
There is a definite lack – in Poland, as well as around the world – of empirical studies of families raising handicapped child, in which one parent migrates. In diagnostics of the functioning of such families emphasis should be placed not only on the difficulties, but most of all it should be indicated what possibilities are there for the family and how it overcomes the difficulties. Migration of a parent on the one hand is a chance to improve the family’s material situation. In certain circumstances this may only be an “escape” into work from the issues associated with the upbringing and rehabilitation of a handicapped child. The aim of the study was to learn the functioning of a temporarily single parent family system as a result of migration of a parent from the perspective of adolescents with cerebral palsy. The study was conducted in the year 2013 in the area of Eastern Poland. It involved an analysis of 70 persons (with cerebral palsy in an intellectual capacity) from families in which at least one of the parents migrates. The study incorporated the diagnostic survey method. These tools were used: Family Evaluation Scales (SOR) adapted for Poland by Andrzej Margasiński. The explorations in this study indicate, that 47% of studied temporarily single parent families are balanced models. This is evidence of the resources at the disposal of the family which, despite the disability of the child and temporary separation, is able to function properly. The conducted studies show, that 37% of temporarily single parent families are imbalanced models in the perception of adolescents with cerebral palsy. These families experience functional difficulties and require psychological and pedagogical support. There is a need for building skills related to effective coping with family stress. Especially considering, that families of an imbalanced type do not use the internal and external resources of the family system. Such a situation may deepen the disarrangement of family life. In intermediate families (16%) there are also temporary difficulties in functioning. Separation anxiety experienced by mothers may disrupt relations and introduce additional stress factors. For that reason it is important to provide support for women with difficulties coping with the emotions associated with raising handicapped adolescents and migratory separation.Keywords: child with cerebral palsy, family, migration, parents
Procedia PDF Downloads 419
4537 Fatigue of Multiscale Nanoreinforced Composites: 3D Modelling
Authors: Leon Mishnaevsky Jr., Gaoming Dai
Abstract:
3D numerical simulations of fatigue damage of multiscale fiber reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro-micro FE models of the multiscale composites are generated automatically using Python based software. The effect of the nanoclay reinforcement (localized in the fiber/matrix interface (fiber sizing) and distributed throughout the matrix) on the crack path, damage mechanisms and fatigue behavior is investigated in numerical experiments.Keywords: computational mechanics, fatigue, nanocomposites, composites
Procedia PDF Downloads 607
4536 Solitons and Universes with Acceleration Driven by Bulk Particles
Authors: A. C. Amaro de Faria Jr, A. M. Canone
Abstract:
Considering a scenario where our universe is taken as a 3D domain wall embedded in a 5-dimensional Minkowski space-time, we explore the existence of a richer class of solitonic solutions and their consequences for accelerating universes driven by collisions of bulk particle excitations with the walls. In particular, it is shown that some of these solutions should play a fundamental role at the beginning of the expansion process. We present some of these solutions in cosmological scenarios that can be applied to models that describe the inflationary period of the Universe. Keywords: solitons, topological defects, branes, kinks, accelerating universes in brane scenarios
Procedia PDF Downloads 137
4535 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operational planning of decentralized power and heat generators and storage systems. This also includes linear or mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step and the complexity of the problem to be solved by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization in such a way that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps. For example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both approaches with regard to calculation duration and optimality. Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
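A minimal sketch of the aggregation idea described above (illustrative notation, not the paper's own, and assuming all n identical units are online with linear cost c): replacing n per-unit variables by one aggregated variable reduces the decision variables per time step by a factor of n, and the individual outputs are recovered in a second-stage optimization.

```latex
\min \sum_{t}\sum_{i=1}^{n} c\,P_{i,t}
\;\;\text{s.t.}\;\; P^{\min} \le P_{i,t} \le P^{\max}
\;\;\longrightarrow\;\;
\min \sum_{t} c\,P^{\mathrm{agg}}_{t}
\;\;\text{s.t.}\;\; n\,P^{\min} \le P^{\mathrm{agg}}_{t} \le n\,P^{\max},
\qquad\text{then recover}\;\; \sum_{i=1}^{n} P_{i,t} = P^{\mathrm{agg}}_{t}.
```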
Procedia PDF Downloads 178
4534 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels
Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch
Abstract:
Scaled modeling is very common in the areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers enables to determine relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at investigation of room acoustics, sound insulation and sound absorption phenomena. Despite such a range of application, there is no method developed that would enable scaling acoustical perforated panels freely, maintaining their sound absorption coefficient in a desired frequency range. However, conducted theoretical and numerical analyses have proven that it is not physically possible to obtain given sound absorption coefficient in a desired frequency range by directly scaling only all of the physical dimensions of a perforated panel, according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents practical evaluation of theoretical and numerical analyses. The measurements of sound absorption coefficient of perforated panels were performed in order to verify previous analyses and as a result find the relations between full-scale perforated panels and their models which will enable to scale them properly. The measurements were conducted in a one-to-eight model of a reverberation chamber of Technical Acoustics Laboratory, AGH. Obtained results verify theses proposed after theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow to produce measurement samples equivalent to the original ones. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the costs of prototypes production. Having this knowledge, it will be possible to emulate in a constructed model panels used, or to be used, in a full-scale room more precisely and as a result imitate or predict the acoustics of a modeled space more accurately.Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient
Procedia PDF Downloads 196
4533 Research on the Impact of Spatial Layout Design on College Students’ Learning and Mental Health: Analysis Based on a Smart Classroom Renovation Project in Shanghai, China
Authors: Zhang Dongqing
Abstract:
Concern for students' mental health and the application of intelligent advanced technologies are driving changes in teaching models. The traditional teacher-centered classroom is beginning to transform into a student-centered smart interactive learning environment. Nowadays, smart classrooms are compatible with constructivist learning. This theory emphasizes the role of teachers in the teaching process as helpers and facilitators of knowledge construction, and students learn by interacting with them. The spatial design of classrooms is closely related to the teaching model and should also be developed in the direction of smart classroom design. The goal is to explore the impact of smart classroom layout on student-centered teaching environment and teacher-student interaction under the guidance of constructivist learning theory, by combining the design process and feedback analysis of the smart transformation project on the campus of Tongji University in Shanghai. During the research process, the theoretical basis of constructivist learning was consolidated through literature research and case analysis. The integration and visual field analysis of the traditional and transformed indoor floor plans were conducted using space syntax tools. Finally, questionnaire surveys and interviews were used to collect data. The main conclusions are as followed: flexible spatial layouts can promote students' learning effects and mental health; the interactivity of smart classroom layouts is different and needs to be combined with different teaching models; the public areas of teaching buildings can also improve the interactive learning atmosphere by adding discussion space. This article provides a data-based research basis for improving students' learning effects and mental health, and provides a reference for future smart classroom design.Keywords: spatial layout, smart classroom, space syntax, renovation, educational environment
Procedia PDF Downloads 73
4532 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection
Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra
Abstract:
In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging
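The following TensorFlow/Keras sketch is only a plausible illustration (not the authors' code) of the two ideas highlighted in the abstract: balanced class weights for the nine imbalanced classes and a hybrid feature extractor that concatenates VGG16 and ResNet50 outputs; the input size, label array, and training call are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

num_classes = 9
y_train = np.random.randint(0, num_classes, size=3000)   # placeholder labels

# Class weights to counter the imbalance between the nine skin conditions.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))

# Hybrid feature extractor: VGG16 and ResNet50 backbones on the same input.
inputs = tf.keras.Input(shape=(224, 224, 3))
vgg_features = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", pooling="avg")(inputs)
res_features = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")(inputs)
merged = tf.keras.layers.Concatenate()([vgg_features, res_features])
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(merged)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, class_weight=class_weight, epochs=20)
```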
Procedia PDF Downloads 87
4531 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations
Authors: Sadia H. Syeda, Zain H. Malik
Abstract:
As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of the systems in a digital form becomes ever more critical. Digital Engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables the developing, tracking, and sharing system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The DE activities' primary objective is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S Department of Defense’s (DoD) Office of the Secretary of Defense (OSD’s) Digital Engineering Strategy and their implementation, examples of DE implementation by the industry and technical organizations, this paper will provide descriptions of the current DE processes and best practices of implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with its business strategy. A capability maturity matrix will be provided to assess the organization’s DE maturity emphasizing how all the SELC elements interlink to form a cohesive ecosystem. If implemented, DE can increase efficiency and improve the systems engineering processes' quality and outcomes.Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle
Procedia PDF Downloads 93
4530 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant carriers of cultural heritage, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient texture samples and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; finally, we achieved our goal by fusing the rearranged foreground texts and the generated background. In experiments, diversified samples such as ancient Yi, Jurchen, and seal scripts were selected as our training sets. Then, the performance of different fine-tuned models was gradually improved by adjusting the DCGAN model's parameters as well as its structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images. Keywords: deep learning, image fusion, image generation, layout analysis
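For reference (the standard definition, not specific to this paper), the Fréchet Inception Distance used as the assessment criterion compares the Gaussian statistics of Inception-network features of real and generated images:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^{2}
  + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right),
```

where $(\mu_r,\Sigma_r)$ and $(\mu_g,\Sigma_g)$ are the feature means and covariances for real and generated images; lower values indicate closer distributions.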
Procedia PDF Downloads 157
4529 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt
Abstract:
When a structure is installed on a seabed, the presence of the structure will influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flows, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried off and result in scour (erosion), which is a threat to the structure's stability. In recent decades, rapid development of research work and knowledge on scour around fixed structures (bridges and monopiles) in rivers and oceans has taken place, while very limited research exists on scour and liquefaction for gravity anchors, particularly for floating Tension Leg Platform (TLP) substructures. Due to its importance and the need to enhance knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modeling of Liquefaction Around Marine Structures). Its consortium consists of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It helps to design a heavyweight anchor for the TLP substructure and to minimize the time and expenditure on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed by taking into consideration the available state-of-the-art knowledge on scour and liquefaction concepts and references to previously existing designs. These conceptual designs are validated against available similar experimental benchmark data and also against CFD numerical benchmark standards (a CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interactions. Parameters such as the anchor lowering process, flow-optimized outer contours, seabed interaction, and FSSI (Fluid-Structure-Seabed Interactions) are investigated and used to shape the model so as to build an optimized anchor. Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics
Procedia PDF Downloads 144
4528 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code are now available in order to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require a longer execution time, as the word embedding algorithm adds some complexity to the overall system. Keywords: cyber security, vulnerability detection, neural networks, feature extraction
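As a minimal, hypothetical sketch of the kind of pipeline the abstract describes (not the authors' architecture or hyperparameters), token-ID sequences of source functions could be fed into a BiLSTM binary classifier along these lines; the vocabulary size and embedding dimension below are assumptions.

```python
import tensorflow as tf

# Hypothetical sizes: vocabulary of source-code tokens and embedding dimension.
vocab_size, embed_dim = 20000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # vulnerable vs. not
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(name="precision"),
                       tf.keras.metrics.Recall(name="recall")])
```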
Procedia PDF Downloads 90
4527 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach
Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani
Abstract:
Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial in developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. A solid tumor is treated as a porous medium for the fluid flow simulation. Most studies use the Darcy model for the porous medium. In the Darcy model, fluid friction is neglected and a few simplifying assumptions are made. In this study, the effect of these assumptions is examined by considering the Brinkman model. A multi-scale mathematical method which calculates fluid flow to a solid tumor is used to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model from our previous studies is extended by considering two models of the momentum equation for porous media: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood flow from vessels, blood flow through vessels, solute diffusion, and convective transport in the extracellular matrix. The sprouting angiogenesis model is used for generating the capillary network, and then the governing equations of fluid flow are solved to calculate blood flow through the tumor-induced capillary network. Finally, the two porous media models are used for modeling fluid flow in normal and tumor tissues for three different tumor shapes. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity, and the Brinkman model predicts a lower interstitial velocity than the Darcy model. Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery
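For orientation (standard textbook forms, with symbols not necessarily matching the paper's notation), the two momentum equations compared in the abstract are:

```latex
\text{Darcy:}\quad \nabla p = -\frac{\mu}{K}\,\mathbf{v}
\qquad\qquad
\text{Brinkman:}\quad \nabla p = -\frac{\mu}{K}\,\mathbf{v} + \mu_{e}\,\nabla^{2}\mathbf{v},
```

where $p$ is the interstitial pressure, $\mathbf{v}$ the fluid velocity, $\mu$ the dynamic viscosity, $K$ the tissue permeability, and $\mu_{e}$ an effective viscosity; the additional Laplacian term represents the fluid friction that the Darcy model neglects.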
Procedia PDF Downloads 307
4526 Leisure, Domestic or Professional Activities so as to Prevent Cognitive Decline: Results FreLE Longitudinal Study
Authors: Caroline Dupre, David Hupin, Christ Goumou, Francois Belan, Frederic Roche, Thomas Celarier, Bienvenu Bongue
Abstract:
Background: Previous cohorts have been notably criticized for not studying the different type of physical activity and not investigating household activities. The objective of this work was to analyse the relationship between physical activity and cognitive decline in older people living in the community. Impact of type of physical activity on the results has been realised. Methods: The study used data from the longitudinal and observational study , FrèLE (FRagility: Longitudinal Study of Expressions). The collected data included: socio-demographic variables, lifestyle, and health status (frailty, comorbidities, cognitive status, depression). Cognitive decline was assessed by using: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Physical activity was assessed by the Physical Activity Scale for the Elderly (PASE). This tool is structured in three sections: the leisure activity, domestic activity, and professional activity. Logistic regressions and proportional hazards regression models (Cox) were used to estimate the risk of cognitive disorders. Results: At baseline, the prevalence of cognitive disorders was 6.9% according to MMSE. In total, 1167 participants without cognitive disorders were included in the analysis. The mean age was 77.4 years, and 52.1% of the participants were women. After a 2 years long follow-up, we found cognitive disorders on 53 participants (4.5%). Physical activity at baseline is lower in older adults for whom cognitive decline was observed after two years of follow-up. Subclass analyses showed that leisure and domestic activities were associated with cognitive decline, but not professional activities. Conclusions: Analysis showed a relationship between cognitive disorders and type of physical activity. The current study will be completed by the MoCA for mild cognitive impairment. These findings compared to other ongoing studies, will contribute to the debate on the beneficial effects of physical activity on cognition.Keywords: aging, cognitive function, physical activity, mixed models
Procedia PDF Downloads 126
4525 An Ecological Systems Approach to Risk and Protective Factors of Sibling Conflict for Children in the United Kingdom
Authors: C. A. Bradley, D. Patsios, D. Berridge
Abstract:
This paper presents evidence to better understand the risk and protective factors related to sibling conflict and the patterns of association between sibling conflict and negative adjustment outcomes by incorporating additional familial and societal factors within statistical models of risk and adjustment. It was conducted through the secondary analysis of a large representative cross-sectional dataset of children in the UK. The original study includes proxy interviews for young children and self-report interviews for adolescents. The study applies an ecological systems framework for the analyses. Hierarchical regression models assess risk and protective factors and adjustment outcomes associated with sibling conflict. Interactions reveal differential effect between contextual risk factors and the social context of influence. The general pattern of findings suggested that, although factors affecting likelihood of experiencing sibling conflict were often determined by child age, some remained consistent across childhood. These factors were often conditional on each other, reinforcing the importance of an ecological framework. Across both age-groups, sibling conflict was associated with siblings closer in age; male sibling groups; most advantaged socio-economic group; and exposure to community violence, such as witnessing violent assault or robbery. The study develops the evidence base on the influence of ethnicity and socio-economic group on sibling conflict by exploring interactions between social context. It also identifies key new areas of influence – such as family structure, disability, and community violence in exacerbating or reducing risk of conflict. The study found negative associations between sibling conflict and young children’s mental well-being and adolescents' mental well-being and anti-social behaviour, but also more context specific associations – such as sibling conflict moderating the negative impact of adversity and high risk experiences for young children such as parental violence toward the child.Keywords: adjustment, conflict, ecological systems, family systems, risk and protective factors, sibling
Procedia PDF Downloads 107
4524 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors. Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
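As a hedged illustration of the least-squares catenary fitting mentioned above (a 2D simplification with made-up numbers, not the paper's 3D formulation), a cluster's points projected onto the vertical plane between two poles could be fitted as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """2D catenary: a is the catenary parameter, (x0, c) its low point."""
    return c + a * (np.cosh((x - x0) / a) - 1.0)

# Hypothetical LiDAR returns for one clustered conductor, projected onto the
# vertical plane between its two attachment poles (x: horizontal, z: height).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 80.0, 60)
z = catenary(x, a=300.0, x0=40.0, c=12.0) + rng.normal(scale=0.05, size=x.size)

(a_fit, x0_fit, c_fit), _ = curve_fit(catenary, x, z, p0=(200.0, 40.0, 12.0))
print(f"fitted catenary parameter a = {a_fit:.1f} m, low point at x = {x0_fit:.1f} m")
```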
Procedia PDF Downloads 99
4523 Efficacy and Mechanisms of Acupuncture for Depression: A Meta-Analysis of Clinical and Preclinical Evidence
Authors: Yimeng Zhang
Abstract:
Major depressive disorder (MDD) is a prevalent mental health condition with a substantial economic impact and limited treatment options. Acupuncture has gained attention as a promising non-pharmacological intervention for alleviating depressive symptoms. However, its mechanisms and clinical effectiveness remain incompletely understood. This meta-analysis aims to (1) synthesize existing evidence on the mechanisms and clinical effectiveness of acupuncture for depression and (2) compare these findings with pharmacological interventions, providing insights for future research. Evidence from animal models and clinical studies indicates that acupuncture may enhance hippocampal and network neuroplasticity and reduce brain inflammation, potentially alleviating depressive disorders. Clinical studies suggest that acupuncture can effectively relieve primary depression, particularly in milder cases, and is beneficial in managing post-stroke depression, pain-related depression, and postpartum depression, both as a standalone and adjunctive treatment. Notably, combining acupuncture with antidepressant pharmacotherapy appears to enhance treatment outcomes and reduce medication side effects, addressing a critical issue in conventional drug therapy's high dropout rates. This meta-analysis, encompassing 12 studies and 710 participants, draws data from eight digital databases (PubMed, EMBASE, Web of Science, EBSCOhost, CNKI, CBM, Wangfang, and CQVIP) covering the period from 2012 to 2022. Utilizing Stata software 15.0, the meta-analysis employed random-effects and fixed-effects models to assess the distribution of depression in Traditional Chinese Medicine (TCM). The results underscore the substantial evidence supporting acupuncture's beneficial effects on depression. However, the small sample sizes of many clinical trials raise concerns about the generalizability of the findings, indicating a need for further research to validate these outcomes and optimize acupuncture's role in treating depression.Keywords: Chinese medicine, acupuncture, depression, meta-analysis
Procedia PDF Downloads 35
4522 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of the two processes of railway operation and the preceding timetabling. Occupation conflicts are often solved based on defined train priorities on both process levels. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations. It is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. It is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end customers. These models evaluate the knock-on delays in more detail than linear monetary functions and consider other competing modes of transport. The new approach pursues the coupling of a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers change over to other transport modes. The new approach first looks at rail-bound and road transport, but it can also be extended to air transport. The split of the end customers is described by the modal split. The reactions of the end customers have an effect on the revenues of the railway undertakings. Various travel purposes have different time reserves and tolerances towards delays. Longer journey times cause additional costs in addition to revenue changes. The costs depend either on time or on track distance and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays. The contribution margin is calculated for different resolution decisions of the same conflict. The conflict resolution is improved until the monetary loss is minimised. The iterative process therefore determines an optimum conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value of each dispatching decision can also be determined. Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
Procedia PDF Downloads 328
4521 Acoustic Finite Element Analysis of a Slit Model with Consideration of Air Viscosity
Authors: M. Sasajima, M. Watanabe, T. Yamaguchi, Y. Kurosawa, Y. Koike
Abstract:
In very narrow pathways, the speed of sound propagation and the phase of sound waves change due to the air viscosity. We have developed a new Finite Element Method (FEM) that includes the effects of air viscosity for modeling a narrow sound pathway. This method is developed as an extension of the existing FEM for porous sound-absorbing materials. The numerical calculation results for several three-dimensional slit models using the proposed FEM are validated against existing calculation methods.Keywords: simulation, FEM, air viscosity, slit
Procedia PDF Downloads 369
4520 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.Keywords: ip, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 19
4519 Multi-National Corporations and International Communication. An Analysis of Arçelik globals’ Online Presences
Authors: Aisha Iddrsiu
Abstract:
Public Relations (PR) has rapidly evolved around the world, just as companies have expanded to reach other parts of the world. With most multinational corporations conducting businesses in more than one country, only a few of these Multinational Corporations (MNC’s) are actual public relations firms, many have public relations departments or divisions that conduct public relations practices internationally. Hence international public relations is seen as a fast-growing specialty in the field of Public Relations. Multinational companies have devised strategies to effectively communicate and execute their roles within and between foreign publics and other cultures in which they operate through various means including the internet which is among the major inventions that have enabled corporations to establish their presents while targeting anonymous and diverse publics from varied cultures. International public relations practitioners rely on strategies coupled with internet use to communicate among and with foreign publics. Corporate websites and various social media handles have served as an important channel for public relations activities targeting both internal and international publics. In an incessant expansion of corporations and interactions with the publics from different cultures, it has become eminent to understand the public relation strategies used by MNCs in their international communication. This study therefore seeks to establish the international public relation strategies or models employed by Multinational Corporations specifically Arcelik Global in the management of its subsidiaries and communicating with international public. This study analyses both Arçelik global’s (one of the largest multinational companies in Turkey) website and social media accounts to understand the management strategy used with it subsidiary as well as strategies used to communicate with its global and local publics. Other underlying objective of this study are, 1. To examine the dominant international public relations models used by Multinational Corporations (Arcelik global). 2. To understand how Multinational Corporations manage (Arcelik global) its subsidiaries. 3. To understand how Multinational Corporations (Arcelik global) communicate with international or global publics. Research Questions 1. The main global PR strategies employed by multinational corporations (Arcelik global) 2. How subsidiaries of multinational corporations like Arcelik Global are managed. 3. How multinational corporations, like Arcelik worldwide, interact with international publics.Keywords: multinational corporation, ethnocentric model, polycentric model, international public relations
Procedia PDF Downloads 85
4518 Dynamic Model for Forecasting Rainfall Induced Landslides
Authors: R. Premasiri, W. A. H. A. Abeygunasekara, S. M. Hewavidana, T. Jananthan, R. M. S. Madawala, K. Vaheeshan
Abstract:
Forecasting the potential for disastrous events such as landslides has become one of the major necessities in the current world. Above all, the landslides occurring in Sri Lanka are found to be triggered mostly by intense rainfall events. The study area is the landslide near the Gerandiella waterfall, which is located by the 41st kilometre post on the Nuwara Eliya-Gampala main road in the Kotmale Division in Sri Lanka. The landslide endangers the entire Kotmale town beneath the slope. The Geographic Information System (GIS) platform is very useful when it comes to the need to emulate real-world processes. Such models are used in a wide array of applications, ranging from simple evaluations to forecasting future events. This project investigates the possibility of developing a dynamic model to map the spatial distribution of slope stability. The model incorporates several theoretical models, including the infinite slope model, the Green-Ampt infiltration model, and a perched groundwater flow model. A series of rainfall values can be fed to the model as the main input to simulate the dynamics of slope stability. A hydrological model developed using GIS is used to quantify the perched water table height, which is one of the most critical parameters affecting slope stability. The infinite slope stability model is used to quantify the degree of slope stability in terms of a factor of safety. A DEM was built with the use of digitized contour data. The stratigraphy was modeled in Surfer using borehole data and resistivity images. Data available from rainfall gauges and piezometers were used in calibrating the model. During the calibration, the parameters were adjusted until a good fit between the simulated groundwater levels and the piezometer readings was obtained. This model, fed with predicted rainfall values, can be used to forecast the slope dynamics of the area of interest. Therefore, the slope stability associated with rainfall-induced landslides can be investigated by adjusting the temporal dimensions. Keywords: factor of safety, geographic information system, hydrological model, slope stability
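For context (a common textbook form of the infinite slope model, using generic symbols rather than the study's own notation), the factor of safety for a planar failure surface at depth $z$ on a slope of angle $\beta$ can be written as:

```latex
FS = \frac{c' + \left(\gamma z \cos^{2}\beta - u\right)\tan\phi'}{\gamma z \sin\beta \cos\beta},
```

where $c'$ is the effective cohesion, $\phi'$ the effective friction angle, $\gamma$ the unit weight of the soil, and $u$ the pore-water pressure at the failure surface, which rises with the perched water table height computed by the hydrological model; $FS < 1$ indicates instability.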
Procedia PDF Downloads 423
4517 3D-Shape-Perception Studied Exemplarily with Tetrahedron and Icosahedron as Prototypes of the Polarities Sharp versus Round
Authors: Iris Sauerbrei, Jörg Trojan, Erich Lehner
Abstract:
Introduction and significance of the study: This study examines if three-dimensional shapes elicit distinct patterns of perceptions. If so, it is relevant for all fields of design, especially for the design of the built environment. Description of basic methodologies: The five platonic solids are the geometrical base for all other three-dimensional shapes, among which tetrahedron and icosahedron provide the clearest representation of the qualities sharp and round. The component pair of attributes ‘sharp versus round’ has already been examined in various surveys in a psychology of perception and in neuroscience by means of graphics, images of products of daily use, as well as by photographs and walk-through-videos of landscapes and architecture. To verify a transfer of outcomes of the existing surveys to the perception of three-dimensional shapes, walk-in models (total height 2.2m) of tetrahedron and icosahedron were set up in a public park in Frankfurt am Main, Germany. Preferences of park visitors were tested by questionnaire; also they were asked to write down associations in a free text. In summer 2015, the tetrahedron was assembled eight times, the icosahedron seven times. In total 288 participants took part in the study; 116 rated the tetrahedron, 172 rated the icosahedron. Findings: Preliminary analyses of the collected data using Wilcoxon Rank-Sum tests show that the perceptions of the two solids differ in respect to several attributes and that each of the tested model show significance for specific attributes. Conclusion: These findings confirm the assumptions and provide first evidence that the perception of three-dimensional shapes are associated to characteristic attributes and to which. In order to enable conscious choices for spatial arrangements in design processes for the built environment, future studies should examine attributes for the other three basic bodies - Octahedron, Cube, and Dodecahedron. Additionally, similarities and differences between the perceptions of two- and three-dimensional shapes as well as shapes that are more complex need further research.Keywords: 3D shapes, architecture, geometrical features, space perception, walk-in models
Procedia PDF Downloads 228
4516 Training to Evaluate Creative Activity in a Training Context, Analysis of a Learner Evaluation Model
Authors: Massy Guillaume
Abstract:
Introduction: The implementation of creativity in educational policies or curricula raises several issues, including the evaluation of creativity and the means to do so. This doctoral research focuses on the appropriation and transposition of creativity assessment models by future teachers. Our objective is to identify the elements of the models that are most transferable to practice in order to improve their implementation in the students' curriculum while seeking to create a new model for assessing creativity in the school environment. Methods: In order to meet our objective, this preliminary quantitative exploratory study by questionnaire was conducted at two points in the participants' training: at the beginning of the training module and throughout the practical work. The population is composed of 40 people of diverse origins with an average age of 26 (s:8,623) years. In order to be as close as possible to our research objective and to test our questionnaires, we set up a pre-test phase during the spring semester of 2022. Results: The results presented focus on aspects of the OECD Creative Competencies Assessment Model. Overall, 72% of participants support the model's focus on skill levels as appropriate for the school context. More specifically, the data indicate that the separation of production and process in the rubric facilitates observation by the assessor. From the point of view of transposing the grid into teaching practice, the participants emphasised that production is easier to plan and observe in students than in the process. This difference is reinforced by a lack of knowledge about certain concepts such as innovation or risktaking in schools. Finally, the qualitative results indicate that the addition of multiple levels of competencies to the OECD rubric would allow for better implementation in the classroom. Conclusion: The identification by the students of the elements allowing the evaluation of creativity in the school environment generates an innovative approach to the training contents. These first data, from the test phase of our research, demonstrate the difficulty that exists between the implementation of an evaluation model in a training program and its potential transposition by future teachers.Keywords: creativity, evaluation, schooling, training
Procedia PDF Downloads 95
4515 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
The models of the modern Artificial Narrow Intelligence (ANI) cannot: a) independently and continuously function without of human intelligence, used for retraining and reprogramming the ANI’s models, and b) think, understand, be conscious, cognize, infer, and more in state of Uncertainty, and changes in situations, and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a Conception, Model, and Method of Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System (CPCSFSCBMSUU) using a neural network as its computational memory, operating under uncertainty, and activating its functions by perception, identification of real objects, fuzzy situational control, forming images of these objects, modeling their psychological, linguistic, cognitive, and neural values of properties and features, the meanings of which are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images, accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems of the: fuzzy situational control of all processes, computational perception, identifying of reactions and actions, Psycholinguistic Cognitive Fuzzy Logical Inference, Decision making, Reasoning, Systems Thinking, Planning, Awareness, Consciousness, Cognition, Intuition, Wisdom, analysis and processing of the psycholinguistic, subject, visual, signal, sound and other objects, accumulation and using the data, information and knowledge in the Memory, communication, and interaction with other computing systems, robots and humans in order of solving the joint tasks. To investigate the functional processes of the proposed system, the principles of Situational Control, Fuzzy Logic, Psycholinguistics, Informatics, and modern possibilities of Data Science were applied. The proposed self-controlled System of Brain and Mind is oriented on use as a plug-in in multilingual subject Applications.Keywords: computational brain, mind, psycholinguistic, system, under uncertainty
Procedia PDF Downloads 1774514 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints. The paper aims to demonstrate how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. The findings of this research will contribute to a modern, comprehensive approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodetic data and collaborative decision-making could be introduced in the early phases of the design process. This gives designers powerful tools to explore diverse design possibilities, significantly improving the quality of the investment during its entire lifecycle.Keywords: architectural design, residential buildings, urban development, geodetic data, generative design, parametric models, workflow optimization
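The abstract describes exploring design variants from predefined parameters (cost, dimensions, orientation) and constraints. The sketch below shows one common way such a parametric search can be organised: enumerate a grid of parameter combinations, discard variants that violate the constraints, and rank the rest by a cost criterion. It is not the authors' framework; the parameter ranges, cost rate, and constraint values are assumed for illustration.

```python
# Illustrative sketch of a parametric variant search for a residential block
# (assumed parameters and constraints; not the authors' actual framework).
from itertools import product

FLOOR_HEIGHT = 3.0        # m, assumed storey height
COST_PER_M3 = 450.0       # currency units per m3 of built volume, assumed
MAX_HEIGHT = 24.0         # m, assumed zoning height limit
MIN_FLOOR_AREA = 2000.0   # m2, assumed programme requirement

def evaluate(width, depth, floors, orientation_deg):
    """Derive height, floor area, cost, and feasibility for one variant."""
    height = floors * FLOOR_HEIGHT
    floor_area = width * depth * floors
    cost = width * depth * height * COST_PER_M3
    feasible = height <= MAX_HEIGHT and floor_area >= MIN_FLOOR_AREA
    return {"width": width, "depth": depth, "floors": floors,
            "orientation": orientation_deg, "floor_area": floor_area,
            "cost": cost, "feasible": feasible}

# Enumerate a coarse grid of design variants and rank the feasible ones by cost.
variants = [evaluate(w, d, n, o)
            for w, d, n, o in product((20, 30, 40), (12, 16), (4, 6, 8), (0, 90))]
feasible = sorted((v for v in variants if v["feasible"]), key=lambda v: v["cost"])

for v in feasible[:3]:
    print(v)
```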
Procedia PDF Downloads 124513 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby compromising normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
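The constraint-based prediction described here rests on flux balance analysis: fluxes are optimised subject to a steady-state constraint S·v = 0 and bounds on individual reactions. The toy sketch below is not the curated mitochondrial reconstruction used in the study; it uses an invented three-metabolite network with approximate ATP yields simply to show how reducing the oxidative phosphorylation capacity shifts the optimum towards glycolytic ATP, mirroring the compensatory switching described in the abstract.

```python
# Toy flux balance analysis sketch (illustrative only; not the curated
# mitochondrial reconstruction used in the study). ATP production is maximised
# subject to steady state (S·v = 0) and flux bounds; lowering the oxidative
# phosphorylation capacity shifts the optimum towards glycolytic ATP.
import numpy as np
from scipy.optimize import linprog

# Rows: glucose, pyruvate, ATP. Columns: glucose uptake, glycolysis,
# oxidative phosphorylation, lactate export, ATP demand.
S = np.array([
    [1, -1,  0,  0,  0],   # glucose
    [0,  2, -2, -1,  0],   # pyruvate
    [0,  2, 28,  0, -1],   # ATP (approximate stoichiometric yields)
])
c = np.array([0, 0, 0, 0, -1.0])     # maximise ATP demand (linprog minimises)

def optimal_fluxes(oxphos_capacity):
    """Solve the linear programme for a given oxidative phosphorylation bound."""
    bounds = [(0, 10), (0, 1000), (0, oxphos_capacity), (0, 1000), (0, 1000)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return np.round(res.x, 2)

print("healthy  :", optimal_fluxes(oxphos_capacity=5.0))
print("impaired :", optimal_fluxes(oxphos_capacity=1.0))
```

In this toy scenario the impaired case keeps glycolysis at its maximum while oxidative ATP drops, so the glycolytic share of total ATP rises, which is the qualitative behaviour the abstract reports for synaptic mitochondria.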
Procedia PDF Downloads 2944512 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data
Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery
Abstract:
Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied to characterize the groundwater aquifers in the El Sadat industrial area. El Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo–Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources, so it is important to maintain this vital resource in order to sustain the development plans of the city. In this study, VES and TEM data were measured at the same 24 stations along three profiles trending NE–SW, parallel to the elongation of the study area. The measuring points were arranged in a grid-like pattern with both an inter-station spacing and a line-to-line distance of about 2 km. After the necessary processing steps, the VES and TEM data sets were first inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion succeeded in overcoming the model-equivalence problem encountered in the inversion of the individual data sets. The joint models were then used to construct a number of cross sections and contour maps showing the lateral and vertical distribution of the geo-electrical parameters in the subsurface medium. Interpretation of the results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values. The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features interrupt this continuity across the investigated area.Keywords: El Sadat city, joint inversion, VES, TEM
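Joint inversion of the kind described here is commonly set up as the minimisation of a combined misfit: the residuals of both data sets, each normalised and weighted, are stacked and fitted by a single layered model, which is what suppresses the equivalence ambiguity of either data set alone. The sketch below illustrates only that combined-objective structure; the forward operators are toy linear stand-ins rather than real VES and TEM responses, and the data, weights, and model size are hypothetical.

```python
# Schematic sketch of joint inversion of two data sets for one layered model
# (toy linear forward operators stand in for the real VES and TEM responses;
# data, weights, and model size are hypothetical).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_layers = 3

# Stand-in forward operators: in practice these would be the VES and TEM
# forward modelling routines for a layered half-space.
A_ves = rng.uniform(0.5, 1.5, size=(8, n_layers))
A_tem = rng.uniform(0.5, 1.5, size=(12, n_layers))

m_true = np.array([50.0, 10.0, 100.0])            # layer resistivities (ohm·m)
d_ves = A_ves @ m_true + rng.normal(0, 1.0, 8)    # noisy "VES" data
d_tem = A_tem @ m_true + rng.normal(0, 1.0, 12)   # noisy "TEM" data

def joint_residuals(m, w_ves=1.0, w_tem=1.0):
    """Stack the weighted, normalised misfits of both data sets."""
    r_ves = w_ves * (A_ves @ m - d_ves) / np.abs(d_ves)
    r_tem = w_tem * (A_tem @ m - d_tem) / np.abs(d_tem)
    return np.concatenate([r_ves, r_tem])

m0 = np.full(n_layers, 30.0)                      # starting model
result = least_squares(joint_residuals, m0, bounds=(1.0, 1000.0))
print("recovered layer resistivities:", np.round(result.x, 1))
```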
Procedia PDF Downloads 3704511 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings
Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol
Abstract:
After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for the form generation of tall buildings all over the world. This trend has recently brought out diverse building forms such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities related to the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through case study research, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed and the results compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation have been evaluated in conjunction with the architectural design. Aerodynamic effects have been analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects on global and local scales. Comparisons of flat- and real-surface models were conducted to further evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind direction and particularly in the across-wind direction. Compared to the prismatic counterpart, the twisting model is superior in reducing the vortex-shedding dynamic response by disorganizing the wind vortices. Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable, with their attractive images, in the realm of tall buildings.Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation
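The across-wind advantage reported here is tied to vortex shedding, whose excitation frequency for a prismatic building is commonly estimated from the Strouhal relation f_s = St · U / B and compared with the building's natural frequency. The snippet below is a back-of-envelope illustration only; the Strouhal number, wind speed, breadth, and natural frequency are assumed values, not properties of the building studied in the paper.

```python
# Back-of-envelope vortex shedding check (all values assumed for illustration;
# not the studied building's properties). Resonance risk is highest when the
# shedding frequency approaches the building's first-mode natural frequency.
STROUHAL = 0.12        # typical for a square cross-section, assumed
WIND_SPEED = 35.0      # m/s, assumed design wind speed near the building top
BREADTH = 40.0         # m, assumed across-wind breadth
NATURAL_FREQ = 0.18    # Hz, assumed first-mode frequency

shedding_freq = STROUHAL * WIND_SPEED / BREADTH      # f_s = St * U / B
critical_speed = NATURAL_FREQ * BREADTH / STROUHAL   # wind speed at resonance

print(f"shedding frequency ≈ {shedding_freq:.3f} Hz")
print(f"critical wind speed ≈ {critical_speed:.1f} m/s")
```

A twisting form disrupts the coherence of this shedding along the height, which is why the abstract attributes the reduced across-wind dynamic response to the disorganisation of the wind vortices.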
Procedia PDF Downloads 234