Search results for: dynamic modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5548

1078 Holistic Approach to Assess the Potential of Using Traditional and Advanced Insulation Materials for Energy Retrofit of Office Buildings

Authors: Marco Picco, Mahmood Alam

Abstract:

Improving the energy performance of existing buildings can be challenging, particularly when facades cannot be modified and the only available option is internal insulation. In such cases, the choice of the most suitable material becomes increasingly complex: in addition to thermal transmittance and capital cost, the designer needs to account for the impact of the intervention on the internal spaces, and in particular the loss of usable space due to the additional layers of material installed. This paper explores this issue by analysing a case study of a typical office building that needs refurbishment in order to meet the energy efficiency limits imposed by current building regulations. The building is simulated through dynamic performance simulation under three different climate conditions in order to evaluate its energy needs. The use of Vacuum Insulated Panels (VIPs) as an option for energy refurbishment is compared to traditional insulation materials (XPS, mineral wool). For each scenario, energy consumption is calculated and, in combination with the expected capital costs, used to perform a financial feasibility analysis. A holistic approach is proposed that accounts for the impact of the intervention on internal space by quantifying the value of the lost usable space and including it in the financial feasibility analysis. The proposed approach highlights how taking different drivers into account leads to the choice of different insulation materials, showing how accounting for the economic value of space can make VIPs an attractive solution for energy retrofitting under various climate conditions.

Keywords: vacuum insulated panels, building performance simulation, payback period, building energy retrofit

Procedia PDF Downloads 154
1077 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings

Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez

Abstract:

Optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the seismic ground motion characteristics to which the buildings are subjected, which are closely related to the soil conditions at the site. The influence of the type of soil on those load factors is analyzed in the present study. A methodology is employed for establishing optimal load factors that minimize the cost over the life cycle of the structure, under the restriction that the probability of structural failure must be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7-, and 10-story buildings) is located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on soft soil (Ts = 1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm zone are analyzed under the action of 10 strong seismic records, and those in the soft zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitude M = 6.9. Then, the structural damage and the expected total costs corresponding to each group of buildings are estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground differs from that for buildings located on soft soil.
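The cost minimisation with a failure-probability restriction described in this abstract can be sketched in a few lines. All inputs below (candidate load-factor combinations, construction costs, annual failure probabilities, damage cost and discount rate) are invented for illustration and are not taken from the study:

```python
# Sketch: pick the load-factor combination that minimises expected
# life-cycle cost, subject to a cap on failure probability.
# All numbers are hypothetical, not the study's data.

def expected_life_cycle_cost(initial_cost, p_failure, damage_cost,
                             discount=0.05, years=50):
    """Initial cost plus discounted expected failure losses over the life."""
    annual_expected_loss = p_failure * damage_cost
    present_value_losses = sum(annual_expected_loss / (1 + discount) ** t
                               for t in range(1, years + 1))
    return initial_cost + present_value_losses

# Candidate (dead, live, seismic) load-factor combinations with assumed
# construction costs and annual failure probabilities for one site class.
candidates = [
    {"factors": (1.2, 1.4, 1.0), "cost": 100.0, "p_fail": 2e-3},
    {"factors": (1.3, 1.5, 1.1), "cost": 108.0, "p_fail": 8e-4},
    {"factors": (1.4, 1.6, 1.2), "cost": 118.0, "p_fail": 3e-4},
]

P_MAX = 1e-3  # prescribed admissible failure probability (the restriction)
feasible = [c for c in candidates if c["p_fail"] <= P_MAX]
best = min(feasible, key=lambda c: expected_life_cycle_cost(
    c["cost"], c["p_fail"], damage_cost=500.0))
print(best["factors"])
```

Stronger load factors raise the initial cost but lower the expected failure losses; the optimum balances the two while respecting the failure-probability cap, which is the trade-off the study evaluates per soil type.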

Keywords: life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil

Procedia PDF Downloads 306
1076 A Study of Soft Soil Improvement by Using Lime Grit

Authors: Ashim Kanti Dey, Briti Sundar Bhowmik

Abstract:

This paper presents an approach to improving soft soil by using lime grit, which is normally produced as a waste product in paper manufacturing industries. This waste material cannot be used as a construction material because of its light weight, uniform size and poor compaction control. Given the scarcity of land, effective disposal of lime grit is a major concern for all paper manufacturing industries. Considering its non-plasticity and high permeability characteristics, lime grit may suitably be used as a drainage material for speedy consolidation of cohesive soil. It can also be used to improve the bearing capacity of soft clay. An attempt has been made in this paper to show the usefulness of lime grit in improving the bearing capacity of shallow foundations resting on soft clayey soil. A series of unconsolidated undrained cyclic triaxial tests performed at different area ratios and at three different water contents shows that dynamic shear modulus and damping ratio can be substantially improved with lime grit. Improvement is observed to be greater at higher area ratios and higher water contents. Static triaxial tests were also conducted on lime grit reinforced clayey soil after application of 50 load cycles to determine the effect of lime grit columns on cyclically loaded clayey soils. It is observed that the degradation is less for lime grit stabilized soil. A study of model tests with different area ratios of lime column installation is also included to examine the field behaviour of lime grit reinforced soil.

Keywords: lime grit column, area ratio, shear modulus, damping ratio, strength ratio, improvement factor, degradation factor

Procedia PDF Downloads 503
1075 Economic Recession and Its Psychological Effects on Educated Youth: A Case Study of Pakistan

Authors: Aroona Hashmi

Abstract:

An economic recession can lead people to feel more insecure about their financial situation. The series of events leading into a recession can be especially distressing for educated youth. One of the most salient factors linking economic recession to psychological distress is unemployment: educated young people in Pakistan face a particularly high unemployment rate. Young people are likely to become frustrated at the lack of opportunities available to them, and if the young population grows more rapidly than job opportunities, unemployment is likely to rise. The aim of the present study was to investigate the relationship between economic instability and the growing rates of aggression and frustration among educated youth. The study also aimed to find out the impact of increased economic instability on the learning abilities of students. Data were gathered from six universities in Punjab, Pakistan; the sample consisted of three hundred male and female university students. The data were analyzed by applying the Chi-square test. The results indicate that there is a significant relationship between low household income and the growing rate of aggression among educated youth, and that the increasing trend of economic instability significantly influences the learning abilities of students. The study concludes that feelings of deprivation produce frustration, which may be expressed through aggression. Therefore, if the factors responsible for youth unemployment in Pakistan are addressed, the psychological effects will be reduced. The right way of tackling the youth bulge is to turn the youth into a productive workforce. There is a dire need to align the education system with societal needs, while at the same time creating demand for the young workforce through dynamic changes in the economic structure.
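As a hedged illustration of the Chi-square test of independence used in the study, the following computes Pearson's statistic for a 2×2 income-by-aggression contingency table; the counts are invented for the example, not the study's data:

```python
# Illustrative Pearson chi-square test of independence on a hypothetical
# 2x2 table (household income level vs. reported aggression level).

def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: low vs. higher household income; columns: high vs. low aggression.
observed = [[90, 60],   # low income (invented counts)
            [60, 90]]   # higher income (invented counts)
stat = chi_square_statistic(observed)
CRITICAL_1DF_05 = 3.841  # chi-square critical value, df = 1, alpha = 0.05
print(stat > CRITICAL_1DF_05)
```

When the statistic exceeds the critical value, the null hypothesis of independence between income and aggression is rejected at the 5% level, which is the form of conclusion the abstract reports.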

Keywords: psychological effects, economic recession, educated youth, environmental factors

Procedia PDF Downloads 388
1074 Potential Risks of Using Disconnected Composite Foundation Systems in Active Seismic Zones

Authors: Mohamed ElMasry, Ahmad Ragheb, Tareq AbdelAziz, Mohamed Ghazy

Abstract:

Choosing a suitable infrastructure system is becoming more challenging with the contemporary increase in demand for heavier structures. Piled raft foundations have accordingly been widely used around the world to support heavy structures without extensive settlement. In this system, piles are rigidly connected to the raft, and most of the load goes to the soil layer on which the piles are bearing. However, when soil profiles contain thick soft clay layers near the surface, or at relatively shallow depths, it is unfavorable to use the rigid piled raft foundation system. Consequently, the disconnected piled raft system was introduced as an alternative to the rigidly connected system. In this system, piles are disconnected from the raft using a cushion of soil, mostly a granular interlayer. The cushion is used to redistribute the stresses between the piles and the subsoil. The piles also stiffen the subsoil and thereby reduce settlement without being rigidly connected to the raft. However, the effect of seismic loading on such disconnected foundation systems remains a problem, since soil profiles may include thick clay layers that raise the risk of amplification of dynamic earthquake loads. In this paper, the seismic behavior of connected and disconnected piled raft systems is studied through a numerical model using Midas GTS NX software. The study concerns the soil-structure interaction and the expected behavior of the systems. Advantages and disadvantages of each foundation approach are studied, and a comparison between the results is presented to show the effects of using disconnected piled raft systems in highly seismic zones, by showing the excitation amplification in each of the foundation systems.

Keywords: soil-structure interaction, disconnected piled-raft, risks, seismic zones

Procedia PDF Downloads 265
1073 Crab Shell Waste Chitosan-Based Thin Film for Acoustic Sensor Applications

Authors: Maydariana Ayuningtyas, Bambang Riyanto, Akhiruddin Maddu

Abstract:

Industrial waste from crustacean shells, such as shrimp and crab, is considered one of the major contributors to environmental pollution, and waste processing mechanisms that form new, practical substances with added value have been developed. Chitosan, derived from chitin obtained from crab and shrimp shells, performs well in a broad range of applications. A chitosan composite-based diaphragm offers a new direction in fiber optic acoustic sensor development: the elastic modulus, dynamic response, and sensitivity to acoustic waves of chitosan-based composite films give organic-based sound-detecting materials great potential. The objective of this research was to develop a chitosan diaphragm for a fiber optic microphone system. The formulation was conducted by blending 5% polyvinyl alcohol (PVA) solution with dissolved chitosan at 0%, 1% and 2% in a 1:1 ratio, respectively. The composite diaphragms were characterized for morphological and mechanical properties to predict the desired acoustic sensor sensitivity. The composite with 2% chitosan showed optimum performance, with a thickness of 242.55 µm, 67.9% relative humidity, and 29-76% light transmittance. The Young's modulus of the 2%-chitosan composite material was 4.89×10⁴ N/m², which generated a voltage amplitude of 0.013 V and a sensitivity of 3.28 mV/Pa at 1 kHz. Based on these results, chitosan from crustacean shell waste can be considered a viable alternative material for developing fiber optic acoustic sensor sensing pads. Further research into chitosan utilisation is proposed for developing a novel optical microphone as part of anthropogenic noise control efforts for environmental and biodiversity conservation.

Keywords: acoustic sensor, chitosan, composite, crab shell, diaphragm, waste utilisation

Procedia PDF Downloads 257
1072 Preparation of Nano-Scaled LiNbO3 by the Polyol Method

Authors: Gabriella Dravecz, László Péter, Zsolt Kis

Abstract:

The growth of optical LiNbO3 single crystals and their physical and chemical properties are well known on the macroscopic scale. Nowadays, rare-earth doped single crystals have become important for coherent quantum optical experiments: electromagnetically induced transparency, slowing down of light pulses, and coherent quantum memory. The expansion of applications increasingly requires the production of nano-scaled LiNbO3 particles. For example, rare-earth doped nano-scaled particles of lithium niobate can act as single photon sources, which could form the basis of a quantum-computer coding system that is completely inaccessible to eavesdroppers. The polyol method is a chemical synthesis in which oxide formation occurs instead of hydroxide formation because of the high temperature. Moreover, the polyol medium limits the growth and agglomeration of the grains, producing particles with diameters of 30-200 nm. In this work, nano-scaled LiNbO3 was prepared by the polyol method. The starting materials (niobium oxalate and LiOH) were dissolved in H2O2. The product was then suspended in ethylene glycol and heated to about the boiling point of the mixture with intensive stirring. After thermal equilibrium was reached, the mixture was kept at this temperature for 4 hours. The suspension was cooled overnight, then centrifuged and the particles filtered. A Dynamic Light Scattering (DLS) measurement was carried out, and the size of the particles was found to be 80-100 nm. This was confirmed by Scanning Electron Microscope (SEM) investigations, and elemental analysis in the SEM showed a large amount of Nb in the sample. The production of LiNbO3 nanoparticles by the polyol method was successful: agglomeration of the particles was avoided and a particle size of 80-100 nm was achieved.

Keywords: lithium-niobate, nanoparticles, polyol, SEM

Procedia PDF Downloads 134
1071 Research on the Aesthetic Characteristics of Calligraphy Art under a Cross-Cultural Background Based on Eye Tracking

Authors: Liu Yang

Abstract:

Calligraphy has a unique aesthetic value in Chinese traditional culture. Calligraphy reflects the physical beauty and the dynamic beauty of things through the structure of writing and the order of strokes, which standardize the style of writing. In recent years, Chinese researchers have studied the appreciation of calligraphy works from the perspective of psychology, for example how Chinese people appreciate the beauty of stippled lines, the beauty of the virtual and the real, and the beauty of composition. However, there is currently no domestic research on how foreigners appreciate Chinese calligraphy. People appreciate calligraphy mainly through visual perception, and psychologists have long used eye trackers to record eye tracking data and explore the relationship between eye movements and psychological activities. The purpose of this experimental study is to use eye tracking recorders to analyze the gaze trajectories of college students with different cultural backgrounds as they appreciate the same calligraphy work, in order to reveal differences in cognitive processing across cultural backgrounds. It was found that Chinese students perceived calligraphy as words when viewing calligraphy works, so they first noticed fonts with easily recognizable glyphs, and their overall viewing time was short. Foreign students perceived calligraphy works as graphics; they first noticed novel and abstract fonts, and their overall viewing time was longer. Understanding the content of a calligraphy work influences its appreciation by foreign students: when foreign students understand the content, their eye tracking path is more consistent with the calligraphy writing path, and this helps them develop associations with the work and better understand its connotation. This result helps us understand the impact of cultural background differences on calligraphy appreciation and suggests more effective strategies to help foreign audiences understand Chinese calligraphy art.

Keywords: Chinese calligraphy, eye-tracking, cross-cultural, cultural communication

Procedia PDF Downloads 107
1070 Clinical Supervisors' Experience of Supervising Nursing Students from a Higher Education Institution

Authors: J. Magerman, P. Martin

Abstract:

Nursing students' clinical abilities are highly dependent on the quality of the clinical experience obtained while placed in the clinical environment. The clinical environment has, among others, key role players which include the clinical supervisor. The primary role of the clinical supervisor is to guide nursing students to become best-practice nursing professionals. However, the global literature alludes to the failure of educating institutions to deliver competent nursing professionals who meet the needs of patients and deliver quality patient care. At the participating university, this may be due to various factors such as large student numbers and the social and environmental challenges experienced by clinical supervisors. The aim of this study was to explore and describe the lived experiences of clinical supervisors who supervise nursing students at a higher education institution. The study employed a qualitative research approach utilizing a descriptive phenomenological design. Purposive sampling was used to select participants; the sample comprised eight clinical supervisors who supervise first- and second-year nursing students at the higher education institution under study. Data were collected by means of in-depth interviews and analysed using Colaizzi's seven-step method of qualitative analysis. Five major themes were identified, focusing on experiences regarding time as a constraint to job productivity, the impact of the organisational culture on the fluidity of support, interpersonal relationships as a dynamic communication process, the impact on the self, and limited resources. Trustworthiness of the data was ensured by applying Guba's model of truth value, applicability, consistency and neutrality. The researcher also used reflexivity to further enhance trustworthiness.

Keywords: clinical supervision, clinical supervisors, nursing students, clinical placements

Procedia PDF Downloads 230
1069 Recognising the Importance of Smoking Cessation Support in Substance Misuse Patients

Authors: Shaine Mehta, Neelam Parmar, Patrick White, Mark Ashworth

Abstract:

Patients with a history of substance misuse have a high prevalence of comorbidities, including asthma and chronic obstructive pulmonary disease (COPD). Mortality rates are higher than in the general population, and a link to respiratory disease has been reported. Randomised controlled trials (RCTs) support opioid substitution therapy as an effective means of harm reduction. However, whilst a high proportion of patients receiving opioid substitution therapy are smokers, to the authors' best knowledge there have been no studies of respiratory disease and smoking intensity in these patients. A cross-sectional prevalence study was conducted using an anonymised patient-level database in primary care, Lambeth DataNet (LDN). We included patients aged 18 years and over who had records of ever having been prescribed methadone in primary care. Patients under 18 years old or prescribed buprenorphine (because of uncertainty about the prescribing indication) were excluded. Demographic, smoking, alcohol, asthma and COPD coding data were extracted. Differences between methadone and non-methadone users were explored with multivariable analysis. LDN contained data on 321,395 patients ≥ 18 years; 676 (0.16%) had a record of methadone prescription. Patients prescribed methadone were more likely to be male (70.7% vs. 50.4%), older (48.9 vs. 41.5 years) and less likely to be from an ethnic minority group (South Asian 2.1% vs. 7.8%; Black African 8.9% vs. 21.4%). Almost all those prescribed methadone were smokers or ex-smokers (97.3% vs. 40.9%); more were non-drinkers of alcohol (41.3% vs. 24.3%). We found a high prevalence of COPD (12.4% vs. 1.4%) and asthma (14.2% vs. 4.4%). Smoking intensity data show a high prevalence of ≥ 20 cigarettes per day (21.5% vs. 13.1%). The risk of COPD, adjusted for age, gender, ethnicity and deprivation, was raised in smokers (odds ratio 14.81, 95% CI 11.26-19.47) and in the methadone group (OR 7.51, 95% CI 5.78-9.77). Furthermore, after adjustment for smoking intensity (number of cigarettes per day), the risk remained raised in the methadone group (OR 4.77, 95% CI 3.13-7.28). This high burden of respiratory disease, compounded by high rates of smoking, is a public health concern. It supports an integrated approach to health in patients treated for opiate dependence, with access to smoking cessation support. Further work may evaluate the current structure and commissioning of substance misuse services, including smoking cessation. Regression modelling highlights that methadone as a 'risk factor' was independently associated with COPD prevalence, even after adjustment for smoking intensity. This merits further exploration, as the association may be related to unexplored aspects of smoking (such as the number of years smoked) or to other related exposures, such as smoking heroin or crack cocaine.
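The adjusted odds ratios quoted in the abstract come from multivariable regression, but the unadjusted form of an odds ratio with its 95% confidence interval can be illustrated from a simple 2×2 table; the counts below are hypothetical, not LDN data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf's log-OR method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts: disease (cases) in an exposed vs. unexposed group.
or_, lower, upper = odds_ratio_ci(20, 80, 10, 90)
print(or_, lower, upper)
```

An interval that excludes 1 (as with the study's OR 4.77, 95% CI 3.13-7.28) indicates a statistically significant association at the 5% level.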

Keywords: methadone, respiratory disease, smoking cessation, substance misuse

Procedia PDF Downloads 145
1068 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for a regional blood bank: rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
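The UFLP structure described above (fixed opening costs plus allocation costs, with each demand node served by its cheapest open facility) can be sketched by brute-force enumeration; the site names, costs and distances are invented, not the Ontario data, and real instances would use Lagrangian relaxation or branch and bound rather than enumeration:

```python
from itertools import combinations

# Toy UFLP: open a subset of candidate sites, pay their fixed costs,
# and assign every demand node to its cheapest open facility.
fixed_cost = {"A": 10.0, "B": 12.0, "C": 8.0}  # hypothetical opening costs
# Hypothetical allocation cost from each demand node to each site.
alloc = {
    "n1": {"A": 2.0, "B": 6.0, "C": 9.0},
    "n2": {"A": 5.0, "B": 1.0, "C": 7.0},
    "n3": {"A": 8.0, "B": 3.0, "C": 2.0},
}

def total_cost(open_sites):
    """Fixed costs of the open sites plus cheapest-assignment costs."""
    assignment = sum(min(alloc[n][s] for s in open_sites) for n in alloc)
    return sum(fixed_cost[s] for s in open_sites) + assignment

sites = list(fixed_cost)
best = min(
    (frozenset(combo)
     for r in range(1, len(sites) + 1)
     for combo in combinations(sites, r)),
    key=total_cost,
)
print(sorted(best), total_cost(best))
```

Enumeration is exponential in the number of sites, which is why the paper relies on Lagrangian relaxation and branch-and-bound heuristics for realistic network sizes.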

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 506
1067 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context

Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire

Abstract:

The goal of this study was to test whether and how emotions influence drone operators' estimation skills. The empirical study was run in the context of numerical estimation. Participants saw a two-digit number together with a collection of cars and had to indicate whether the collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were smaller collections (fewer than 20 cars) and the other half larger collections (more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or larger (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than the number indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle), selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say whether the second picture was the same as or different from the first. Results showed different effects of emotions on RTs and percent errors. Participants' performance was modulated by emotions: they were slower on negative trials than on neutral trials, especially on the most difficult items, and they made more errors on small-split than on large-split problems. Moreover, participants strongly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimulus characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.

Keywords: drone operators, emotion, numerical estimation, arithmetic

Procedia PDF Downloads 116
1066 Management of Interdependence in Manufacturing Networks

Authors: Atour Taghipour

Abstract:

In the real world, each manufacturing company is an independent business unit. These business units are linked to each other through upstream and downstream linkages. The management of these linkages is called coordination, which can be considered a difficult engineering task. The degree of difficulty of coordination depends on the type and nature of the information exchanged between partners, as well as on the structure of the relationship, from mutual (two-party) relations to network structures. The manufacturing systems literature comprises a wide variety of coordination methods and approaches. In fact, two main streams of research can be distinguished: centralized versus decentralized coordination. Centralized systems require a high degree of information exchange, which sometimes leads to difficulties when independent members do not want to share information. In order to address these difficulties, decentralized approaches to coordinating operations planning decisions, based on minimal information sharing, have been proposed in many academic disciplines. This paper first proposes a framework for analyzing the approaches in the literature; based on this framework, which captures the similarities between approaches, we categorize the existing approaches. This classification can be used as a research map for future research, and our results highlight several opportunities. First, we propose to develop more dynamic and stochastic mechanisms for coordinating the planning of manufacturing units. Second, in order to exploit the complementarities of approaches proposed by diverse scientific disciplines, we propose to integrate the coordination techniques. Finally, we propose to develop coordination standards that guarantee both the complementarity of these approaches and the freedom of companies to adopt any planning tools.

Keywords: network coordination, manufacturing, operations planning, supply chain

Procedia PDF Downloads 281
1065 The Concept of Female Beauty in Contemporary (2000-2020) Fine Arts and Design

Authors: Maria Ukolova

Abstract:

Social and cultural processes over the past decades have largely affected the understanding of conventional female beauty all over the world. Fine arts and design tendencies could not remain unchanged and show a dynamic interplay with women's rights, gender equality, and other social processes. As of now, the area lacks comprehensive academic research on how the understanding of female beauty has evolved in contemporary art. This article attempts to outline and analyse the main tendencies of contemporary works of art that turn to the image of the woman, including painting, performing arts, photography, digital art, and various forms of design. The empirical basis of the research consists of works selected mainly on the principle of the broadest resonance in society, together with existing research in the sphere. The results show a general trend: the concept of female beauty in art is either challenged as such, or its understanding has shifted towards individuality, diversity, and the state of mental health. However, some categories of art, such as digital art in the gaming industry, remain resistant to change and retain an appearance-based understanding of beauty. Specific tendencies are, firstly, the aestheticization of all types of appearance; secondly, a ubiquitous interest in mental health issues and an understanding of mental health as a part of beauty; and thirdly, a certain infantilization of the image of the woman compared to previous decades. The significance of these findings is to contribute to a scientific understanding of the concept of beauty in contemporary art and to lay the ground for further related research in sociology, psychology, etc. The findings may be of use not only to academics but also to artists and practitioners in the spheres of art and society.

Keywords: fine arts, history of art, contemporary art, concept of beauty

Procedia PDF Downloads 86
1064 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow adjacent to each other. In this type of flow, fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface between the fluids. The most challenging part of the numerical simulation of two-phase flow is determining the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The simulations show good accuracy of the algorithm despite using a fairly coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between fluid properties (especially dynamic viscosity) results in larger traveling waves. Gravity effect studies also show that favorable gravity reduces the thickness of the heavier fluid, while adverse gravity increases it with respect to the zero-gravity condition; however, the magnitude of the variation under favorable gravity is much greater than under adverse gravity.

Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow

Procedia PDF Downloads 315
1063 Analyzing the Effectiveness of Communication Practices and Processes within Project-Based Firms

Authors: Paul Saah, Charles Mbohwa, Nelson Sizwe Madonsela

Abstract:

The capacity to deliver projects on schedule, within budget, and to the client's satisfaction depends on effective communication, which is the lifeblood of project-based businesses. In order to pinpoint areas for development and shed light on the crucial role that communication plays in project success, the aim of this study is to evaluate the efficacy of communication practices and processes inside project-based organisations. In order to analyse concepts and gain a greater grasp of their theoretical basis, this study's methodology combines a careful review of the relevant literature with a conceptual analysis of the subject. Data from a varied sample of project-based businesses spanning all industries and sizes were collected via document analysis. The relationship between communication practices and processes was investigated in connection with key performance measures such as project outcomes, client satisfaction, and team dynamics. According to the study's findings, project-based businesses that adopt effective communication practices and procedures experience fewer unfavourable experiences, stronger integration and coordination, clarity of purpose, and practices that can hasten problem resolution. However, failing to adopt effective communication practices and procedures in project-based companies results in counterproductive issues, including project derailment from the schedule, failure to meet goals, inefficient use of existing resources, and failure to meet organisational goals. Therefore, optimising their communication practices and procedures is crucial for sustainable growth and competitive advantage as project-based enterprises continue to play a crucial part in today's dynamic business scene.

Keywords: effective communication, project-based firms, communication practices, project success, communication strategies

Procedia PDF Downloads 70
1062 Quantification of Lawsone and Adulterants in Commercial Henna Products

Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen

Abstract:

The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits, and the plant is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice and skin diseases in folk medicine. Although long used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years and has changed from being a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene and p-toluenodiamine to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The results from UPLC-MS revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone, 110 contained 0.1-0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna in relation to the lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone and toluene (0.5: 1.0: 8.5 v/v).
A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that they are of poor quality. Currently, the presence of adulterants that may have been added to enhance the dyeing properties of the products, is being investigated.

Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone

Procedia PDF Downloads 459
1061 Reflecting and Teaching on the Dialectical Nature of Social Work

Authors: Eli Buchbinder

Abstract:

Dialectics theory perceives two or more forces or themes as mutually opposed and negating on the one hand, and as interdependent for their definition, existence, and resolution on the other. Such opposites might never be fully reconciled but might, simultaneously, continue to produce a higher level of integration and synthesis as well as tension, contradictions, and paradoxes. The identity of social work is constructed by poles, an understanding that emerges through key concepts that shape the profession. The key concept of person-in-environment creates a dialectical tension between the psychological and the social pole, reflected in important examples that focus on the psychological versus the social nature of human beings. This meta-perspective influences and constructs the implementation of values, ways of intervention, and professional relationships, e.g., creating a conflict between personal/social empowerment and social control and correction as the aims of the profession. Social work is dynamic and changing, with a unique way of perceiving and conceptualizing human behavior. Social workers must be able to face and accept the contradictory elements inherent in practicing social work. Dialectic conceptualization is thus a basic philosophy for social work education. In light of the above, social work students require dialectics as a critical mode of perception, reflection, and intervention. The presentation will focus on reflections on teaching students to conceptualize dialectics as a frame when training to be social workers. This teaching should emphasize two points: 1) the need to assist students to identify poles and to analyze the interrelationships created between them while coping emotionally with the tension and difficulties involved in containing these poles; 2) teaching students to integrate poles as a basis for assessment, planning, and intervention.

Keywords: professional ontology, a generic social work education, skills and values of social work, reflecting on social work teaching methods

Procedia PDF Downloads 84
1060 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating the fine suspended sediment in surface water systems like streams and creeks. Fine suspended sediment is highly mobile, and many contaminants that may have been released by any sort of catchment disturbance attach themselves to these sediments. Therefore, a knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood Landform Evolution Model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess the sediment movement in tropical streams on account of a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek through the years by simulating the model for future years. The accuracy of future simulations depends on the calibration and validation of the model to the past and present events. Calibration and validation of the model involve finding a combination of parameters of the model, which, when applied and simulated, gives model outputs similar to those observed for the real site scenario for corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it for studying the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, to analyze how similar the model output would behave when compared with the discharge and sediment data for the output point of the catchment. The model parameters were then adjusted until the model closely approximated the real site values of the catchment. 
It was then validated by running the model for a different set of events and checking that the model gave similar results to the real site values. The outcomes demonstrated that while the model can be calibrated to a large extent for hydrology (discharge output) throughout the year, the sediment output calibration could be further improved by allowing parameters to vary so as to account for the seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
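The calibrate-then-validate loop described above can be sketched in miniature. The snippet below is only an illustration of the general idea (search a parameter space for the value that minimizes the mismatch between simulated and observed sediment output); the `simulate_sediment` function and its erodibility parameter are hypothetical stand-ins, not CAESAR-Lisflood.

```python
import math

def simulate_sediment(rainfall, erodibility):
    # Toy stand-in for a landform evolution model run: sediment flux
    # grows nonlinearly with rainfall, scaled by an erodibility parameter.
    return [erodibility * r ** 1.5 for r in rainfall]

def rmse(observed, simulated):
    # Root-mean-square error between observed and simulated series.
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

def calibrate(rainfall, observed, candidates):
    """Pick the parameter value whose simulated output best matches observations."""
    return min(candidates, key=lambda k: rmse(observed, simulate_sediment(rainfall, k)))

rainfall = [2.0, 5.0, 10.0, 20.0]           # event rainfall (illustrative units)
observed = [0.9, 3.4, 9.6, 26.9]            # "measured" sediment at the catchment outlet
best = calibrate(rainfall, observed, [0.1 + 0.05 * i for i in range(19)])
print(round(best, 2))
```

Validation then amounts to rerunning the model with the calibrated parameter on a different set of events and checking the error remains acceptable.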

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 84
1059 [Keynote Talk]: Caught in the Tractorbeam of Larger Influences: The Filtration of Innovation in Education Technology Design

Authors: Justin D. Olmanson, Fitsum Abebe, Valerie Jones, Eric Kyle, Xianquan Liu, Katherine Robbins, Guieswende Rouamba

Abstract:

The history of education technology--and of designing, adapting, and adopting technologies for use in educational spaces--is nuanced, complex, and dynamic. Yet, despite a range of continually emerging technologies, the design and development process often yields results that appear quite similar in terms of affordances and interactions. Through this study we (1) verify the extent to which designs have been constrained, (2) consider what might account for it, and (3) offer a way forward in terms of how we might identify and strategically sidestep these influences--thereby increasing the diversity of our designs with a given technology or within a particular learning domain. We begin our inquiry from the perspective that a host of co-influencing elements, fields, and meta narratives converge on the education technology design process to exert a tangible, often homogenizing effect on the resultant designs. We identify several elements that influence design in often implicit or unquestioned ways (e.g., curriculum, learning theory, economics, learning context, pedagogy), describe our methodology for identifying the elemental positionality embedded in a design, direct our analysis to a particular subset of technologies in the field of literacy, and unpack our findings. Our early analysis suggests that the majority of education technologies designed for or used in US public schools are heavily influenced by a handful of mainstream theories and meta narratives. These findings have implications for how we approach the education technology design process, which we use to suggest alternative methods for designing and developing with emerging technologies. Our analytical process and reconceptualized design process hold the potential to diversify the ways emerging and established technologies are incorporated into our designs.

Keywords: curriculum, design, innovation, meta narratives

Procedia PDF Downloads 509
1058 Solving a Micromouse Maze Using an Ant-Inspired Algorithm

Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira

Abstract:

This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the family of Swarm Intelligence-based algorithms, a subset of biologically inspired algorithms. The starting point is a problem in which one has a maze and needs to find a path to its center and return to the starting position. This is similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants’ movement while the algorithm optimizes the routes to the maze’s center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout all the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, and a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as those found by the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
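To make the pheromone mechanics concrete, here is a minimal ant colony maze solver in Python rather than Scratch. The maze, evaporation rate, and deposit rule are illustrative choices, not the parameters or update methods used by the authors; the sketch only shows the core loop of biased random walks, evaporation, and path-length-weighted deposits.

```python
import random

# 0 = free cell, 1 = wall; ants walk from START to GOAL.
MAZE = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, GOAL = (0, 0), (3, 3)
EVAPORATION, DEPOSIT, N_ANTS, N_ITER = 0.5, 1.0, 20, 30

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def walk(pheromone):
    """One ant walks from START to GOAL, biased by pheromone; returns its path."""
    path, seen = [START], {START}
    while path[-1] != GOAL:
        options = [n for n in neighbours(path[-1]) if n not in seen]
        if not options:                       # dead end: backtrack
            path.pop()
            if not path:
                return None                   # ant is stuck entirely
            continue
        weights = [pheromone.get((path[-1], n), 1.0) for n in options]
        path.append(random.choices(options, weights=weights)[0])
        seen.add(path[-1])
    return path

def ant_colony():
    random.seed(1)
    pheromone, best = {}, None
    for _ in range(N_ITER):
        paths = [p for p in (walk(pheromone) for _ in range(N_ANTS)) if p]
        for key in pheromone:                 # evaporation weakens stale trails
            pheromone[key] *= 1.0 - EVAPORATION
        for p in paths:                       # shorter paths deposit more pheromone
            for edge in zip(p, p[1:]):
                pheromone[edge] = pheromone.get(edge, 1.0) + DEPOSIT / len(p)
            if best is None or len(p) < len(best):
                best = p
    return best

best = ant_colony()
print(len(best) - 1)  # number of moves in the best route found → 6
```

On this 4x4 maze the colony converges to the 6-move optimum; a full micromouse maze would only change the grid and the tuning constants.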

Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking

Procedia PDF Downloads 126
1057 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables

Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck

Abstract:

The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). 
The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but on other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
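The R² k-fold(5) selection criterion described above can be sketched as follows. The data here are synthetic (an invented 0.35 m²/m³ wall-area-to-volume relation, not the Belgian database), and a plain one-variable least-squares fit stands in for the stepwise regression; the point is only the mechanics of scoring a model on held-out folds.

```python
import random

def fit_ols(xs, ys):
    """Simple least-squares line y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r2(ys, preds):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def kfold_r2(xs, ys, k=5):
    """Mean R² over k held-out folds, mirroring an R² k-fold(5) criterion."""
    idx = list(range(len(xs)))
    random.seed(0)
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for f in folds:
        train = [i for i in idx if i not in f]
        a, b = fit_ols([xs[i] for i in train], [ys[i] for i in train])
        scores.append(r2([ys[i] for i in f], [a + b * xs[i] for i in f]))
    return sum(scores) / k

# Synthetic stand-in data: wall area roughly proportional to building volume.
random.seed(42)
volumes = [random.uniform(250, 1200) for _ in range(200)]       # m³
wall_areas = [0.35 * v + random.gauss(0, 25) for v in volumes]  # m², hypothetical relation
print(round(kfold_r2(volumes, wall_areas), 2))
```

Competing models (volume only versus volume plus typology dummies) would each be scored this way, and the one with the best mean held-out R² selected.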

Keywords: buildings as material banks, building stock, estimation method, interior wall area

Procedia PDF Downloads 30
1056 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been taken a century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments of the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, much work remains to improve some regions, which are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 232
1055 Navigating a Changing Landscape: Opportunities for Research Managers

Authors: Samba Lamine Cisse, Cheick Oumar Tangara, Seynabou Sissoko, Mahamadou Diakite, Seydou Doumbia

Abstract:

Introduction: Over the past two decades, the world has been constantly changing, with new trends in project management. These trends are transforming the methods and priorities of research project management. They include the rise of digital technologies, multidisciplinarity, open science, and the pressure for high-impact results. Managers, therefore, find themselves at a crossroads between the challenges and opportunities offered by these new trends. This paper aims to identify the challenges and opportunities they face while proposing strategies for effectively navigating this dynamic context. Methodology: This is a qualitative study based on an analysis of the challenges and opportunities facing the University Clinical Research Center in terms of new technologies and project management methods. This approach provides an overview of emerging trends and practices. Results: This article shows how research managers can turn new research trends to their advantage and how they can adapt to the changes they face to optimize the productivity of research teams while ensuring the quality and ethics of the work. It also explores the importance of developing skills in data management, international collaboration, and innovation management. Finally, it proposes strategies for responding effectively to the challenges posed by these new trends while strengthening the position of research managers as essential facilitators of scientific progress. Conclusion: Navigating this changing landscape requires research managers to be highly flexible and able to anticipate the realities of their institution. By adopting modern project management methodologies and cultivating a culture of innovation, they can turn challenges into opportunities and propel research toward new horizons. This paper provides a strategic framework for overcoming current obstacles and capitalizing on future developments in research.

Keywords: new trends, research management, opportunities, challenges

Procedia PDF Downloads 11
1054 Application of the Shallow Seismic Refraction Technique to Characterize the Foundation Rocks at the Proposed Tushka New City Site, South Egypt

Authors: Abdelnasser Mohamed, R. Fat-Helbary, H. El Khashab, K. EL Faragawy

Abstract:

Tushka New City is one of the proposed new cities in South Egypt. It is located in the eastern part of the Western Desert of Egypt between latitudes 22.878º and 22.909º N and longitudes 31.525º and 31.635º E, about 60 kilometers from Abu Simbel City. The main target of the present study is the investigation of the shallow subsurface structural conditions and the dynamic characteristics of the subsurface rocks using the shallow seismic refraction technique. Forty seismic profiles were acquired to calculate the P- and S-wave velocities in the study area. P- and SH-wave velocities can be used to obtain geotechnical parameters, and the SH-wave can additionally be used to study the vibration characteristics of the near-surface layers, which are important for earthquake-resistant structure design. The results of the current study indicate that the P-wave velocity ranges from 450 to 1800 m/sec for the surface layer and from 1550 to 3000 m/sec for the bedrock layer. The SH-wave velocity ranges from 300 to 1100 m/sec for the surface layer and from 1000 to 1800 m/sec for the bedrock layer. The thickness of the surface layer and the depth to the bedrock layer were determined along each profile. The bulk density ρ of the soil layers used in this study was calculated for all layers at each profile in the study area. In conclusion, the area is mainly composed of compacted sandstone with high wave velocities, which is considered a good foundation rock. The southwestern part of the study area has the minimum values of the computed P- and SH-wave velocities, the minimum values of the bulk density, and the maximum value of the mean thickness of the surface layer.
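The dynamic characteristics mentioned above follow from standard elasticity relations: shear modulus G = ρVs², Poisson's ratio ν = (Vp² − 2Vs²) / (2(Vp² − Vs²)), and Young's modulus E = 2G(1 + ν). A small sketch, using velocity values within the bedrock ranges reported in the abstract but an assumed density (the paper's computed densities are not quoted here):

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic elastic parameters from body-wave velocities.

    vp, vs in m/s, rho in kg/m³; returns (shear modulus in Pa,
    Poisson's ratio, Young's modulus in Pa).
    """
    mu = rho * vs ** 2                                        # shear modulus
    nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))  # Poisson's ratio
    e = 2 * mu * (1 + nu)                                     # Young's modulus
    return mu, nu, e

# Illustrative bedrock values; the 2200 kg/m³ density is an assumption.
mu, nu, e = dynamic_moduli(vp=2500.0, vs=1400.0, rho=2200.0)
print(f"G = {mu/1e9:.2f} GPa, nu = {nu:.2f}, E = {e/1e9:.2f} GPa")
# → G = 4.31 GPa, nu = 0.27, E = 10.97 GPa
```

Applying the same relations profile by profile is how measured Vp, Vs, and ρ translate into the geotechnical parameters used for foundation assessment.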

Keywords: seismic refraction, Tushka New City, P-waves, SH-waves

Procedia PDF Downloads 381
1053 Applications of Evolutionary Optimization Methods in Reinforcement Learning

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
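A minimal sketch of the idea of optimizing an RL policy with evolutionary principles is given below, assuming a toy 1-D environment and a linear policy (both invented for illustration; the paper's application domains are far richer). A (1+1) evolution strategy mutates the policy weights and keeps the mutant whenever it earns at least the incumbent's cumulative reward, sidestepping gradient estimation entirely.

```python
import random

def episode_return(weights, steps=20):
    """Toy environment: an agent at position x must reach and hold the origin.
    Policy is linear: action = w0 * x + w1, clipped to [-1, 1]; reward is -|x| per step."""
    x, total = 5.0, 0.0
    for _ in range(steps):
        action = max(-1.0, min(1.0, weights[0] * x + weights[1]))
        x += action
        total -= abs(x)
    return total

def evolve(generations=200, sigma=0.3, seed=0):
    """(1+1) evolution strategy: mutate the policy, keep the mutant if it is no worse."""
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_ret = episode_return(best)
    for _ in range(generations):
        cand = [w + rng.gauss(0, sigma) for w in best]
        ret = episode_return(cand)
        if ret >= best_ret:
            best, best_ret = cand, ret
    return best, best_ret

weights, ret = evolve()
print(weights, ret)
```

Population-based variants (larger offspring pools, crossover, fitness ranking) follow the same evaluate-select-vary loop, which is what lets them traverse large action spaces where value-gradient methods struggle.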

Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods

Procedia PDF Downloads 80
1052 Expectation during Improvisation: The Way It Influences the Musical Dialogue

Authors: Elisa Negretto

Abstract:

Improvisation is a fundamental form of musical practice, and an increasing amount of literature shows a particular interest in the consequences it might have in different kinds of social contexts. A relevant aspect of the musical experience is the ability to create expectations, which reflects a basic strategy of the human mind: an intentional movement toward the future that is based on previous experiences. Musical expectation – an unconscious tendency to project forward in time, to predict future sound events and the ongoing course of a musical experience – can be regarded as a process that strongly influences the listeners’ emotional and affective response to music, as well as their social and aesthetic experience. While improvising, composers, interpreters and listeners generate and exchange expectations, thus creating a dynamic dialogue and meaningful relationships. The aim of this paper is to investigate how expectation contributes to the creation of such a dialogue during the unfolding of the musical experience, and to what extent it influences the meaning music acquires during the performance. The difference between the ability to create expectations and the anticipation of the future course of the music will be questioned: does it influence in different ways the meaning of music and the kind of dialogical relationship established between musicians and between performers and audience? Such questions will be investigated with reference to recent research in music cognition and the analysis of a particular case: a free jazz performance during which musicians improvise and/or change the location of the sound source. The present paper is an attempt to provide new insights for investigating and understanding the cognitive mechanisms underlying improvisation as a musical and social practice. These insights contribute to the creation of a model applicable to many other social practices in which people have to build meaningful relationships and responses to environmental stimuli.

Keywords: anticipation, expectation, improvisation, meaning, musical dialogue

Procedia PDF Downloads 248
1051 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas

Authors: J. Szolomicki, H. Golasz-Szolomicka

Abstract:

The architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most densely populated city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in the creation of sophisticated designs and innovative engineering solutions, especially in the field of the design and construction of high-rise buildings. Foreign architectural studios (such as Jean Nouvel, Kohn Pedersen Fox Associates, and Skidmore, Owings & Merrill), which specialize in the design of skyscrapers, played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects completed by them, there are examples of high-rise buildings that set precedents for future development. An essential aspect which influences the design of high-rise buildings is the necessity to take into consideration their dynamic response to earthquakes and to counteract wind vortices. The need to control the motions of these buildings, induced by earthquake and wind forces, led to the development of various methods and devices for dissipating the energy which occurs during such phenomena. Currently, Japan is a global leader in seismic technologies which protect high-rise structures from seismic effects. Due to these achievements, the most modern skyscrapers in Tokyo are able to withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, requiring no additional power supply, or active, suppressing the response with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element to improve the efficiency of passive damping.

Keywords: core structures, damping system, high-rise building, seismic zone

Procedia PDF Downloads 175
1050 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference

Authors: Nasser S. Shebka

Abstract:

Fuzzy logic is used in complex adaptive systems where classical tools of knowledge representation are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, raised some inconsistencies and limitations in dealing with systems of increased complexity and with rules that apply to real-life situations; this hinders the inference process of such systems and also creates inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-value constant parameters derived from multi-valued logic. This set the basis for the three basic t-norms and their derived connectives; these t-norms are continuous functions, and any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some of the attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert systems reasoning, for example, to allow for inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles have been introduced, such as the specificity principle and the weakest link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method of resolving existing and potential rule conflicts by representing temporal modalities within defeasible inference rule-based systems.
Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy-reasoning-based system by introducing temporal modalities and Kripke's weak modal logic operators, in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, to resolve potential conflicts between these fuzzy rules. We addressed this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using branching-time temporal logic in combination with restricted first-order quantifiers, as well as propositional logic to represent the classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process in complex rule-based systems but also contribute to the fundamental methods of building rule bases, in a manner that allows for a wider range of applicable real-life situations from both a quantitative and a qualitative knowledge-representation perspective.
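The three basic continuous t-norms referred to above (minimum, product, and Łukasiewicz) are easy to state directly; a minimal sketch in Python:

```python
# The three basic continuous t-norms; any other continuous t-norm can be
# expressed as an ordinal sum of these. Each maps [0,1] x [0,1] -> [0,1]
# and generalizes logical conjunction to fuzzy truth values.

def t_min(a: float, b: float) -> float:
    """Minimum (Goedel) t-norm."""
    return min(a, b)

def t_prod(a: float, b: float) -> float:
    """Product t-norm."""
    return a * b

def t_luk(a: float, b: float) -> float:
    """Lukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)

# Conjunction of two fuzzy truth values under each t-norm:
a, b = 0.7, 0.6
conjunctions = (t_min(a, b), t_prod(a, b), t_luk(a, b))
```

Note the ordering t_luk(a, b) <= t_prod(a, b) <= t_min(a, b) for all inputs, with 1 acting as the identity element in every case; the choice of t-norm fixes which family of connectives a fuzzy rule base uses.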

Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities

Procedia PDF Downloads 92
1049 Internet of Health Things as a Win-Win Solution for Mitigating the Paradigm Shift inside Senior Patient-Physician Shared Health Management

Authors: Marilena Ianculescu, Adriana Alexandru

Abstract:

Internet of Health Things (IoHT) has already proved to be a persuasive means of supporting a proper assessment of living conditions by collecting a huge variety of data. For the customized health management of a senior patient, IoHT provides the capacity to build a dynamic solution that sustains the shift in the patient-physician relationship by allowing real-time, continuous remote monitoring of the senior's health status, well-being, safety, and activities, especially in a non-clinical environment. Thus, a win-win solution is created in which both the patient and the physician increase their involvement and shared decision-making, with significant outcomes. Health monitoring systems in smart environments are becoming a viable alternative to traditional healthcare solutions. The ongoing project "Non-invasive monitoring and health assessment of the elderly in a smart environment (RO-SmartAgeing)" aims to demonstrate that complete and accurate information is critical for assessing the health condition of seniors and for improving their well-being and health-related quality of life. The research performed within the project aims to highlight how managing the IoHT devices connected to the RO-SmartAgeing platform securely, through a role-based access control system, can allow physicians to provide health services with a level of efficiency and accessibility previously available only in hospitals. The project also aims to identify deficient aspects in the provision of health services tailored to a senior patient's specificity and to offer a more comprehensive perspective on proactive and preventive medical acts.
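The role-based access control mentioned above can be sketched generically: each role maps to a set of permissions, and an access request is granted only if the requester's role carries the needed permission. The role and permission names below are hypothetical illustrations, not the RO-SmartAgeing platform's actual vocabulary or API.

```python
# Minimal role-based access control (RBAC) sketch. Role and permission
# names here are hypothetical examples, not taken from RO-SmartAgeing.

ROLE_PERMISSIONS = {
    "physician": {"read_vitals", "read_history", "write_care_plan"},
    "senior_patient": {"read_vitals", "read_care_plan"},
    "informal_caregiver": {"read_vitals"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission.
    Unknown roles receive no permissions (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example checks: the physician may update the care plan, the caregiver may not.
physician_can_write = is_allowed("physician", "write_care_plan")
caregiver_can_write = is_allowed("informal_caregiver", "write_care_plan")
```

Deny-by-default lookup is the key design choice: a role absent from the mapping, or a permission absent from a role's set, is simply refused, which keeps the remote-monitoring data readable only by the parties the care relationship actually involves.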

Keywords: health management, internet of health things, remote monitoring, senior patient

Procedia PDF Downloads 100