Search results for: heritage values
1781 Architectural Approaches to a Sustainable Community with Floating Housing Units Adapting to Climate Change and Sea Level Rise in Vietnam
Authors: Nguyen Thi Thu Trang
Abstract:
Climate change and sea level rise are among the greatest challenges facing human beings in the 21st century. Because of sea level rise, several low-lying coastal areas around the globe are at risk of being completely submerged and disappearing under water. In Viet Nam in particular, the rise in sea level is predicted to result in more frequently and even permanently inundated coastal plains. As a result, the land reserves of coastal cities are going to narrow in the near future, while construction ground is becoming increasingly limited due to rapid population growth. Faced with this reality, solutions are being discussed not only in the traditional view, in which accommodation is raised or moved to higher areas or people “live with the water”, but also looking forward to “living on the water”. Therefore, the concept of a sustainable floating community with floating houses, based on the precious values of the long historical tradition of water dwellings in Viet Nam, would be a sustainable solution for adaptation to climate change and sea level rise in coastal areas. The sustainable floating community comprises sustainability in four components: architecture, environment, socio-economy, and living quality. This research paper focuses on the architectural component of the floating community. Through detailed architectural analysis of current floating houses and floating communities in Viet Nam, this research not only collects the precious values of traditional architecture that need to be preserved and developed in the proposed concept, but also identifies the weaknesses that need to be addressed for an optimal design of future sustainable floating communities.
Based on these studies, the research provides guidelines with appropriate architectural solutions for the concept of a sustainable floating community with floating housing units adapted to climate change and sea level rise in Viet Nam.
Keywords: guidelines, sustainable floating community, floating houses, Vietnam
Procedia PDF Downloads 518
1780 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries
Authors: Nessreen Y. Ibrahim
Abstract:
Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for making a huge improvement in developing countries' economies. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of new technologies imported for, and applied in, their markets. Thus, there is a strong need for a paradigm shift in the design process to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides an improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper was to build an innovative economic model to design a chosen possible futuristic scenario: by understanding future market needs, analyzing a real-world setting, solving the model questions by future-driven design, and finally interpreting the results, to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, we considered it a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point for designing the future.
Keywords: developing countries, economic models, future design, possible futures
Procedia PDF Downloads 267
1779 Gamipulation: Exploring Covert Manipulation through Gamification in the Context of Education
Authors: Aguiar-Castillo Lidia, Perez-Jimenez Rafael
Abstract:
The integration of gamification in educational settings aims to enhance student engagement and motivation through game design elements in learning activities. This paper introduces "Gamipulation," the subtle manipulation of students via gamification techniques serving hidden agendas without explicit consent. It highlights the need to distinguish between beneficial and exploitative uses of gamification in education, focusing on its potential to psychologically manipulate students for purposes misaligned with their best interests. Through a literature review and expert interviews, this study presents a conceptual framework outlining gamipulation's features. It examines ethical concerns like gradually introducing desired behaviors, using distraction to divert attention from significant learning objectives, immediacy of rewards fostering short-term engagement over long-term learning, infantilization of students, and exploitation of emotional responses over reflective thinking. Additionally, it discusses ethical issues in collecting and utilizing student data within gamified environments. Key findings suggest that while gamification can enhance motivation and engagement, there's a fine line between ethical motivation and unethical manipulation. The study emphasizes the importance of transparency, respect for student autonomy, and alignment with educational values in gamified systems. It calls for educators and designers to be aware of gamification's manipulative potential and strive for ethical implementation that benefits students. In conclusion, this paper provides a framework for educators and researchers to understand and address gamipulation's ethical challenges. It encourages developing ethical guidelines and practices to ensure gamification in education remains a tool for positive engagement and learning rather than covert manipulation.
Keywords: gradualness, distraction, immediacy, infantilization, emotion
Procedia PDF Downloads 27
1778 Body Types of Softball Players in the 39th National Games of Thailand
Authors: Nopadol Nimsuwan, Sumet Prom-in
Abstract:
The purpose of this study was to investigate the body types, size, and body composition of softball players in the 39th National Games of Thailand. The population of this study was 352 softball players who participated in the 39th National Games of Thailand, from which a sample size of 291 was determined using the Taro Yamane formula, with selection made by the stratified sampling method. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper arm circumference, calf circumference, and subcutaneous fat in the upper arm area, the scapula area, the area above the pelvis, and the mid-calf area. The Keys and Brozek formula was used to calculate fat mass, the Kitagawa formula to calculate muscle mass, and the Heath and Carter method to determine the body type values. The results of the study can be concluded as follows. The average body type of the male softball players was endo-mesomorph, while that of the female softball players was meso-endomorph. When considered according to softball position, it was found that the male softball players in every position had the endo-mesomorph body type, while the female softball players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players and the meso-endomorph body type for female softball players because these body types suit the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, people involved in selecting softball players for competitions at different levels should consider body type, size, and body composition of the players.
Keywords: body types, softball players, national games of Thailand, social sustainability
Procedia PDF Downloads 484
1777 Duration of the Disease in Systemic Sclerosis and Efficiency of Rituximab Therapy
Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva
Abstract:
Objectives: The duration of the disease could be one of the leading factors in the effectiveness of therapy in systemic sclerosis (SSc). The aim of the study was to assess how disease duration affects the changes of lung function in patients (pts) with interstitial lung disease (ILD) associated with SSc during long-term rituximab (RTX) therapy. Methods: We prospectively included 113 pts with SSc in this study; 85% of pts were female. Mean age was 48.1±13 years. 62 pts had the diffuse cutaneous subset of the disease, 40 the limited subset, and 11 an overlap. The mean disease duration was 6.1±5.4 years. Pts were divided into 2 groups depending on disease duration: group 1 (less than 5 years, 63 pts) and group 2 (more than 5 years, 50 pts). All pts received prednisolone at a mean dose of 11.5±4.6 mg/day, and 53 of them received immunosuppressants at inclusion. The parameters were evaluated over the following periods after initiation of RTX therapy: baseline (point 0), 13±2.3 mo (point 1), 42±14 mo (point 2), and 79±6.5 mo (point 3). The cumulative mean dose of RTX in group 1 was 1.7±0.6 g at point 1, 3.3±1.5 g at point 2, and 3.9±2.3 g at point 3; in group 2 it was 1.6±0.6 g at point 1, 2.7±1.5 g at point 2, and 3.7±2.6 g at point 3. The results are presented as mean values, delta (Δ), median (me), and upper and lower quartiles. Results: There was a significant increase in forced vital capacity % predicted (FVC) in both groups, but at points 1 and 2 the improvement was more significant in group 1. In group 2, an improvement of FVC was noted with longer follow-up. Diffusion capacity for carbon monoxide % predicted (DLCO) remained stable at point 1 and then significantly improved by the 3rd year of RTX therapy in both groups. In group 1 at point 1: ΔFVC was 4.7 (me=4; [-1.8;12.3])%, ΔDLCO = -1.2 (me=-0.3; [-5.3;3.6])%; at point 2: ΔFVC = 9.4 (me=7.1; [1;16])%, ΔDLCO = 3.7 (me=4.6; [-4.8;10])%; at point 3: ΔFVC = 13 (me=13.4; [2.3;25.8])%, ΔDLCO = 2.3 (me=1.6; [-5.6;11.5])%.
In group 2 at point 1: ΔFVC = 3.4 (me=2.3; [-0.8;7.9])%, ΔDLCO = 1.5 (me=1.5; [-1.9;4.9])%; at point 2: ΔFVC = 7.6 (me=8.2; [0;12.6])%, ΔDLCO = 3.5 (me=0.7; [-1.6;10.7])%; at point 3: ΔFVC = 13.2 (me=10.4; [2.8;15.4])%, ΔDLCO = 3.6 (me=1.7; [-2.4;9.2])%. Conclusion: Patients with early SSc show a quicker response to RTX therapy, already within 1 year of follow-up. Patients with a disease duration of more than 5 years also respond to therapy, but with longer treatment. RTX is an effective option for the treatment of ILD-SSc, regardless of the duration of the disease.
Keywords: interstitial lung disease, systemic sclerosis, rituximab, disease duration
Procedia PDF Downloads 23
1776 Multi-source Question Answering Framework Using Transformers for Attribute Extraction
Authors: Prashanth Pillai, Purnaprajna Mangsuli
Abstract:
Oil exploration and production companies invest considerable time and effort to extract essential well attributes (like well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing the page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and the related performance were studied on several real-life well technical reports.
Keywords: natural language processing, deep learning, transformers, information retrieval
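The page-ranking step described above — scoring candidate pages by the similarity of their embeddings to a query embedding — can be sketched as follows. This is a minimal illustration with toy 3-dimensional vectors, not the authors' pipeline; in the actual workflow the embeddings would come from transformer models and combine image and text representations.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_pages(query_emb, page_embs):
    # Return (page index, score) pairs sorted by descending similarity.
    scores = [(i, cosine(query_emb, e)) for i, e in enumerate(page_embs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 3-d embeddings standing in for transformer page/query embeddings.
query = [1.0, 0.0, 1.0]
pages = [[0.9, 0.1, 0.8],   # page 0: close to the query
         [0.0, 1.0, 0.0],   # page 1: unrelated
         [0.5, 0.5, 0.5]]   # page 2: partially related
ranking = rank_pages(query, pages)
print([i for i, _ in ranking])  # → [0, 2, 1]
```

In the described workflow, the top-ranked pages from this step would then be passed to the LayoutLM-based question answering module.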
Procedia PDF Downloads 193
1775 Clinical and Epidemiological Profile of Patients with Chronic Obstructive Pulmonary Disease in a Medical Institution from the City of Medellin, Colombia
Authors: Camilo Andres Agudelo-Velez, Lina María Martinez-Sanchez, Natalia Perilla-Hernandez, Maria De Los Angeles Rodriguez-Gazquez, Felipe Hernandez-Restrepo, Dayana Andrea Quintero-Moreno, Camilo Ruiz-Mejia, Isabel Cristina Ortiz-Trujillo, Monica Maria Zuluaga-Quintero
Abstract:
Chronic obstructive pulmonary disease is a common condition characterized by a persistent, partially reversible, and progressive blockage of airflow; it accounts for 5% of total deaths around the world and is expected to become the third leading cause of death by 2030. Objective: To establish the clinical and epidemiological profile of patients with chronic obstructive pulmonary disease in a medical institution in the city of Medellin, Colombia. Methods: A cross-sectional study was performed with a sample of 50 patients with a diagnosis of chronic obstructive pulmonary disease in a private institution in Medellin during 2015. The software SPSS vr. 20 was used for the statistical analysis. For the quantitative variables, averages, standard deviations, and maximum and minimum values were calculated, while for ordinal and nominal qualitative variables, proportions were estimated. Results: The average age was 73.5±9.3 years, 52% of the patients were women, 50% of them were retired, 46% were married, and 80% lived in the city of Medellín. The mean time since diagnosis was 7.8±1.3 years, and 100% of the patients were treated at the internal medicine service. The most common clinical features were: 36% were classified as class D for the disease, 34% had a FEV1 <30%, 88% had a history of smoking, and 52% had oxygen therapy at home. Conclusion: It was found that class D was the most common, and the majority of the patients had a history of smoking, indicating the need to strengthen promotion and prevention strategies in this regard.
Keywords: pulmonary disease, chronic obstructive, pulmonary medicine, oxygen inhalation therapy
Procedia PDF Downloads 444
1774 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have turned the 21st century into an era in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, the flow of capital has accelerated, and capital inflows have brought with them crisis contagion. Considering irrational human behavior, financial crises have become a fundamental problem for countries worldwide and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex nature of financial crises and their non-linear, largely unexplained structure have therefore been taken up by the new discipline of econophysics. As is known, although mechanisms for predicting financial crises exist, there is no definite method. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach to global financial crises. The aim is to define a model that can act before a financial crisis occurs, identify financial fragility at an earlier stage, and help public and private sector members, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina covering 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. Taking advantage of this analogy, which is based on currency movements and money mobility, the concept of Φ (Financial Flux) has been adopted for the pre-warning of a crisis.
The Φ (Financial Flux) values, obtained by the formula and used for the first time in this study, were analyzed with Matlab software, and in this context they were confirmed to give a pre-warning for the 2001 Turkey and Argentina crises.
Keywords: econophysics, financial crisis, Gauss's Law, physics
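The analogy rests on Gauss's law from electrostatics. One possible rendering of the correspondence (the abstract does not give the authors' exact formula, so the financial-side definitions below are illustrative) is:

```latex
% Gauss's law: the electric flux through a closed surface S
% is proportional to the charge enclosed by it.
\Phi_E = \oint_S \mathbf{E}\cdot \mathrm{d}\mathbf{A} = \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
% Illustrative econophysical analogue: a "financial flux" through the
% boundary of an economy, driven by a field of currency movements F
% and sourced by the net money mobility M inside that boundary.
\Phi = \oint_S \mathbf{F}\cdot \mathrm{d}\mathbf{A} \;\propto\; M_{\mathrm{enc}}
```

On this reading, a rapid change in the flux Φ across an economy's boundary plays the role of an early-warning signal, just as a change in enclosed charge changes the electric flux.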
Procedia PDF Downloads 153
1773 Effect of Mixture of Flaxseed and Pumpkin Seeds Powder on Hypercholesterolemia
Authors: Zahra Ashraf
Abstract:
Flax and pumpkin seeds are a rich source of unsaturated fatty acids, antioxidants, and fiber, known to have anti-atherogenic properties. Hypercholesterolemia is a state characterized by an elevated level of cholesterol in the blood. This research was designed to study the effect of a flax and pumpkin seed powder mixture on hypercholesterolemia and body weight. Albino rats were selected as a representative model for humans. Thirty male albino rats were divided into three groups: a control group, a CD-chol group (control diet + cholesterol) fed 1.5% cholesterol, and an FP-chol group (flaxseed and pumpkin seed powder + cholesterol) fed 1.5% cholesterol. Flax and pumpkin seed powder were mixed at a proportion of 5/1 (omega-3 to omega-6). Blood samples were collected to examine the lipid profile, and body weight was also measured. The data were subjected to analysis of variance. In the CD-chol group, body weight, plasma total cholesterol (TC), plasma triacylglycerides (TG), and plasma LDL-C significantly increased, with a decrease in plasma HDL (good cholesterol). In the FP-chol group, lipid parameters and body weight decreased significantly, with an increase in HDL and a decrease in LDL (bad cholesterol). The mean values of body weight, total cholesterol, triglycerides, low density lipoprotein, and high density lipoprotein in the FP-chol group were 240.66±11.35 g, 59.60±2.20 mg/dl, 50.20±1.79 mg/dl, 36.20±1.62 mg/dl, and 36.40±2.20 mg/dl, respectively. The flaxseed and pumpkin seed powder mixture reduced body weight, serum cholesterol, low density lipoprotein, and triglycerides, while a significant increase was shown in high density lipoprotein when it was given to hypercholesterolemic rats. Our results suggest that the flax and pumpkin seed mixture has hypocholesterolemic effects, probably mediated by the polyunsaturated fatty acids (omega-3 and omega-6) present in the seed mixture.
Keywords: hypercholesterolemia, omega 3 and omega 6 fatty acids, cardiovascular diseases
Procedia PDF Downloads 420
1772 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index
Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad
Abstract:
A road safety performance index is a composite index that combines various indicators of road safety into a single number. Development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on road facilities as well as road-user behavior. The secondary objectives include finding the critical inputs of the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators: protective systems (seat belt, helmet, etc.), the road (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected using an observation survey on a 20 km section of the National Highway N-125 road, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of the indicators to obtain a single value. An individual index score was calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The Simple Average technique, in turn, was used for equal weighting and linear aggregation of the indicators to develop an RSPI. The road sections are ranked according to their RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
Keywords: indicators, aggregation, principal component analysis, weighting, index score
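The Equal Weighting step — standardize each indicator across road sections, then average the standardized scores into one index per section — can be sketched as below. The indicator values are hypothetical, and the indicators are assumed to be oriented so that higher values mean better safety; the paper's PCA-based weighting is not reproduced here.

```python
import statistics

def standardize(values):
    # z-score standardization of one indicator across road sections.
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def rspi_equal_weight(indicator_columns):
    # indicator_columns: one list of values per safety performance indicator.
    # Equal weighting: the index is the plain average of standardized scores.
    standardized = [standardize(col) for col in indicator_columns]
    n_sections = len(indicator_columns[0])
    return [sum(col[i] for col in standardized) / len(standardized)
            for i in range(n_sections)]

# Hypothetical values for three road sections and three indicators
# (e.g., helmet-use rate, lane width in m, pedestrian-safety measure).
cols = [[0.8, 0.5, 0.2],
        [3.5, 3.0, 2.5],
        [10, 40, 80]]
scores = rspi_equal_weight(cols)
# Rank sections from best to worst index score.
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(ranking)
```

The PCA variant would replace the equal weights with weights derived from the leading principal components of the indicator correlation matrix, which is what the paper found more reliable.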
Procedia PDF Downloads 158
1771 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable when applied even to complex grayscale images. The validation has been performed using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a consistent reduction of the computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
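The Bayes-rule assignment step — computing each component's posterior responsibility for a gray level and keeping the maximum — can be sketched as follows. The two-component mixture parameters are made up for illustration; in the method itself they would come from Firefly-initialized EM estimation.

```python
import math

def gaussian_pdf(x, mean, sd):
    # Univariate normal density N(x; mean, sd).
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posteriors(x, weights, means, sds):
    # Bayes rule: responsibility of each component for gray level x,
    # i.e. weight_k * N(x; mean_k, sd_k) normalized over all components.
    joint = [w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(joint)
    return [j / total for j in joint]

def assign_pixel(x, weights, means, sds):
    # Assign gray level x to the component with maximal posterior.
    post = posteriors(x, weights, means, sds)
    return max(range(len(post)), key=lambda k: post[k])

# Hypothetical two-component mixture: a dark cluster near gray level 60
# and a bright cluster near 180.
weights, means, sds = [0.4, 0.6], [60.0, 180.0], [20.0, 25.0]
print(assign_pixel(70, weights, means, sds))   # → 0 (near the dark cluster)
print(assign_pixel(170, weights, means, sds))  # → 1 (near the bright cluster)
```

Because gray levels take at most 256 values, the responsibilities can be precomputed once per intensity, which is the computational saving the abstract mentions.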
Procedia PDF Downloads 217
1770 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery
Abstract:
Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate with a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of delta-V with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted for manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting the head acceleration and delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively.
Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI
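A best-fit line of the kind used in Model 1 is an ordinary least-squares regression of head acceleration on delta-V. The sketch below uses made-up (delta-V, head acceleration) pairs rather than the study's 139 crashes, and its coefficients are purely illustrative; the paper's own fitted equation is truncated in this abstract.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Purely illustrative (delta-V in km/h, head acceleration in g) pairs.
delta_v = [2.0, 4.0, 6.0, 8.0, 10.0]
head_acc = [1.0, 2.1, 2.9, 4.2, 5.0]
slope, intercept = fit_line(delta_v, head_acc)
print(round(slope, 3), round(intercept, 3))  # → 0.505 0.01
```

The quadratic and linear mixed models mentioned in the abstract extend this by adding a squared delta-V term and occupant-level random effects, respectively.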
Procedia PDF Downloads 233
1769 Subsurface Structures Delineation and Tectonic History Investigation Using Gravity, Magnetic and Well Data, In the Cyrenaica Platform, NE Libya
Authors: Mohamed Abdalla Saleem
Abstract:
Around one hundred wells have been drilled in the Cyrenaica platform, north-east Libya, and almost all of them were dry, although the drilled samples reveal good oil shows and good source rock maturity. Most of the Upper Cretaceous and younger depositional successions crop out in different places, so the structures related to the Cretaceous and the overlying Cenozoic are thoroughly understood and mapped, but the subsurface beneath these outcrops still needs more investigation and delineation. This study aims to answer some questions about the tectonic history and the types of structures distributed in the area using gravity, magnetic, and well data. According to the information obtained from groups of wells drilled in concessions 31, 35, and 37, the depositional sections become thicker and deeper southward. The topography map of the study area shows that the area is highly elevated in the north, about 300 m above sea level, while the minimum elevation (16-18 m) occurs near the middle (lat. 30°). South of this latitude, the area rises again (to more than 100 m). The third-order residual gravity map, constructed from the Bouguer gravity map, reveals that the area is dominated by a large negative anomaly acting as a sub-basin (245 km x 220 km), which implies a very thick depositional section and a very deep basement. The mentioned depocenter is surrounded by four gravity highs (12-37 mGal), indicating a shallow basement and a relatively thinner succession of sediments. The highest gravity values are located beside the coastline. The total horizontal gradient (THG) map reveals several structural systems: in the first system the structures are oriented NE-SW, and this is crosscut by a second regime extending NW-SE.
This second system is distributed through the whole area, but it is very strong and shallow near the coastline and in the south, while it is relatively deep in the middle depocenter area.
Keywords: Cyrenaica platform, gravity, structures, basement, tectonic history
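The total horizontal gradient used to produce the THG map is commonly computed as THG = sqrt((∂g/∂x)² + (∂g/∂y)²) on the gridded gravity field; its maxima trace the edges of density contrasts such as faults and contacts. A minimal sketch on a toy grid (the anomaly values are invented for illustration, not taken from the study):

```python
import math

def total_horizontal_gradient(grid, dx=1.0, dy=1.0):
    # THG(x, y) = sqrt((dG/dx)^2 + (dG/dy)^2), using central differences
    # on the interior of a regular gravity grid (rows = y, columns = x).
    ny, nx = len(grid), len(grid[0])
    thg = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (grid[j][i + 1] - grid[j][i - 1]) / (2 * dx)
            gy = (grid[j + 1][i] - grid[j - 1][i]) / (2 * dy)
            thg[j][i] = math.sqrt(gx * gx + gy * gy)
    return thg

# Toy Bouguer anomaly grid (mGal) with a sharp east-west contrast: the THG
# maximum tracks the edge of the dense block, i.e., a possible fault/contact.
g = [[10, 10, 30, 30],
     [10, 10, 30, 30],
     [10, 10, 30, 30],
     [10, 10, 30, 30]]
result = total_horizontal_gradient(g)
print(result[1][1], result[1][2])  # → 10.0 10.0
```

The ridge of high THG values between the two columns of the grid is the one-dimensional analogue of the NE-SW and NW-SE lineaments picked from the study's THG map.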
Procedia PDF Downloads 3
1768 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario
Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos
Abstract:
Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, it is necessary to maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that overall business performance is based on different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. It is essential that companies also evaluate and prioritize the strategic projects that need to be implemented, so that these are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposal incorporates the SAPEVO-M multicriteria method to indicate the degree of relevance of the different perspectives; thus, the strategic objectives linked to the most relevant perspectives carry greater weight in the classification of structural projects. Additionally, it is proposed to apply the concept of the Impact & Probability Matrix (I&PM) so that strategic projects are evaluated according to their relevance and impact on the business. Structuring the business's strategic management in this way ensures the alignment and prioritization of projects and actions related to strategic planning, directing resources towards the most relevant and impactful initiatives. The objective of this article is therefore to present a proposal for integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and obtain coherence in defining strategic projects aligned with the business vision, providing a robust decision-support process.
Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method
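One minimal way to sketch the combined scoring — BSC perspective weights (here simple stand-ins for SAPEVO-M outputs) multiplied by each project's impact and probability from the I&PM — is the following; all project names, weights, and scores are hypothetical, and the SAPEVO-M pairwise-ordinal procedure itself is not reproduced.

```python
def prioritize(projects, perspective_weights):
    # Score = impact * probability * weight of the linked BSC perspective.
    scored = []
    for name, perspective, impact, probability in projects:
        score = impact * probability * perspective_weights[perspective]
        scored.append((name, round(score, 3)))
    # Highest-scoring projects first.
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Hypothetical perspective weights (summing to 1) and candidate projects:
# (name, BSC perspective, impact on a 1-5 scale, probability of success 0-1).
weights = {"financial": 0.40, "customer": 0.30,
           "internal": 0.20, "learning": 0.10}
projects = [("New CRM rollout", "customer", 4, 0.8),
            ("Cost-reduction program", "financial", 5, 0.6),
            ("Staff training track", "learning", 3, 0.9)]
for name, score in prioritize(projects, weights):
    print(name, score)
```

In the article's proposal, the weights would come from the SAPEVO-M evaluation of the perspectives rather than being assigned directly as here.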
Procedia PDF Downloads 81
1767 Roundabout Implementation Analyses Based on Traffic Microsimulation Model
Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković
Abstract:
Roundabouts are a common choice in the case of reconstruction of an intersection, whether it is to improve the capacity of the intersection or traffic safety, especially in urban conditions. The regulation for the design of roundabouts is often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended in a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and his/her choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design elements data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety. Therefore, the paper analyzed the collected data to see if the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the real collected data, and the influence of the increase of traffic load and different traffic structures, as well as of the selected design elements on the capacity of the roundabout, were analyzed. Also, through the analysis of operating speeds and potential conflicts by application of the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location. 
The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective under present and changed traffic and design conditions.
Keywords: before-after analysis, operating speed, capacity, design
Procedia PDF Downloads 22
1766 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, degree of flow, and melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. Hardness of process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. 
Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.
Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
Procedia PDF Downloads 431
1765 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Although low birth weight (LBW) accounts for the largest share of neonatal mortality and morbidity, predicting which births will result in LBW early enough to prepare interventions remains challenging. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%), with a precision of 93.1% to 93.3%, F1-score of 92.7% to 93.1%, and ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the sustainable development goal (SDG) related to neonatal mortality.
Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
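The evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) follow directly from a binary confusion matrix. A minimal stdlib-Python sketch; the toy labels below are purely illustrative and are not the study's data:

```python
# Binary classification metrics from a confusion matrix.
# Label convention (1 = low birth weight) is an assumption for illustration.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 10 births, 4 truly LBW, one false positive, one false negative
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
```

In a 10-fold cross-validation, as used in the study, these metrics would be averaged over the ten held-out folds.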
Procedia PDF Downloads 22
1764 Groundwater Investigation Using Resistivity Method and Drilling for Irrigation during the Dry Season in Lwantonde District, Uganda
Authors: Tamale Vincent
Abstract:
Groundwater investigation is the study of underground formations to understand the hydrologic cycle, map known groundwater occurrences, and identify the nature and types of aquifers. Among the different groundwater investigation methods, the surface geophysical method, in particular the geoelectrical resistivity Schlumberger configuration, provides valuable information regarding the lateral and vertical successions of subsurface geomaterials in terms of their individual thicknesses and corresponding resistivity values. Besides the surface geophysical method, hydrogeological and geological investigation methods were also incorporated to aid the preliminary groundwater investigation. A groundwater investigation was carried out in Lwantonde District, which lies in the cattle corridor. The dry season troubles the communities in Lwantonde District, where 99% of the residents are farmers, making agriculture difficult and hampering the local government's delivery of social services to its people. The investigation was done using the geoelectrical resistivity Schlumberger configuration method. Measurements were taken at 17 points across three sub-counties. The study location is at 00°25'S, 31°10'E and covers an area of 160 square kilometers. Based on the geoelectrical data, two types of aquifers were found: open aquifers at depths ranging from six meters to twenty-two meters, and a confined aquifer at depths ranging from forty-five meters to eighty meters. In addition to the geoelectrical data, drilling with heavy equipment was done at an accessible point in Lwakagura village, Kabura sub-county. At the drilling point, an artesian well was obtained at a depth of eighty meters, with water able to rise to two meters above the soil surface. 
The artesian well is now used by residents to meet their needs for clean water and for irrigation, considering that most wells in this area have a high iron content.
Keywords: artesian well, geoelectrical, Lwantonde, Schlumberger
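For the Schlumberger configuration described above, apparent resistivity is obtained from the measured potential difference and injected current through the array's geometric factor, K = π(L² − l²)/(2l), with L = AB/2 and l = MN/2. A small sketch; the electrode spacings and readings below are illustrative assumptions, not the survey's data:

```python
import math

def schlumberger_rho_a(ab_half, mn_half, delta_v, current):
    # Geometric factor K = pi * (L^2 - l^2) / (2 l), L = AB/2, l = MN/2
    k = math.pi * (ab_half ** 2 - mn_half ** 2) / (2.0 * mn_half)
    # Apparent resistivity (ohm-m) = K * (measured dV) / (injected I)
    return k * delta_v / current

# Illustrative sounding reading (spacings in m, voltage in V, current in A)
rho = schlumberger_rho_a(ab_half=40.0, mn_half=5.0, delta_v=0.05, current=0.5)
```

Plotting rho against increasing AB/2 produces the sounding curve from which layer thicknesses and resistivities are interpreted.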
Procedia PDF Downloads 124
1763 Energy Options and Environmental Impacts of Carbon Dioxide Utilization Pathways
Authors: Evar C. Umeozor, Experience I. Nduagu, Ian D. Gates
Abstract:
The energy requirements of carbon dioxide utilization (CDU) technologies/processes are diverse, as are their environmental footprints. This paper explores the energy and environmental impacts of systems for CO₂ conversion to fuels, chemicals, and materials. Energy needs of the technologies and processes deployable in CO₂ conversion systems are met by one or combinations of hydrogen (chemical), electricity, heat, and light. Likewise, the environmental footprint of any CO₂ utilization pathway depends on the systems involved. So far, evaluation of CDU systems has been constrained to a particular energy source/type or a subset of the overall system needed to make CDU possible. This introduces limitations to the general understanding of the energy and environmental implications of CDU, which has led to various pitfalls in past studies. A CDU system has an energy source, CO₂ supply, and conversion units. We apply a holistic approach to consider the impacts of all components in the process, including various sources of energy, CO₂ feedstock, and conversion technologies. The electricity sources include nuclear power, renewables (wind and solar PV), gas turbine, and coal. Heat is supplied from either electricity or natural gas, and hydrogen is produced from either steam methane reforming or electrolysis. The CO₂ capture unit uses either direct air capture or post-combustion capture via amine scrubbing. Where applicable, integrated configurations of the CDU system are explored. We demonstrate how the overall energy and environmental impacts of each utilization pathway are obtained by aggregating the values for all components involved. Proper accounting of the energy and emission intensities of CDU must incorporate total balances for the utilization process and differences in timescales between alternative conversion pathways. 
Our results highlight opportunities for the use of clean energy sources, direct air capture, and a number of promising CO₂ conversion pathways for producing methanol, ethanol, synfuel, urea, and polymer materials.
Keywords: carbon dioxide utilization, processes, energy options, environmental impacts
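The holistic accounting described above, aggregating energy and emissions over every component of a CDU system (energy source, capture unit, conversion unit), can be sketched as a simple sum over component inventories. All component names and figures below are placeholders for illustration, not values from the study:

```python
# Hypothetical per-component inventories for one CDU pathway:
# energy demand (MJ per kg CO2 utilized) and emissions (kg CO2e per kg CO2).
components = {
    "electricity (wind)":      {"energy_MJ": 2.0, "emissions_kg": 0.02},
    "direct air capture":      {"energy_MJ": 6.5, "emissions_kg": 0.10},
    "methanol synthesis unit": {"energy_MJ": 4.0, "emissions_kg": 0.15},
}

def pathway_totals(components):
    # Aggregate the pathway footprint over all system components
    total_energy = sum(c["energy_MJ"] for c in components.values())
    total_emissions = sum(c["emissions_kg"] for c in components.values())
    return total_energy, total_emissions

energy, emissions = pathway_totals(components)
```

Swapping one component's inventory (e.g., coal electricity for wind) changes the pathway totals directly, which is how alternative configurations would be compared.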
Procedia PDF Downloads 147
1762 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
The concept of Roman domination in graphs was introduced in 2004. This concept was initially inspired by and related to the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are established, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to, at least, a 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, having minimum weight (sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all the Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we start the study of a generalization of RDF in which we consider that any undefended place should be defended from a sudden attack by, at least, k legions. These legions can be deployed in the city or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) represents the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function and is denoted by [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs. 
This generalizes a question that had already been resolved for other variants. Given the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs, we present several upper and lower bounds that permit us to estimate it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
Keywords: multiple Roman domination function, decision problem NP-complete, bounds, exact values
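The Roman domination number defined above can be computed by brute force for small graphs, checking every labeling in {0, 1, 2}^V against the RDF condition. A sketch (exponential in |V|, so for illustration only):

```python
from itertools import product

def is_rdf(labels, adj):
    # Roman domination condition: every 0-vertex has a neighbour labelled 2
    return all(
        any(labels[v] == 2 for v in adj[u])
        for u in range(len(labels))
        if labels[u] == 0
    )

def roman_domination_number(adj):
    # Brute-force gamma_R(G); adj maps each vertex to its neighbour list
    n = len(adj)
    return min(sum(f) for f in product((0, 1, 2), repeat=n) if is_rdf(f, adj))

# Path P3 (0-1-2): labelling the centre with 2 defends both ends, weight 2
p3 = {0: [1], 1: [0, 2], 2: [1]}
# Cycle C4 (0-1-2-3-0): one 2-vertex plus one 1-vertex suffice, weight 3
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
gamma_p3 = roman_domination_number(p3)
gamma_c4 = roman_domination_number(c4)
```

The NP-completeness result in the abstract is precisely why, for general graphs, bounds rather than exhaustive search are the practical tool.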
Procedia PDF Downloads 108
1761 Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride- Ethylenediamine-Grafted Graphene Oxide Sheets Combined with Magnetic Nanoparticles: Remove Metal Ions and Catalytic Application
Authors: Laroussi Chaabane, Amel El Ghali, Emmanuel Beyou, Mohamed Hassen V. Baouab
Abstract:
In this research, graphene oxide sheets were functionalized with ethylenediamine (EDA), followed by the grafting of a bis(2-pyridylmethyl)amino group (BPED) onto the activated graphene oxide sheets in the presence of chloroacetyl chloride (CAC), and then combined with magnetic nanoparticles (Fe₃O₄ NPs) to produce a magnetic graphene-based composite [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. The physicochemical properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] composites were investigated by Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). Additionally, the catalysts can be easily recycled within ten seconds by using an external magnetic field. Moreover, [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] was used for removing Cu(II) ions from aqueous solutions in a batch process. The effects of pH, contact time, and temperature on metal ion adsorption were investigated; adsorption was found to be only weakly dependent on ionic strength. The maximum adsorption capacity of Cu(II) on the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] at pH 6 is 3.46 mmol·g⁻¹. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. Results showed that the pseudo-second-order equation was appropriate to describe Cu(II) adsorption by [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. Adsorption data were further analyzed by the Langmuir, Freundlich, and Jossens adsorption approaches. Additionally, the adsorption properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED], its reusability (more than 6 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II) from water. Based on the results obtained, we report the activity of Cu(II) supported on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] as a catalyst for the cross-coupling of symmetric alkynes.
Keywords: graphene, magnetic nanoparticles, adsorption kinetics/isotherms, cross coupling
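The pseudo-second-order kinetic model cited above is commonly fitted in its linearized form t/qₜ = 1/(k₂qₑ²) + t/qₑ, so that an ordinary least-squares line through (t, t/qₜ) recovers qₑ from the slope and k₂ from the intercept. A sketch with synthetic data; only qₑ = 3.46 mmol·g⁻¹ comes from the abstract, while the rate constant k₂ = 0.05 is an assumed value:

```python
def pso_q(t, qe, k2):
    # Integrated pseudo-second-order form: q_t = qe^2 k2 t / (1 + qe k2 t)
    return qe * qe * k2 * t / (1.0 + qe * k2 * t)

def fit_pso(times, q):
    # Linearized fit: t/q_t = 1/(k2 qe^2) + t/qe  ->  OLS line through (t, t/q_t)
    xs = list(times)
    ys = [t / qt for t, qt in zip(times, q)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    qe = 1.0 / slope               # slope = 1/qe
    k2 = slope ** 2 / intercept    # intercept = 1/(k2 qe^2)
    return qe, k2

# Synthetic uptake data generated from the model itself (exactly linearizable)
times = [5.0, 10.0, 20.0, 40.0, 80.0]
q = [pso_q(t, 3.46, 0.05) for t in times]
qe_fit, k2_fit = fit_pso(times, q)
```

With real kinetic data the fit is of course approximate, and the quality of the straight line is what justified the pseudo-second-order choice in the study.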
Procedia PDF Downloads 139
1760 Fatty Acid Structure and Composition Effects of Biodiesel on Its Oxidative Stability
Authors: Gelu Varghese, Khizer Saeed
Abstract:
Biodiesel is a mixture of mono-alkyl esters of long-chain fatty acids derived from vegetable oils or animal fats. Recent studies in the literature suggest that an end property of biodiesel such as its oxidative stability (OS) is influenced more by the structure and composition of its alkyl esters than by environmental conditions. The structure and composition of these long-chain fatty acid components have also been associated with trends in cetane number, heat of combustion, cold-flow properties, viscosity, and lubricity. In the present work, a detailed investigation has been carried out to decouple and correlate the fatty acid structure indices of biodiesel, such as degree of unsaturation, chain length, double bond orientation, and composition, with its oxidative stability. Measurements were taken using the EN14214 established Rancimat oxidative stability test method (EN141120). Firstly, the effects of the degree of unsaturation, chain length, and bond orientation were tested for the pure fatty acids to establish their oxidative stability. Results for pure fatty acids show that saturated FAs are more stable to oxidation than unsaturated ones; superior oxidative stability can be achieved by blending biodiesel fuels with relatively high saturated fatty acid contents. A lower oxidative stability is noticed when a greater quantity of double bonds is present in the methyl ester. A strong inverse relationship between the number of double bonds and the Rancimat IP values can be identified. The trans isomer methyl elaidate shows superior stability to oxidation compared with its cis isomer methyl oleate (7.2 vs. 2.3). Secondly, the effects of variation in the composition of the biodiesel were investigated and established. Finally, biodiesels with varying structure and composition were investigated and correlated.
Keywords: biodiesel, FAME, oxidative stability, fatty acid structure, acid composition
Procedia PDF Downloads 286
1759 3D Biomechanical Analysis in Shot Put Techniques of International Throwers
Authors: Satpal Yadav, Ashish Phulkar, Krishna K. Sahu
Abstract:
Aim: The research aims at performing a 3-dimensional (3D) biomechanical analysis of the shot put techniques of international throwers to evaluate performance. Research Method: The researcher adopted the descriptive method, and Pearson's product-moment correlation was used to correlate the biomechanical parameters with the performance of the shot put throw. In all the analyses, the 5% critical level (p ≤ 0.05) was considered to indicate statistical significance. Research Sample: Eight (N=08) international shot putters in the male category using the rotational/glide technique were selected as subjects for the study. The researcher used the following instruments to obtain reliable measurements: the tesscorn slow-motion camera, specialized motion-analyzer software, a 7.260 kg shot (for a male shot-putter), and a steel tape. All measurements pertaining to the biomechanical variables were taken by the principal investigator so that the data collected for the present study could be considered reliable. Results: The findings of the study showed that significant negative relationships with performance were obtained for the angular velocity of the right shoulder (-0.70) and the acceleration distance (-0.72) at pre-flight; values were obtained for the angular displacement of the knee (0.81), angular velocity of the right shoulder (0.75), and acceleration distance (0.71) at flight, and for the angular velocity of the right shoulder (0.77) and acceleration distance (0.79) at the transition phase; and for the angular displacement of the knee (0.76), angular velocity of the right shoulder (0.77), release velocity of the shot (-0.83), angle of release (-0.79), height of release (-0.77), projected distance (0.99), and measured distance (1.00), the obtained values were found to be higher than the tabulated value at the 0.05 level of significance. 
On the other hand, an insignificant relationship with shot put performance was found for acceleration distance (m), angular displacement of the shot, C.G. at release, and horizontal release distance.
Keywords: biomechanics, analysis, shot put, international throwers
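The Pearson product-moment correlation used throughout the study can be sketched in a few lines of stdlib Python; the release-velocity and distance figures below are hypothetical, not the study's measurements:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient: cov(x, y) / (s_x * s_y)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical release velocities (m/s) and measured distances (m)
velocity = [12.5, 13.0, 13.4, 13.8]
distance = [18.2, 19.1, 19.8, 20.6]
r = pearson_r(velocity, distance)
```

Significance at p ≤ 0.05, as in the study, would then be judged by comparing |r| against the tabulated critical value for the sample size.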
Procedia PDF Downloads 187
1758 Fragility Analysis of a Soft First-Story Building in Mexico City
Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana
Abstract:
On 09/19/2017, a Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story; so, it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the 09/19/2017 Mexico earthquake, is analyzed. Three different structural solutions of the 5-story building are considered: S1) it is designed in accordance with the Mexico City Building Code-2004; S2) the column dimensions of the first story corresponding to S1 are then reduced; and S3) viscous dampers are added at the first story of solution S2. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they correspond to ground motions recorded in intermediate soil of Mexico City with a dominant period around 1 s, where the structures are located. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demands. Results show that solutions S1 and S3 give place to similar probabilities of exceedance of a given value of inter-story drift for the same seismic intensity, and that solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand. 
Therefore, it is concluded that solution S3 (which corresponds to the building with a soft first story and energy dissipation devices) can be a reliable solution from the structural point of view.
Keywords: demand hazard analysis, fragility curves, incremental dynamic analyses, soft first story, structural capacity
Procedia PDF Downloads 178
1757 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey
Authors: Mahdiyeh Zafaranchi
Abstract:
With the rapid development of urbanization and the improvement of living standards in the world, energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of this, energy-saving issues have become more important among engineers. The building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need for reducing energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen in all steps of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is saving energy during the life expectancy of the building by using passive and active systems, without sacrificing comfort and quality to reach these goals. The main aim of this study is to investigate passive strategies (which do not need energy consumption or use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored in the eQuest software using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of materials, windows, shading devices, thermal insulation, the rate of exposed envelope, window/wall ratio, and the lighting system on the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was indicated. After analyzing the base model in eQuest, a final scenario was suggested which had a good energy performance. The results showed that decreases in the U-values of materials, the rate of exposed envelope, and the window properties had a significant effect on energy consumption. 
Finally, the suggested model achieved annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption.
Keywords: efficient building, electric and gas consumption, eQuest, passive parameters
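The role of the U-value noted above follows from the steady-state fabric loss relation Q = U·A·ΔT, so lowering the U-value cuts conduction loss proportionally. A sketch with assumed wall figures (illustrative, not the eQuest model's values):

```python
def fabric_heat_loss(u_value, area, delta_t):
    # Steady-state conduction loss through an envelope element: Q = U * A * dT (W)
    return u_value * area * delta_t

# Assumed wall: 100 m^2 of envelope, 15 K indoor-outdoor temperature difference
base = fabric_heat_loss(1.8, 100.0, 15.0)      # poorly insulated wall
retrofit = fabric_heat_loss(0.4, 100.0, 15.0)  # insulated wall
saving = 1.0 - retrofit / base                 # fractional reduction in loss
```

A whole-building tool such as eQuest accounts for many interacting loads beyond fabric conduction (solar gains, lighting, infiltration), which is why the simulated savings are far smaller than this single-element fraction.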
Procedia PDF Downloads 112
1756 Bi- and Tri-Metallic Catalysts for Hydrogen Production from Hydrogen Iodide Decomposition
Authors: Sony, Ashok N. Bhaskarwar
Abstract:
Production of hydrogen from a renewable raw material without any co-synthesis of harmful greenhouse gases is the current need for sustainable energy solutions. The sulfur-iodine (SI) thermochemical cycle, using intermediate chemicals, is an efficient process for producing hydrogen at a much lower temperature than that required for the direct splitting of water. No net byproduct forms in the cycle. Hydrogen iodide (HI) decomposition is a crucial reaction in this cycle, as the product, hydrogen, forms only in this step. It is an endothermic, reversible, and equilibrium-limited reaction. The theoretical equilibrium conversion at 550°C is a meager 24%. There is a growing interest, therefore, in enhancing the HI conversion to near-equilibrium values at lower reaction temperatures and by possibly improving the rate. The reaction is relatively slow without a catalyst, and hence catalytic decomposition of HI has gained much significance. Bi-metallic Ni-Co, Ni-Mn, and Co-Mn, and tri-metallic Ni-Co-Mn catalysts over zirconia support were tested for the HI decomposition reaction. The catalysts were synthesized via a sol-gel process wherein Ni was 3 wt% in all the samples, and Co and Mn had equal weight ratios in the Co-Mn catalyst. Powder X-ray diffraction and Brunauer-Emmett-Teller surface area characterizations indicated the polycrystalline nature and well-developed mesoporous structure of all the samples. The experiments were performed in a vertical laboratory-scale packed bed reactor made of quartz, and HI (55 wt%) was fed along with nitrogen at a WHSV of 12.9 hr⁻¹. Blank experiments at 500°C for HI decomposition suggested conversion of less than 5%. The activities of all the different catalysts were checked at 550°C, and the highest conversion of 23.9% was obtained with the tri-metallic 3Ni-Co-Mn-ZrO₂ catalyst. The decreasing order of the performance of catalysts could be expressed as: 3Ni-Co-Mn-ZrO₂ > 3Ni-2Co-ZrO₂ > 3Ni-2Mn-ZrO₂ > 2.5Co-2.5Mn-ZrO₂. 
The tri-metallic catalyst remained active for 360 min at 550°C without any observable drop in its activity/stability. Among the explored catalyst compositions, the tri-metallic catalyst certainly has a better performance for HI conversion when compared to the bi-metallic ones. Owing to their low costs and ease of preparation, these tri-metallic catalysts could be used for large-scale hydrogen production.
Keywords: sulfur-iodine cycle, hydrogen production, hydrogen iodide decomposition, bi- and tri-metallic catalysts
Procedia PDF Downloads 187
1755 Leadership in the Era of AI: Growing Organizational Intelligence
Authors: Mark Salisbury
Abstract:
The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our livelihood but for the jobs that may be lost to our kids. We worry about what our place will be as human beings in this new economy, where much of it will be conducted online in the metaverse, a network of 3D virtual worlds, working with intelligent machines. The Future of Leadership was written to address these fears and show what our place will be, the right place, in this new economy of AI avatars, automation, and 3D virtual worlds. But to be successful in this new economy, our job will be to bring wisdom to our workplace and the marketplace. And we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars that we will work with in the metaverse. It's about building organizational intelligence (OI), the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, it is the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins the story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: Recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors. Recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers. Apply personal and professional values, personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind, to improve your work performance and lead others. Exhibit an attitude of confidence, courage, and reciprocity of sharing knowledge to increase your productivity and influence others. 
Leverage artificial intelligence to accelerate your ability to learn, augment your decision-making, and influence others. Utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.
Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker
Procedia PDF Downloads 43
1754 In Silico Exploration of Quinazoline Derivatives as EGFR Inhibitors for Lung Cancer: A Multi-Modal Approach Integrating QSAR-3D, ADMET, Molecular Docking, and Molecular Dynamics Analyses
Authors: Mohamed Moussaoui
Abstract:
A series of thirty-one potential inhibitors targeting the epidermal growth factor receptor (EGFR) kinase, derived from quinazoline, underwent 3D-QSAR analysis using CoMFA and CoMSIA methodologies. The training and test sets of quinazoline derivatives were utilized to construct and validate the QSAR models, respectively, with dataset alignment performed using the lowest-energy conformer of the most active compound. The best-performing CoMFA and CoMSIA models demonstrated impressive determination coefficients, with R² values of 0.981 and 0.978, respectively, and leave-one-out cross-validation determination coefficients, Q², of 0.645 and 0.729, respectively. Furthermore, external validation using a test set of five compounds yielded predicted determination coefficients, R² test, of 0.929 and 0.909 for CoMFA and CoMSIA, respectively. Building upon these promising results, eighteen new compounds were designed and assessed for drug-likeness and ADMET properties through in silico methods. Additionally, molecular docking studies were conducted to elucidate the binding interactions between the selected compounds and the enzyme. Detailed molecular dynamics simulations were performed to analyze the stability, conformational changes, and binding interactions of the quinazoline derivatives with the EGFR kinase. These simulations provided deeper insights into the dynamic behavior of the compounds within the active site. This comprehensive analysis enhances the understanding of quinazoline derivatives as potential anti-cancer agents and provides valuable insights for lead optimization in the early stages of drug discovery, particularly for developing highly potent anticancer therapeutics.
Keywords: 3D-QSAR, CoMFA, CoMSIA, ADMET, molecular docking, quinazoline, molecular dynamics, EGFR inhibitors, lung cancer, anticancer
Procedia PDF Downloads 48
1753 Study of a Lean Premixed Combustor: A Thermo Acoustic Analysis
Authors: Minoo Ghasemzadeh, Rouzbeh Riazi, Shidvash Vakilipour, Alireza Ramezani
Abstract:
In this study, the thermoacoustic oscillations of a lean premixed combustor have been investigated, and a one-dimensional code was developed for this purpose. The linearized equations of motion are solved for perturbations with time dependence e^(iωt). Two flame models were considered in this paper, and the effects of mean flow and boundary conditions were also investigated. After manipulation of the flame heat release equation together with the equations of flow perturbation within the main components of the combustor model (i.e., plenum, premixed duct, and combustion chamber), and by considering proper boundary conditions between the components of the model, a system of eight homogeneous equations can be obtained. This simplification, for the main components of the combustor model, is convenient since low-frequency acoustic waves are not affected by bends. Moreover, some elements in the combustor are smaller than the wavelength of the propagated acoustic perturbations. A convection time is also assumed to characterize the time required for the acoustic velocity fluctuations to travel from the point of injection to the location of the flame front in the combustion chamber. The influence of an extended flame model on the acoustic frequencies of the combustor was also investigated, assuming the effect of flame speed, as a function of the equivalence ratio perturbation, on the rate of flame heat release. The abovementioned system of equations has a related eigenvalue equation with complex roots. The sign of the imaginary part of these roots determines whether the disturbances grow or decay, and the real part gives the frequency of the modes. The results show a reasonable agreement between the predicted values of dominant frequencies in the present model and those calculated in previous related studies.
Keywords: combustion instability, dominant frequencies, flame speed, premixed combustor
Procedia PDF Downloads 379
1752 The Effects of Nanoemulsions Based on Commercial Oils for the Quality of Vacuum-Packed Sea Bass at 2±2°C
Authors: Mustafa Durmuş, Yesim Ozogul, Esra Balıkcı, Saadet Gokdoğan, Fatih Ozogul, Ali Rıza Köşker, İlknur Yuvka
Abstract:
Food scientists and researchers have been seeking new ways to improve the nutritional value of foods. The application of nanotechnology techniques to the food industry may allow the modification of food texture, taste, sensory attributes, coloring strength, processability, and stability during the shelf life of products. In this research, the effects of nanoemulsions based on commercial oils on vacuum-packed sea bass fillets stored at 2±2°C were investigated in terms of sensory, chemical (total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA), peroxide value (PV), free fatty acids (FFA), pH, and water holding capacity (WHC)), and microbiological qualities (total anaerobic bacteria and total lactic acid bacteria). The physical properties of the emulsions (viscosity, droplet particle size, thermodynamic stability, refractive index, and surface tension) were determined. The nanoemulsion preparation method was based on the high-energy principle, using an ultrasonic homogenizer. Sensory analyses of the raw fish showed that the demerit points of the control group were higher than those of the treated groups. The sensory scores (odour, taste, and texture) of the cooked fillets decreased with storage time, especially in the control. Results obtained from the chemical and microbiological analyses also showed that nanoemulsions significantly (p<0.05) decreased the values of biochemical parameters and the growth of bacteria during the storage period, thus improving the quality of vacuum-packed sea bass.
Keywords: quality parameters, nanoemulsion, sea bass, shelf life, vacuum packing
Procedia PDF Downloads 459