Search results for: satisfaction functions
187 The Institutional Change Occurring in the Chinese Sport Sector: A Case Study on the Chinese Football Association Reform
Authors: Qi Peng
Abstract:
The Chinese sport sector is currently undergoing a dramatic institutional change. A sport system that was heavily dominated by the government is starting to shift towards one that is driven by the market. During the past sixty years, the Chinese Football Association (CFA), although ostensibly a 'non-governmental organization', has in fact operated under the close supervision and control of the government. The double identity of the CFA has taken most of the blame for the poor performance of the Chinese football teams, especially the men's team. In 2015, a policy initiated by the Chinese government introduced a potentially radical change to the institutional structure of the CFA by delegating the power of the government agency - the General Administration of Sport of China - to the organization (CFA) itself. Against this background, an overarching research question arises: will an organization that has remained institutionalized within the system change in response to an external (policy) jolt? To answer this question, three principal data collection methods were employed: document review, participant observation, and semi-structured interviews. The document review maps the structural and cultural framework in which the CFA functions during the change process. The author interacted closely with the organization as a participant observer for a period long enough to collect the data, but not so long as to develop a biased view of the situation. This stage enabled the author to gain an in-depth understanding of how the CFA managed to restructure its governance and legitimacy. Semi-structured interviews with staff within the CFA and with staff from selected stakeholders of the CFA provided a further crucial step towards understanding the factors driving the change as well as its implications. The wide range of interviewees includes: CFA members (senior officials and staff); local football association members; senior Chinese Super League football club managers; CFA Super League Co., Ltd. (senior officials and staff); CSL broadcasters; and Chinese Olympic Committee members. The preliminary research data show that the CFA is currently undergoing change on two levels: although the setting of the CFA has been gradually restructured (organizational framework), the organizational values and beliefs remain almost the same as before the reform. This means that the planned shift from a governmental agency to an autonomous association is an ongoing process, and that organizational core beliefs and values are more difficult to change than the structural framework. This is due to the inertia of the organizational history and the effect of institutionalization. The change of the Chinese Football Association is regarded as that of a pioneering sport organization in China taking the 'decoupling' road. It is believed that many other sport organizations, especially sport governing bodies, will follow in the steps of the CFA in the near future. The experience of the CFA's change is therefore worth studying.
Keywords: Chinese Football Association, organizational change, organizational culture, structural framework
Procedia PDF Downloads 344
186 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a shared file storage system), and security groups offers several key benefits for organizations. Creating a performance test framework with this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain", while the slave instances operate as the "body parts". The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
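To make the master-slave execution concrete, here is a minimal sketch (not the author's framework) of how the master node can launch a distributed JMeter run across slave EC2 instances from the command line; the slave IP addresses and file names are hypothetical placeholders.

```python
# Minimal sketch of a distributed JMeter run driven from the master node.
# Slave IPs and file paths are assumptions for illustration; JMeter must be
# installed on the master and every slave, and each slave must already be
# running the JMeter server process.
import subprocess

SLAVE_IPS = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]  # hypothetical EC2 slaves

def run_distributed_test(test_plan: str, results_file: str) -> None:
    """Run a JMeter test plan in non-GUI mode against all remote slaves."""
    cmd = [
        "jmeter",
        "-n",                       # non-GUI mode
        "-t", test_plan,            # test plan (.jmx) executed by the slaves
        "-R", ",".join(SLAVE_IPS),  # remote (slave) hosts that generate load
        "-l", results_file,         # consolidated results log on the master
    ]
    subprocess.run(cmd, check=True)

run_distributed_test("load_test.jmx", "results.jtl")
```

The master only coordinates and aggregates results; adding another slave IP to the -R list is how the framework scales out to higher request volumes.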
Procedia PDF Downloads 27
185 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment
Authors: Ryan C. Igama
Abstract:
The complexity of disaster risk reduction management has paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disaster-related situations. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue, and retrieval teams. Indeed, the assistance provided by search, rescue, and retrieval teams during disaster operations is a critical service needed to further minimize the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief (HADR) operations during calamities and disasters. Thus, this study, "Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment", was intended to provide substantial information to further strengthen and promote the capabilities of search and rescue operations in the Philippines. Further, this study also aims to assess the interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force. The study was undertaken covering these component units of the Philippine Air Force (PAF) of the Armed Forces of the Philippines - specifically the 505th SRG and the 205th THW - as the involved units, whose personnel also acted as the respondents of the study. A qualitative approach was utilized, in the form of focused group discussions, key informant interviews, and documentary analysis, as the primary means of obtaining the needed data for the study. Essentially, this study was geared towards evaluating the effectiveness of the interoperability of the two (2) involved PAF units during search and rescue operations. Further, it also delved into the identification of the impacts, gaps, and challenges confronted regarding interoperability as to training, equipment, and coordination mechanisms, vis-à-vis the needed measures for improvement. The results of the study regarding the interoperability of the two (2) PAF units during search and rescue operations showed that there was duplication of functions or tasks in HADR activities, specifically during the conduct of air rescue operations in situations such as calamities. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, which is a vital element during calamity response activities. Based on the findings of the study, it was recommended that a strategic planning workshop/activity be conducted on the duties and responsibilities of the personnel involved in search and rescue operations, to address the command and control and interoperability issues of these units. Additionally, intensive HADR-related training for the personnel involved in the search and rescue operations of the two (2) PAF units must also be conducted so they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies must be updated to adapt to evolving situations in search and rescue operations.
Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response
Procedia PDF Downloads 93
184 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that will provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested with a two-year-old boy with Down syndrome over eight sessions. The child presented delays throughout his motor development, the current one being in the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, which is a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
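To make the smoothing step concrete, the sketch below estimates MDP transition probabilities from a few observed (state, action, next state) triples using additive (Laplace) smoothing, so transitions never observed in a small data set still receive non-zero probability. The states, actions, and data are toy values, and the paper's exact smoothing technique may differ.

```python
# Additive (Laplace) smoothing of MDP transition counts, as one way to learn
# model parameters from very small data sets. States/actions are hypothetical.
from collections import defaultdict
import itertools

STATES = ["idle", "crawl", "step"]   # toy child-activity states
ACTIONS = ["approach", "signal"]     # toy robot actions

def estimate_transitions(episodes, alpha=1.0):
    """Return smoothed transition probabilities P[s][a][s']."""
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
    for s, a, s_next in episodes:
        counts[s][a][s_next] += 1.0
    probs = {}
    for s, a in itertools.product(STATES, ACTIONS):
        total = sum(counts[s][a][sn] for sn in STATES) + alpha * len(STATES)
        probs.setdefault(s, {})[a] = {
            sn: (counts[s][a][sn] + alpha) / total for sn in STATES
        }
    return probs

# Two observations still yield a full, usable transition model:
data = [("idle", "signal", "crawl"), ("crawl", "approach", "step")]
model = estimate_transitions(data)
print(model["idle"]["signal"])  # every next state keeps non-zero probability
```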
Procedia PDF Downloads 292
183 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
Procedia PDF Downloads 5
182 Nursing Experience in the Intensive Care of a Lung Cancer Patient with Pulmonary Embolism on Extracorporeal Membrane Oxygenation
Authors: Huang Wei-Yi
Abstract:
Objective: This article explores the intensive care nursing experience of a lung cancer patient with pulmonary embolism who was placed on ECMO. Following a sudden change in the patient's condition and a consensus reached during a family meeting, the decision was made to withdraw life-sustaining equipment and collaborate with the palliative care team. Methods: The nursing period was from October 20 to October 27, 2023. The author monitored physiological data, observed and provided direct care, conducted interviews, performed physical assessments, and reviewed medical records. Together with the critical care team and bypass personnel, a comprehensive assessment was conducted using Gordon's Eleven Functional Health Patterns to identify the patient's health issues, which included pain related to lung cancer and invasive devices, fear of death due to sudden deterioration, and altered tissue perfusion related to hemodynamic instability. Results: The patient was admitted with fever, back pain, and painful urination. During hospitalization, the patient experienced sudden discomfort followed by cardiac arrest, requiring multiple CPR attempts and ECMO placement. A subsequent CT angiogram revealed a pulmonary embolism. The patient's condition was further complicated by severe pain due to compression fractures, and a diagnosis of terminal lung cancer was unexpectedly confirmed, leading to emotional distress and uncertainty about future treatment. During the critical care process, ECMO was removed on October 24; the patient's body temperature stabilized between 36.5 and 37°C, and the mean arterial pressure was maintained at 60-80 mmHg. Pain management, including Morphine 8 mg in 0.9% N/S 100 ml IV drip q6h PRN and Ultracet 37.5 mg/325 mg 1# PO q6h, kept the pain level below 3. The patient was transferred to the ward on October 27 and discharged home on October 30. Conclusion: During the care period, collaboration with the medical team and palliative care professionals was crucial. Adjustments to pain medication, symptom management, and lung cancer-targeted therapy improved the patient's physical discomfort and pain levels. By applying the unique functions of nursing and the four principles of palliative care, positive encouragement was provided. Family members, along with social workers, clergy, psychologists, and nutritionists, participated in cross-disciplinary care, alleviating anxiety and fear. The consensus to withdraw ECMO and life-sustaining equipment enabled the patient and family to receive high-quality care and maintain autonomy in decision-making. A follow-up call on November 1 confirmed that the patient was emotionally stable, pain-free, and continuing with targeted lung cancer therapy.
Keywords: intensive care, lung cancer, pulmonary embolism, ECMO
Procedia PDF Downloads 27
181 Application of Typha domingensis Pers. in Artificial Floating for Sewage Treatment
Authors: Tatiane Benvenuti, Fernando Hamerski, Alexandre Giacobbo, Andrea M. Bernardes, Marco A. S. Rodrigues
Abstract:
Population growth in urban areas has caused damage to the environment, a consequence of the uncontrolled dumping of domestic and industrial wastewater. The capacity of some plants to purify domestic and agricultural wastewater has been demonstrated by several studies. Since natural wetlands have the ability to transform, retain, and remove nutrients, constructed wetlands have been used for wastewater treatment. They are widely recognized as an economical, efficient, and environmentally acceptable means of treating many different types of wastewater. The T. domingensis Pers. species has shown good performance and low deployment cost in extracting, detoxifying, and sequestering pollutants. Constructed Floating Wetlands (CFWs) consist of emergent vegetation established upon a buoyant structure, floating on surface waters. The upper parts of the vegetation grow and remain primarily above the water level, while the roots extend down into the water column, developing an extensive underwater root system. Thus, the vegetation grows hydroponically, performing direct nutrient uptake from the water column. Biofilm attaches to the roots and rhizomes, and as physical and biochemical processes take place, the system functions as a natural filter. The aim of this study is to assess the application of macrophytes in artificial floating systems for the treatment of domestic sewage in southern Brazil. The T. domingensis Pers. plants were placed in a full-scale flotation system (a polymer structure) in a sewage treatment plant. The sewage feed rate was 67.4 m³.d⁻¹ ± 8.0, and the hydraulic retention time was 11.5 d ± 1.3. This CFW treats the sewage generated by 600 inhabitants, which corresponds to 12% of the population served by this municipal treatment plant. Over 12 months, samples were collected every two weeks in order to evaluate parameters such as chemical oxygen demand (COD), biochemical oxygen demand in 5 days (BOD5), total Kjeldahl nitrogen (TKN), total phosphorus, total solids, and metals. The average removal of organic matter was around 55% for both COD and BOD5. For nutrients, TKN was reduced by 45.9%, which was similar to the total phosphorus removal, while for total solids the reduction was 33%. Among the metals, aluminum, copper, and cadmium, though present at low concentrations, showed the highest percentage reductions: 82.7, 74.4, and 68.8%, respectively. Chromium, iron, and manganese removals achieved values around 40-55%. The use of T. domingensis Pers. in artificial floating systems for sewage treatment is an effective and innovative alternative for Brazilian sewage treatment systems. The evaluation of additional parameters in the treatment system may give useful information in order to improve the removal efficiency and increase the quality of the receiving water bodies.
Keywords: constructed wetland, floating system, sewage treatment, Typha domingensis Pers.
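The removal percentages above follow the standard influent/effluent arithmetic, removal (%) = (C_in - C_out) / C_in x 100; a minimal sketch follows, with placeholder concentrations (not measured values from the study) chosen so the output lands near the ~55% COD removal reported.

```python
# Percent-removal arithmetic for a treatment system; concentrations below
# are hypothetical placeholders, not data from the study.
def percent_removal(c_in: float, c_out: float) -> float:
    """Percent removal of a parameter between inflow and outflow."""
    return (c_in - c_out) / c_in * 100.0

cod_in, cod_out = 400.0, 180.0  # mg/L, assumed influent/effluent COD
print(f"COD removal: {percent_removal(cod_in, cod_out):.1f}%")  # 55.0%
```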
Procedia PDF Downloads 210
180 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. This study leverages hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants like polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE), and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
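As an illustration of the classification pipeline (not the study's code or data), the sketch below trains one of the evaluated model types, a logistic regression, on per-pixel spectra; the cube dimensions and random stand-in spectra are assumptions, so the printed accuracy is meaningless except as a demonstration of the pipeline shape.

```python
# Sketch of a per-pixel HSI classification pipeline with scikit-learn.
# Random arrays stand in for reflectance spectra; labels 0-6 stand in for
# PET, PP, LDPE, HDPE, PBAT, PLA, PHA. All shapes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 600, 224            # hypothetical cube dimensions
X = rng.random((n_pixels, n_bands))     # stand-in reflectance spectra
y = rng.integers(0, 7, n_pixels)        # stand-in plastic-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```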
Procedia PDF Downloads 79
179 Teaching Young Children Social and Emotional Learning through Shared Book Reading: Project GROW
Authors: Stephanie Al Otaiba, Kyle Roberts
Abstract:
Background and Significance: Globally, far too many students read below grade level; thus, improving literacy outcomes is vital. Research suggests that non-cognitive factors, including Social and Emotional Learning (SEL), are linked to success in literacy outcomes. Converging evidence exists that early interventions are more effective than later remediation; therefore, teachers need strategies to support early literacy while developing students' SEL and their vocabulary, or language, for learning. This presentation describes findings from a US federally funded project that trained teachers to provide an evidence-based read-aloud program for young children, using commercially available books with multicultural characters and themes to help their students "GROW". The five GROW SEL themes include: "I can name my feelings", "I can learn from my mistakes", "I can persist", "I can be kind to myself and others", and "I can work toward and achieve goals". Examples of GROW vocabulary (from over 100 words taught across the 5 units) include: emotions, improve, resilient, cooperate, accomplish, responsible, compassion, adapt, achieve, analyze. Methodology: This study used a mixed methods research design, with qualitative methods to describe data from teacher feedback surveys (regarding satisfaction and feasibility) and observations of fidelity of implementation, and with quantitative methods to assess the effect sizes for student vocabulary growth. GROW Intervention and Teacher Training Procedures: Researchers trained classroom teachers to implement GROW. Each thematic unit included four books, vocabulary cards with images of the vocabulary words, and scripted lessons. Teacher training included online and in-person training; researchers incorporated virtual reality videos of instructors with child avatars to model lessons. Classroom teachers provided 2-3 20-minute lessons per week, ranging from short-term (8 weeks) to longer-term trials of up to 16 weeks. Setting and Participants: The setting for the study included two large urban charter schools in the South. Data were collected across two years; during the first year, participants included 7 kindergarten teachers and 108 students, and the second year involved an additional set of 5 kindergarten and first grade teachers and 65 students. Initial Findings: The initial qualitative findings indicate teachers reported the lessons to be feasible to implement, and they reported that students enjoyed the books. Teachers found the vocabulary words to be challenging and important. They were able to implement lessons with fidelity. Quantitative analyses of growth for each taught word suggest that students' growth on taught words ranged from large (ES = .75) to small (<.20). Researchers will contrast the effects for more and less successful books within the GROW units. Discussion and Conclusion: It is feasible for teachers of young students to effectively teach SEL vocabulary and themes during shared book reading. Teachers and students enjoyed the books, and students demonstrated growth on taught vocabulary. Researchers will discuss implications of the study and of the GROW program for researchers in learning sciences, describe some limitations of research designs that are inherent in school-based research partnerships, and provide some suggested directions for future research and practice.
Keywords: early literacy, learning science, language and vocabulary, social and emotional learning, multi-cultural
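The effect sizes quoted above (ES = .75 to <.20) are standardized growth measures; one common estimator for pre/post growth on a taught word is Cohen's d, sketched below with invented scores. Whether the project used this exact estimator is an assumption.

```python
# Cohen's d for pre/post vocabulary scores (one common effect-size estimator;
# the score arrays are invented examples, not project data).
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Standardized mean growth using the pooled standard deviation."""
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

pre = np.array([1, 0, 2, 1, 1, 0], dtype=float)   # hypothetical pre-test
post = np.array([2, 1, 3, 2, 2, 1], dtype=float)  # hypothetical post-test
print(f"d = {cohens_d(pre, post):.2f}")
```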
Procedia PDF Downloads 43
178 Breaching Treaty Obligations of the Rome Statute of the International Criminal Court: The Case of South Africa
Authors: David Abrahams
Abstract:
In October 2016, South Africa deposited its 'instrument of withdrawal' from the Rome Statute of the International Criminal Court with the Secretary-General of the United Nations. The Rome Statute is the founding document of the treaty-based International Criminal Court (ICC). The ICC has jurisdiction to hear cases where crimes against humanity, war crimes, and genocide have been committed, on the basis of individual criminal responsibility. It is therefore not surprising that one of the ICC's mandates is to ensure that the suffering caused by gross human rights violations against the civilian population is, in principle, brought to an end by punishing the individuals responsible, thus providing justice to the victims. The ICC is unable to fulfill its mandate effectively on its own and thus depends, in part, on the willingness of states to assist the Court in its functions. This requires states to ratify the Statute and to domesticate its provisions, depending on whether the state is monist or dualist. South Africa ratified the Statute in November 2000 and domesticated it in 2002 by virtue of the Implementation of the Rome Statute of the International Criminal Court Act 27 of 2002. South Africa thus remains under an obligation to cooperate with the ICC until the final date of withdrawal, which is October 2017. An AU Summit was hosted by South Africa during June 2015. Omar Al-Bashir, whom the Prosecutor of the ICC had indicted on two separate occasions, was invited to the summit. South Africa made an agreement with the AU that it would honour its obligations in terms of its Diplomatic Immunities and Privileges Act of 2001 by granting immunity to all heads of state, including that of Sudan. This decision by South Africa has raised a plethora of questions regarding the status and hierarchy of international laws versus regional laws versus domestic laws. In particular, this paper explores whether a state's international law treaty obligations may be suspended in favour of, firstly, regional peace (thus safeguarding the security of the civilian population against further atrocities and other gross violations of human rights) and, secondly, head of state immunity. This paper also reflects on the effectiveness of the trias politica in South Africa in relation to the manner in which South African courts have confirmed South Africa's failure to fulfill its obligations in terms of the Rome Statute. A secondary question which will also be explored is whether the Rome Statute is currently an effective tool in dealing with gross violations of human rights, particularly in a regional African context, given the desire of a number of African states currently party to the Statute to engage in a mass exodus from it. Finally, the paper concludes with a proposal that there can be no justice for victims of gross human rights violations unless states are serious about playing an instrumental role in bringing an end to impunity in Africa, and that withdrawing from the ICC without an alternative, effective system in place will simply perpetuate impunity.
Keywords: African Union, diplomatic immunity, impunity, International Criminal Court, South Africa
Procedia PDF Downloads 529
177 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated, comprising a variety of different topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), and in this way gradually refines the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution. Otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
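To illustrate the shape of such a model (a toy sketch, not the authors' MIPSYN formulations), the example below combines one continuous sizing variable, one binary topology decision, a nonlinear cost objective, and a capacity-style constraint, and solves it with Pyomo's MindtPy tool using an outer-approximation strategy; it assumes the glpk and ipopt solvers are installed.

```python
# Toy MINLP: minimize a nonlinear cost subject to a capacity constraint.
# The coefficients are invented; MindtPy's OA strategy alternates MILP
# master problems and NLP subproblems, in the spirit of the OA/ER scheme.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint, Binary,
                           NonNegativeReals, minimize, SolverFactory)

m = ConcreteModel()
m.x = Var(within=NonNegativeReals, bounds=(0.1, 10))  # element dimension
m.y = Var(within=Binary)                              # element used or not

# Nonlinear material cost plus a fixed cost incurred only if the element exists
m.cost = Objective(expr=3.0 * m.x**2 + 50.0 * m.y, sense=minimize)

# The element, if present, must provide the required capacity
m.capacity = Constraint(expr=20.0 * m.x * m.y >= 15.0)

SolverFactory('mindtpy').solve(m, strategy='OA',
                               mip_solver='glpk', nlp_solver='ipopt')
print("x =", m.x.value, "y =", m.y.value)
```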
Procedia PDF Downloads 46
176 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
The purposes of a hydraulic gate are to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and hydropower plants. High-tensile-strength steel plate is used as the constructional material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation, and immersion in water, lead to stress concentration and high crack growth rates in gates, affecting the safety and serviceability of the hydroelectric power plant. Stress distribution analysis is a very important and essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of welding seams and steel plates with cracks. In particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates are three-dimensional. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of steel plates with cracks at weld joints. The digital image correlation technique is a non-contact method for measuring the deformation of a test object. With the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, the DIC method offers the advantages of wide practical applicability in both indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor the crack variations of a welded steel hydraulic gate and its deformation under loading. Images can be captured during real-time monitoring to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of a welded steel plate with a crack. The stress intensity of three-dimensional analyses of different materials and reinforced materials in the steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of welded steel plates at different loading stages. In particular, the proposed DIC method can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. The proposed three-dimensional DIC method can therefore be applied to observe the mechanical phenomena of composite materials subjected to loading and operation.
Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate
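The core matching step behind DIC can be sketched as locating a reference subset in a deformed image by maximizing zero-normalized cross-correlation (ZNCC); a real 3D-DIC system adds stereo calibration and subpixel interpolation, so the 2D integer-pixel sketch below (with synthetic images) only illustrates the principle.

```python
# Integer-pixel DIC matching via zero-normalized cross-correlation (ZNCC).
# Synthetic random images stand in for reference/deformed camera frames.
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(ref_subset: np.ndarray, search: np.ndarray):
    """Subset location in the search image that maximizes ZNCC."""
    h, w = ref_subset.shape
    best = (-2.0, (0, 0))
    for i in range(search.shape[0] - h + 1):
        for j in range(search.shape[1] - w + 1):
            best = max(best, (zncc(ref_subset, search[i:i + h, j:j + w]), (i, j)))
    return best  # (correlation, (row, col))

rng = np.random.default_rng(1)
img = rng.random((60, 60))
print(best_match(img[20:35, 20:35], img))  # recovers the offset (20, 20)
```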
Procedia PDF Downloads 520
175 Indeterminacy: An Urban Design Tool to Measure Resilience to Climate Change, a Caribbean Case Study
Authors: Tapan Kumar Dhar
Abstract:
How well are our city forms designed to adapt to climate change and its resulting uncertainty? What urban design tools can be used to measure and improve resilience to climate change, and how would they do so? In addressing these questions, this paper considers indeterminacy, a concept that originated in the resilience literature, to measure the resilience of built environments. In the realm of urban design, 'indeterminacy' refers to the built-in design capabilities of an urban system to serve different purposes which are not necessarily predetermined. An urban system, particularly one with a higher degree of indeterminacy, can be reorganized and changed to accommodate new or unknown functions while coping with uncertainty over time. Underlying principles of this concept have long been discussed in the urban design and planning literature, including open architecture, landscape urbanism, and flexible housing. This paper argues that the concept of indeterminacy holds the potential to reduce the impacts of climate change incrementally and proactively. With regard to sustainable development, both the planning and climate change literature highly recommend proactive adaptation, as it involves less cost, effort, and energy than last-minute emergency or reactive actions. Nevertheless, the concept still remains isolated from resilience and climate change adaptation discourses, even though these discourses advocate the incremental transformation of a system to cope with climatic uncertainty. This paper considers indeterminacy, as an urban design tool, to measure and increase the resilience (and adaptive capacity) of Long Bay's coastal settlements in Negril, Jamaica. Negril is one of the most popular tourism destinations in the Caribbean and is highly vulnerable to sea-level rise and its associated impacts. This paper employs empirical information obtained from direct observation and informal interviews with local people. While testing the tool, this paper deploys an urban morphology study, which includes land use patterns and the physical characteristics of urban form, including street networks, block patterns, and building footprints. The results reveal that most resorts in Long Bay are designed for predetermined purposes and offer little potential to be used differently if needed. Additionally, Negril's street networks are found to be rigid and to have limited accessibility to different points of interest. This rigidity can further expose the entire infrastructure to extreme climatic events and also impedes recovery actions after a disaster. However, Long Bay still has room for future resilient developments in other, relatively less vulnerable areas. In adapting to climate change, indeterminacy can be reached through design that achieves a balance between the degree of vulnerability and the degree of indeterminacy: the more vulnerable a place is, the more indeterminacy is useful. This paper concludes with a set of urban design typologies to increase the resilience of coastal settlements.
Keywords: climate change adaptation, resilience, sea-level rise, urban form
Procedia PDF Downloads 365
174 Engineered Control of Bacterial Cell-to-Cell Signaling Using Cyclodextrin
Authors: Yuriko Takayama, Norihiro Kato
Abstract:
Quorum sensing (QS) is a cell-to-cell communication system in bacteria that regulates the expression of target genes. In gram-negative bacteria, activation of QS is controlled by an increase in the concentration of N-acylhomoserine lactone (AHL), which can diffuse in and out of the cell. Effective control of QS is expected to prevent virulence factor production in infectious pathogens, biofilm formation, and antibiotic production, because various cell functions in gram-negative bacteria are controlled by AHL-mediated QS. In this research, we applied cyclodextrins (CDs) as artificial hosts for the AHL signal in order to reduce the AHL concentration in the culture broth below its threshold for QS activation. The AHL-receptor complex, induced at high AHL concentrations, activates transcription of the QS target genes. Accordingly, artificial reduction of the AHL concentration is an effective strategy for inhibiting QS. The hydrophobic cavity of a CD can interact with the acyl chain of the AHL through hydrophobic interaction in aqueous media. We studied N-hexanoylhomoserine lactone (C6HSL)-mediated QS in Serratia marcescens; accumulation of C6HSL is responsible for regulating the expression of the pig cluster. The inhibitory effects of added CDs on QS were demonstrated by determining the amount of prodigiosin inside cells after reaching the stationary phase, because production of prodigiosin depends on C6HSL-mediated QS. By adding approximately 6 wt% hydroxypropyl-β-CD (HP-β-CD) to the Luria-Bertani (LB) medium prior to inoculation of S. marcescens AS-1, the intracellularly accumulated prodigiosin, determined after extraction in acidified ethanol, was drastically reduced to 7-10%. The AHL retention ability of HP-β-CD was also demonstrated by a Chromobacterium violaceum CV026 bioassay. The CV026 strain is an AHL-synthase-deficient mutant that activates QS solely upon the addition of AHLs from outside the cells. A purple pigment, violacein, is induced by activation of the AHL-mediated QS. We demonstrated that violacein production was effectively suppressed when the C6HSL standard solution was spotted on an LB agar plate dispersing CV026 cells and HP-β-CD. Physico-chemical analysis was performed to study the affinity between immobilized CD and added C6HSL using a quartz crystal microbalance (QCM) sensor. A COOH-terminated self-assembled monolayer was prepared on the gold electrode of a 27-MHz AT-cut quartz crystal. Mono(6-deoxy-6-N,N-diethylamino)-β-CD was immobilized on the electrode using water-soluble carbodiimide. The C6HSL interaction with the β-CD cavity was studied by injecting the C6HSL solution into a cup-type sensor cell filled with buffer solution. A decrease in resonant frequency (ΔFs) clearly showed effective C6HSL complexation with the immobilized β-CD, and the stability constant for the MBP-SpnR-C6HSL complex was on the order of 10² M⁻¹. The CD has high potential for the engineered control of QS because it is safe for human use.
Keywords: acylhomoserine lactone, cyclodextrin, intracellular signaling, quorum sensing
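As a back-of-envelope illustration of what a stability constant on the order of 10² M⁻¹ implies, the sketch below treats it as a 1:1 host-guest binding constant and solves the binding equilibrium for the complexed fraction of C6HSL; the concentrations are rough assumptions, not values from the study.

```python
# Fraction of guest (C6HSL) bound in a 1:1 host-guest equilibrium,
# K = [HG] / ([H][G]). Concentrations are illustrative assumptions.
import math

def fraction_bound(K: float, host0: float, guest0: float) -> float:
    """Solve the 1:1 binding quadratic and return the bound fraction of guest."""
    b = host0 + guest0 + 1.0 / K
    hg = (b - math.sqrt(b * b - 4.0 * host0 * guest0)) / 2.0  # physical root
    return hg / guest0

K = 2.0e2        # M^-1, order of magnitude reported above
host0 = 5.0e-2   # M, roughly 6 wt% HP-beta-CD in the medium (assumption)
guest0 = 1.0e-6  # M, a trace AHL level (assumption)
print(f"fraction of C6HSL bound: {fraction_bound(K, host0, guest0):.2f}")
```

With the host in large excess, the bound fraction reduces to K[H]₀ / (1 + K[H]₀), about 0.9 here, consistent with the idea that a few weight percent of CD can sequester most of the free AHL.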
Procedia PDF Downloads 238
173 Impact of Ecosystem Engineers on Soil Structuration in a Restored Floodplain in Switzerland
Authors: Andreas Schomburg, Claire Le Bayon, Claire Guenat, Philip Brunner
Abstract:
Numerous river restoration projects have been established in Switzerland in recent years, after decades of human activity in floodplains. The success of restoration projects in terms of biodiversity and ecosystem functions largely depends on the development of the floodplain soil system. Plants and earthworms, as ecosystem engineers, are known to be able to build up a stable soil structure by incorporating soil organic matter into the soil matrix, which creates water-stable soil aggregates. Their engineering efficiency, however, largely depends on changing soil properties and frequent floods along an evolving floodplain transect. This study therefore aims to quantify the effect of flood frequency and duration, as well as of physico-chemical soil parameters, on plants' and earthworms' engineering efficiency. It is furthermore predicted that these influences may affect the two engineers differently, leading to varying contributions to aggregate formation along the floodplain transect. Ecosystem engineers were sampled and described in three different floodplain habitats, differentiated according to the evolutionary stages of the vegetation, ranging from pioneer to forest vegetation, in a floodplain restored 15 years ago. In addition, the same analyses were performed in an embanked adjacent pasture as a reference for the pre-restored state. Soil aggregates were collected and analyzed for their organic matter quantity and quality using Rock-Eval pyrolysis. Water level and discharge measurements dating back to 2008 were used to quantify the return period of major floods. Our results show an increasing amount of water-stable aggregates in the soil with increasing distance from the river, with the largest values in the reference site. Decreasing flood frequency and the proportion of silt and clay in the soil texture explain these findings, according to F values from a one-way ANOVA of a fitted mixed-effects model. Significantly larger amounts of labile organic matter signatures were found in soil aggregates in the forest habitat and in the reference site, which indicates a larger contribution of plants to soil aggregation in these habitats compared to the pioneer vegetation zone. Earthworms' contribution to soil aggregation does not show significant differences along the floodplain transect, but their effect could be identified even in the pioneer vegetation, with its large proportion of coarse sand in the soil texture and frequent inundations. These findings indicate that ecosystem engineers seem to be able to create soil aggregates even under unfavorable soil conditions and frequent floods. Restoration success can therefore be expected even in ecosystems with harsh soil properties and frequent external disturbances.
Keywords: ecosystem engineers, flood frequency, floodplains, river restoration, Rock-Eval pyrolysis, soil organic matter incorporation, soil structuration
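The statistical step described above can be sketched with a random-intercept mixed model; the study reports F values from a one-way ANOVA of the fitted model, while the minimal sketch below (on simulated data with assumed variable names) simply fits an analogous model and prints the coefficient tests.

```python
# Random-intercept mixed model of water-stable aggregates (wsa) on flood
# return period and fine-texture fraction, grouped by habitat. Data are
# simulated; variable names are assumptions mirroring the study's factors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "flood_return": rng.uniform(0.5, 10, n),  # years between major floods
    "silt_clay": rng.uniform(10, 60, n),      # % fine particles
    "habitat": rng.choice(["pioneer", "forest", "reference"], n),
})
df["wsa"] = 5 + 2.0 * df["flood_return"] + 0.5 * df["silt_clay"] + rng.normal(0, 3, n)

model = smf.mixedlm("wsa ~ flood_return + silt_clay", data=df,
                    groups=df["habitat"]).fit()
print(model.summary())  # fixed-effect estimates and significance tests
```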
Procedia PDF Downloads 269
172 Assessing the High Rate of Deforestation Caused by the Operations of Timber Industries in Ghana
Authors: Obed Asamoah
Abstract:
Forests are vital for human survival and well-being. Over the past decades, human activity has played an increasingly significant role in the modification of the global environment. The high rate of deforestation in Ghana is of primary national concern, as the forests provide many ecosystem services and functions that support the country's predominantly agrarian economy and foreign earnings. Ghana's forests are currently a major carbon sink that helps to mitigate climate change. Ghana's forests, both reserves and off-reserves, are under pressure of deforestation. The causes of deforestation are varied but can broadly be categorized into anthropogenic and natural factors. Among the anthropogenic factors, increased wood fuel collection, clearing of forests for agriculture, illegal and poorly regulated timber extraction, social and environmental conflicts, and increasing urbanization and industrialization are the primary known causes of the loss of forests and woodlands. Mineral exploitation in forest areas is considered one of the major causes of deforestation in Ghana. Mining activities, especially the mining of gold both by licensed mining companies and by illegal mining groups locally known as "galamsey", also damage the nation's forest reserves. Several works have been conducted on the causes of the high rate of deforestation in Ghana, with major attention placed on illegal logging and the use of forest lands for illegal farming and mining activities. Less emphasis has been placed on the timber production companies, their harvesting methods in the forests of Ghana, and the other activities they carry out in the forest. The main objective of this work is to examine the harvesting methods and activities of the timber production companies and their effects on the forests of Ghana. Both qualitative and quantitative research methods were employed. The study population comprised 20 timber companies (sawmills) in forest areas of Ghana, selected randomly. The cluster sampling technique was used to select the respondents. Both primary and secondary data were employed. It was observed that most of the timber production companies do not know the age, the weight, or the distance covered from the harvesting site to the loading site in the forest. It was also observed that the timber production companies use old and heavy machines in their forest operations, which compact the soil, prevent regeneration, and enhance soil erosion. Moreover, the timber production companies do not abide by the rules and regulations governing their operations in the forest. The high rate of corruption among officials of the Ghana Forestry Commission means that officials relax and do not properly monitor the operations of the timber production companies, allowing these companies to cause more harm to the forest. To curb this situation, the Ghana Forestry Commission, together with the Ministry of Lands and Natural Resources, should monitor the activities of the timber production companies and sanction all companies that violate the rules in their forest activities. The Commission should also pay more attention to the policy of "fell one, plant 10" to enhance regeneration in both reserve and off-reserve forests.
Keywords: companies, deforestation, forest, Ghana, timber
Procedia PDF Downloads 198
171 Management of Dysphagia after Supraglottic Laryngectomy
Authors: Premalatha B. S., Shenoy A. M.
Abstract:
Background: Rehabilitation of swallowing is as vital as speech in surgically treated head and neck cancer patients, to maintain nutritional support, enhance wound healing, and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and its rehabilitation is crucial; it requires the involvement of the speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomy. Materials: Thirty-nine supraglottic laryngectomees participated in the study. Of them, 36 were males and 3 were females, in the age range of 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 had undergone extended supraglottic laryngectomy (Group 2) for base-of-tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation to the anatomy and the physiological variation before and after surgery, tailor-made for each individual based on the type and extent of surgery. A supraglottic diet - soft solids with the supraglottic swallow method - was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies, and also as the percentage of subjects who achieved satisfactory swallow in terms of number of weeks, in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage (%) of subjects who swallowed satisfactorily within time frames ranging from less than 3 weeks to more than 6 weeks, and 2) the number of sessions taken to swallow each food consistency without aspiration. The study indicated that in Group 1 (standard supraglottic laryngectomy), 61% (n=11) were successfully rehabilitated, but their swallowing normalcy was delayed to an average of the 29th postoperative day (3-6 weeks). Thirty-three percent (33%, n=6) could swallow satisfactorily without aspiration even before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. In Group 2 (extended supraglottic laryngectomy), only 47% (n=10) achieved satisfactory swallow by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) continued to be supplemented with nasogastric feeding even 8-10 months postoperatively, as they exhibited severe aspiration. As far as food consistencies were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects. Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.
Keywords: dysphagia management, supraglottic diet, supraglottic laryngectomy, supraglottic swallow
Procedia PDF Downloads 231
170 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands
Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen
Abstract:
Halal is a dietary law observed by people following the Islamic faith. It is considered a type of credence food quality, which cannot easily be assured by consumers even upon and after consumption. Therefore, Halal certification serves as a practical tool for consumers to make an informed choice, particularly in a non-Muslim-majority country, including the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities. Furthermore, the willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected: slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of the willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consumed Halal beef completed the survey: Dutch (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of the respondents (95%) were between 18 and 45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). The majority of the respondents (76%) had a disposable monthly income of less than €2,500, while the rest earned more than €2,500. The respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, with 62% of the respondents considering themselves not knowledgeable. The findings indicated that slaughter method was the most important attribute, followed by Halal certification, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for slaughter method. Both Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute. However, non-Dutch respondents valued it with higher importance (0.20) than their Dutch counterparts (0.16). For non-Dutch respondents, price was more important than Halal certification. The ideal product, i.e., the product serving the highest utilities for consumers, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at €2.75 per 500 g. In general, an official Halal certifier was mostly preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTP values of -0.73 €, -0.93 €, and -1.03 € for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with price. WTP estimates differ across sociodemographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information might cause respondents to perceive it as the least important attribute. In the context of Halal-certified meat, adding traceability information to the meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example, the use of pre-slaughter stunning; and second, it helps to assure the meat's safety. Therefore, integrating traceability information into meat packaging can help consumers make informed decisions on both Halal status and food safety.
Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay
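The WTP figures above come from the standard DCE calculation: with a linear utility in price, the marginal WTP for an attribute level is the negative ratio of its coefficient to the price coefficient. The sketch below uses invented coefficients chosen only so the ratios land near the reported values; they are not the study's estimates.

```python
# Marginal WTP from conditional-logit coefficients: WTP = -beta_attr / beta_price.
# All coefficients below are invented for illustration.
def wtp(beta_attribute: float, beta_price: float) -> float:
    """Marginal willingness to pay for one attribute level, in EUR."""
    return -beta_attribute / beta_price

beta_price = -0.8                       # hypothetical price coefficient
betas = {
    "small certifier": -0.58,           # hypothetical level coefficients
    "official certifier": -0.74,
    "international certifier": -0.82,
}
for level, b in betas.items():
    print(f"WTP for {level}: {wtp(b, beta_price):+.2f} EUR")
```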
Procedia PDF Downloads 128
169 Marketing and Business Intelligence and Their Impact on Products and Services Through Understanding Based on Experiential Knowledge of Customers in Telecommunications Companies
Authors: Ali R. Alshawawreh, Francisco Liébana-Cabanillas, Francisco J. Blanco-Encomienda
Abstract:
Collaboration between marketing and business intelligence (BI) is crucial in today's ever-evolving business landscape. These two domains play pivotal roles in molding customers' experiential knowledge. Marketing insights offer valuable information regarding customer needs, preferences, and behaviors, while BI facilitates data-driven decision-making, leading to heightened operational efficiency, product quality, and customer satisfaction. Customer experiential knowledge (CEK) encompasses customers' implicit comprehension of consumption experiences, influenced by diverse factors including social and cultural influences. This study focuses on telecommunications companies in Jordan, scrutinizing how customer experiential knowledge mediates the relationship between marketing and business intelligence and product and service innovation. Drawing on theoretical frameworks such as the resource-based view (RBV) and service-dominant logic (SDL), the research aims to comprehend how organizations utilize their resources, particularly knowledge, to foster innovation. Employing a quantitative research approach, the study collected and analyzed primary data to test the hypotheses. Structural equation modeling (SEM), facilitated by SmartPLS software, evaluated the relationships between the constructs, followed by a mediation analysis to assess the indirect associations in the model. The findings offer insights into the intricate dynamics of organizational innovation, uncovering the interconnected relationships between business intelligence, customer experiential knowledge-driven innovation (CEK-DI), marketing intelligence (MI), and product and service innovation (PSI), and underscoring the pivotal role of advanced intelligence capabilities in developing innovative practices rooted in a profound understanding of customer experiences. The positive impact of BI on PSI reaffirms the significance of data-driven decision-making in shaping the innovation landscape. The significant impact of CEK-DI on PSI highlights the critical role of customer experiences in driving organizational innovation: companies that actively integrate customer insights into their innovation processes are more likely to create offerings that match customer expectations, which drives higher levels of product and service innovation. Additionally, the positive and significant impact of MI on CEK-DI underscores the critical role of market insights in shaping innovation strategies. While the relationship between MI and PSI is positive, the slightly weaker significance level indicates a subtle association, suggesting that while MI contributes to the generation of innovative ideas, its influence on PSI may operate largely indirectly through customer experiential knowledge. In conclusion, the study emphasizes the fundamental role of intelligence capabilities, especially artificial intelligence, and the need for organizations to leverage market and customer intelligence to achieve effective and competitive innovation practices. Collaborative efforts between marketing and business intelligence serve as pivotal drivers of innovation, influencing customer experiential knowledge and shaping organizational strategies and practices. Future research could adopt longitudinal designs and gather data from various sectors to offer broader insights. The study focuses on the effects of marketing intelligence, business intelligence, customer experiential knowledge, and innovation, but other unexamined variables may also influence innovation processes.
Future studies could investigate additional factors, mediators, or moderators, including the role of emerging technologies such as AI and machine learning in driving innovation. Keywords: marketing intelligence, business intelligence, product, customer experiential knowledge-driven innovation
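As an illustrative sketch of the mediation logic described above (not the authors' SmartPLS pipeline), the indirect effect MI -> CEK-DI -> PSI can be bootstrapped from simulated standardized scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated standardized construct scores, for illustration only.
MI = rng.normal(size=n)
CEK_DI = 0.5 * MI + rng.normal(scale=0.8, size=n)
PSI = 0.4 * CEK_DI + 0.1 * MI + rng.normal(scale=0.8, size=n)

def ols_slope(x, y):
    """Slope of y on x (with intercept), via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(mi, cek, psi):
    # a-path: MI -> CEK_DI; b-path: CEK_DI -> PSI controlling for MI
    a = ols_slope(mi, cek)
    X = np.column_stack([np.ones(len(mi)), mi, cek])
    b = np.linalg.lstsq(X, psi, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect (mediated) effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(MI[idx], CEK_DI[idx], PSI[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {np.mean(boot):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```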
Procedia PDF Downloads 32
168 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is therefore of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is among the most effective of these techniques and has high potential in the study of biological solutions and their properties: it allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the total internal reflection regime allows one to study biological fluids at the level of single molecules, which further increases the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids. To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN diode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated in the regime of Brownian motion and under the action of the field to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field strength. The results obtained in our study demonstrate the applicability of the method for the examination of liquid solutions, including biological fluids. Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
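A minimal sketch of the signal processing step described above, computing the autocorrelation function and power spectrum of a synthetic photodetector trace (all parameters are assumed for illustration, not the experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic photodetector trace: a Doppler beat from electrophoretic
# drift plus noise.
fs = 10_000                       # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_beat = 120.0                    # beat frequency, Hz (assumed)
x = np.cos(2 * np.pi * f_beat * t) + 0.5 * rng.standard_normal(t.size)
x -= x.mean()

# Normalized autocorrelation function (positive lags)
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

# Power spectrum via FFT; the spectral peak locates the Doppler shift,
# from which the drift velocity of the particles can be derived.
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
print(f"spectral peak at ~{freqs[1:][power[1:].argmax()]:.1f} Hz")
```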
Procedia PDF Downloads 214
167 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of a park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in general and descriptive terms. Research outsourced to specialists is expensive and time-consuming, and attention usually focuses on only a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results: the resulting facilities are not used as expected. Location methods are also widely studied in the scientific literature, but the mathematical models built often do not address the problem comprehensively, e.g., by assuming that the city is linear and developed along one major transport corridor. This paper presents a new method in which expert knowledge is encoded in a fuzzy inference model. With such a system, even less experienced users, e.g., urban planners and officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce P&R. An analysis of existing facilities is also presented and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research was executed using the fuzzy inference model built and described in more detail in an earlier paper by the authors. The results of the analyses are compared with the P&R location studies commissioned by the city and with the opinions of existing facilities' users expressed on social networking sites. The study of existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves effective and does not require the involvement of a large team of experts or large financial outlays. The method also provides an opportunity to propose alternative locations for P&R facilities. The studies performed confirm the method, which can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are no worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis, and its ability to analyze a large number of alternative locations, which gives a broader view of the problem. It is valuable that the arduous analysis of a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform. Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location
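A minimal sketch of the kind of fuzzy inference involved (the inputs, membership functions, and rules below are illustrative assumptions, not the authors' model):

```python
# Mamdani-style fuzzy scoring of one candidate P&R site.

def rising(x, a, b):
    # 0 below a, rising linearly to 1 at b and above
    return max(0.0, min(1.0, (x - a) / (b - a)))

def falling(x, a, b):
    # 1 below a, falling linearly to 0 at b and above
    return 1.0 - rising(x, a, b)

def site_suitability(dist_to_center_km, transit_freq_per_h):
    # Fuzzify the two inputs
    far = rising(dist_to_center_km, 4.0, 10.0)
    close = falling(dist_to_center_km, 2.0, 6.0)
    good_transit = rising(transit_freq_per_h, 4.0, 10.0)
    poor_transit = falling(transit_freq_per_h, 2.0, 6.0)

    # Rule base (min for AND, max for OR):
    # R1: far from center AND good transit -> high suitability (0.9)
    # R2: close to center OR poor transit  -> low suitability (0.2)
    r1 = min(far, good_transit)
    r2 = max(close, poor_transit)

    # Weighted-average defuzzification (zero-order Sugeno, for brevity)
    return (0.9 * r1 + 0.2 * r2) / (r1 + r2 + 1e-9)

print(f"suitability: {site_suitability(8.0, 12.0):.2f}")
```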
Procedia PDF Downloads 325
166 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. The study of cellular and molecular processes in cancer has also found valuable support in simulation tools that, covering a similar spectrum, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. We demonstrate the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way propose key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the transformation of the cells surrounding a cancerous cell. Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
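A minimal sketch of Gillespie's algorithm as used for such simulations, applied to a toy binding/unbinding reaction pair (the species, rates, and stoichiometry are illustrative, not Cellulat's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

state = np.array([100, 50, 0])            # [ligand, receptor, complex]
stoich = np.array([[-1, -1, +1],          # binding:   L + R -> C
                   [+1, +1, -1]])         # unbinding: C -> L + R
k = np.array([0.005, 0.1])                # rate constants (assumed)

t, t_end = 0.0, 10.0
while t < t_end:
    L, R, C = state
    propensities = np.array([k[0] * L * R, k[1] * C])
    a0 = propensities.sum()
    if a0 == 0:
        break                                   # no reaction can fire
    t += rng.exponential(1 / a0)                # time to next reaction
    j = rng.choice(2, p=propensities / a0)      # which reaction fires
    state = state + stoich[j]

print(f"t={t:.2f}: ligand={state[0]}, receptor={state[1]}, complex={state[2]}")
```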
Procedia PDF Downloads 249
165 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home
Authors: Laura M. F. Bertens
Abstract:
The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show: each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to gain a deeper understanding of the ways in which the home functions and feels like a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. The second case study concerns works by Do Ho Suh, which help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim used the history and memories of the house as the starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories: each new family 'resets' the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and of the ways in which art can represent these. Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace
Procedia PDF Downloads 159
164 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable, intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. The study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores; the memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, approximating the setting of human memorability experiments, in which memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction achieved by fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered for quantifying the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, one that could potentially impact industries reliant on visual content and marks a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory. Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
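A minimal sketch of the two image-level measures described above, reconstruction error and nearest-neighbor distinctiveness, correlated with memorability scores (all arrays below are random stand-ins for the autoencoder's actual inputs, reconstructions, latent codes, and MemCat scores):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
images = rng.random((n, 64 * 64 * 3))                               # flattened inputs
recons = np.clip(images + 0.1 * rng.standard_normal(images.shape), 0, 1)
latents = rng.standard_normal((n, 128))                             # latent codes
memorability = rng.random(n)                                        # placeholder scores

# Per-image reconstruction error (pixel-space MSE)
recon_error = ((images - recons) ** 2).mean(axis=1)

# Distinctiveness: Euclidean distance to the nearest neighbor in latent space
d = cdist(latents, latents)
np.fill_diagonal(d, np.inf)
distinctiveness = d.min(axis=1)

for name, x in [("reconstruction error", recon_error),
                ("distinctiveness", distinctiveness)]:
    rho, p = spearmanr(x, memorability)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3g})")
```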
Procedia PDF Downloads 90
163 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
Antenna design is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, many geometries remain to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained, yielding an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulation. HFSS was chosen for simulations and results. MATLAB was used to generate the computations and combinations and to log the data; it was also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and MATLAB's parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we used MATLAB to calculate Vivaldi antenna parameters such as slotline characteristic impedance, stripline impedance, slotline width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna. Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
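A minimal sketch of the generic genetic-algorithm loop described above (the parameter names and bounds are assumed, and the placeholder fitness stands in for an HFSS evaluation of each candidate):

```python
import numpy as np

rng = np.random.default_rng(0)

bounds = np.array([[0.1, 2.0],    # slotline width, mm (assumed)
                   [10.0, 60.0],  # flare aperture, mm (assumed)
                   [2.2, 10.2]])  # substrate dielectric constant (assumed)

def fitness(x):
    # Placeholder: rewards mid-range geometries; a real run would call
    # an EM solver such as HFSS here and score gain/bandwidth.
    mid = bounds.mean(axis=1)
    span = bounds[:, 1] - bounds[:, 0]
    return -np.sum(((x - mid) / span) ** 2)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
for gen in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]               # selection
    kids = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 20, 2)]
        child = np.where(rng.random(3) < 0.5, a, b)       # uniform crossover
        child += rng.normal(0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])  # mutation
        kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best candidate:", best.round(3))
```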
Procedia PDF Downloads 109
162 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, released in 2016 for use by researchers to identify the best method for detecting normal signals among abnormal ones. The data come from both genders, the recording time varies from several seconds to several minutes, and all data are labeled normal or abnormal. Because of the limited recording duration and the similarity of the ECG in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its recording time is limited and some of its information is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can serve as a complementary system in treatment centers. Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
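A minimal sketch of an HRV-feature classification pipeline of the kind described above, with simulated RR intervals in place of the PhysioNet recordings (the features, class simulation, and classifier settings are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def hrv_features(rr):
    """Time-domain and Poincare (return map) features from RR intervals (s)."""
    diff = np.diff(rr)
    sdnn = rr.std()                          # overall variability
    rmssd = np.sqrt((diff ** 2).mean())      # short-term variability
    sd1 = np.sqrt(0.5) * diff.std()          # Poincare minor axis
    sd2 = np.sqrt(max(2 * rr.var() - 0.5 * diff.var(), 1e-12))  # major axis
    return [sdnn, rmssd, sd1, sd2]

def simulate(n, scale):
    # Synthetic RR series around 0.8 s; `scale` controls variability.
    return np.array([hrv_features(0.8 + scale * rng.standard_normal(300))
                     for _ in range(n)])

X = np.vstack([simulate(100, 0.05), simulate(100, 0.02)])  # normal vs reduced HRV
y = np.array([0] * 100 + [1] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```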
Procedia PDF Downloads 262
161 Developing Geriatric Oral Health Network is a Public Health Necessity for Older Adults
Authors: Maryam Tabrizi, Shahrzad Aarup
Abstract:
Objectives: Understanding the close association between oral health and overall health in older adults, and delivering person-focused treatment at the right time and in the right place through Project ECHO telementoring. Methodology: Data from monthly ECHO telementoring sessions were collected over three years. Sessions included case presentations covering overall health conditions and considering medications, limitations of organ function, and level of cognition. Contributions: Bringing specialist-level care to all elderly people regardless of their location and other health conditions, and decreasing oral health inequity by increasing the workforce via the Project ECHO telementoring program worldwide. By 2030, the number of adults in the USA over the age of 65 will increase by more than 60% (approximately 46 million), and over 22 million (30%) of 74 million older Americans will need specialized geriatrician care. In 2025, the national shortage of medical geriatricians will be close to 27,000. Most individuals 65 and older do not receive oral health care because of a lack of access, availability, or affordability. One of the main reasons is a significant shortage of oral health (OH) education and resources for the elderly, particularly in rural areas. Poor OH carries a social stigma and is a threat to the quality and safety of the overall health of elderly people with physical and cognitive decline, and poor OH conditions may be costly and sometimes life-threatening. Non-traumatic dental-related emergency department use in Texas alone cost over $250M in 2016. Most adults over the age of 65 present with one or more chronic diseases, such as arthritis, diabetes, heart disease, and chronic obstructive pulmonary disease (COPD), and are at higher risk of developing gum (periodontal) disease, yet they are less likely to get dental care. In addition, most older adults take both prescription and over-the-counter drugs, and scientific studies show that many of these medications cause dry mouth. Reduced saliva flow due to aging and medications may increase the risk of cavities and other oral conditions. Most dental schools have already increased geriatric OH content in their curricula, but the aging population worldwide is growing faster than the supply of geriatric dentists. Without the use of advanced technology and the creation of a network between specialists and primary care providers, it is impossible to increase the workforce and provide equitable oral health care to the elderly. Project ECHO is a guided-practice model that revolutionizes health education and increases the workforce to provide best-practice specialty care and reduce health disparities. Training oral health providers to use the Project ECHO model is a logical response to the shortage and increases older adults' access to oral health care. Project ECHO trains general dentists and hygienists to provide specialty care services. This means more elderly people can get the care they need, in the right place and at the right time, with better treatment outcomes and reduced costs. Keywords: geriatric, oral health, Project ECHO, chronic disease
Procedia PDF Downloads 174
160 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis developed by biologists and scientists, and including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained for the three existing groups of ecological phenomena: interaction, social, and medical sciences. This hypothesized model nevertheless has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any change in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board for intra-chip and inter-chip transmission, producing electromagnetic energy of approximately 1.7 mA at 3.3 V to serve detection in locomotion with the least significant power losses. Therapeutic systems nowadays assimilate materials from embryonic stem cells to aggregate the multiple functions of the vessels' natural de-cellularized structure for replenishment. The interior actuators deploy the base-pair complementarity of nucleotides for symmetric arrangement, in particular in bacterial nanonetworks of the sequence cycle creating double-stranded DNA strings; the DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. The exterior actuators are designed to sense variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for the spatial autocorrelation of molecular communication, which draws on human electromagnetic, piezoelectric, electrostatic, and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy logic control for the detection of thermal and electrostatic changes and optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial molecular frictional properties adjust to each other and form unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters. Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system
Procedia PDF Downloads 124
159 How to “Eat” without Actually Eating: Marking Metaphor with Spanish Se and Italian Si
Authors: Cinzia Russi, Chiyo Nishida
Abstract:
Using data from online corpora (Spanish CREA, Italian CORIS), this paper examines the relatively understudied use of Spanish se and Italian si exemplified in (1) and (2), respectively. (1) El rojo es … el que se come a los demás. 'The red (bottle) is the one that outshines/*eats the rest.' (2) … ebbe anche la saggezza di mangiarsi tutto il suo patrimonio. '… he even had the wisdom to squander/*eat all his estate.' In these sentences, se/si accompanies the consumption verb comer/mangiare 'to eat', without which the sentences would not be interpreted appropriately. This se/si cannot readily be attributed to any of the multiple functions so far identified in the literature: reflexive, ergative, middle/passive, inherent, benefactive, and complete consumptive. In particular, this paper argues against the feasibility of a recent construction-based analysis of sentences like (1) and (2), which situates se/si within a prototype-based network of meanings all deriving from the central meaning of COMPLETE CONSUMPTION (e.g., Alice se comió toda la torta / Alice si è mangiata tutta la torta 'Alice ate the whole cake'). Clearly, the empirical adequacy of such an account is undermined by the fact that the events depicted in the se/si-sentences at issue do not always entail complete consumption, because they may lack an INCREMENTAL THEME, the distinguishing property of complete consumption. Alternatively, it is proposed that the sentences under analysis represent instances of verbal METAPHORICAL EXTENSION: se/si is an explicit marker of this cognitive process, which has developed independently from the complete consumptive se/si, and the meaning extension is captured by the general tenets of Conceptual Metaphor Theory (CMT). Two conceptual domains, source (DS) and target (DT), are related by similarity, assigning an appropriate metaphorical interpretation to DT. The domains paired here are comer/mangiare (DS) and comerse/mangiarsi (DT). The eating event (DS) involves (a) the physical process of x (the EATER) grinding y (the FOODSTUFF) into pieces and swallowing it, and (b) the aspect of x savoring y and being nurtured by it. In the physical act of eating, x has dominance and exercises force over y. This general sense of dominance and force is mapped onto DT and is manifested in the ways exemplified in (1) and (2), among many others. According to CMT, two further properties are observed in each pairing of DS and DT: first, DS tends to be more physical and concrete and DT more abstract; second, systematic mappings are established between constituent elements in DS and those in DT, so that x corresponds to the element that destroys and y to the element that is destroyed in DT, as exemplified in (1) and (2). Though the metaphorical extension marker se/si appears by far most frequently with comer/mangiare in the corpora, similar systematic mappings are observed in several other verb pairs, for example jugar/giocare 'to play (games)' and jugarse/giocarsi 'to jeopardize/risk (life, reputation, etc.)', or perder/perdere 'to lose (an object)' and perderse/perdersi 'to miss out on (an event)'. Thus, this study provides evidence that languages may indeed formally mark metaphor using the means available to them. Keywords: complete consumption value, conceptual metaphor, Italian si/Spanish se, metaphorical extension
Procedia PDF Downloads 53
158 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?
Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri
Abstract:
Among the types of carbon-carbon bond formation, the allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the pressing demand for an efficient and straightforward method of synthesizing the desired product, a thorough analysis of the various aspects of the reaction process is an important task. The product afforded by the reaction of an active methylene compound with a BH alcohol depends largely on the experimental conditions, notably on the catalyst properties. All experiments report that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a leaving group. Among the catalysts, several transition-metal-based systems have been used, such as palladium in the presence of acid or base, and these have been considered reliable methods. Furthermore, acid catalysts such as BF3·OEt2, BiX3 (X = Cl, Br, I, OTf), InCl3, Yb(OTf)3, FeCl3, p-TsOH, and H-montmorillonite have been employed to activate C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report recently appeared of a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (denoted P1), corresponding to direct allylation instead of conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing CO group. The main objective of the present theoretical study is to better understand the role of the catalytic activity of DMAP as well as the process leading to the end-product (P1) in the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds, varying in R1 and R2, toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index, and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: the first part gives some computational details and introduces the reactivity indexes used; the next section is dedicated to the prediction of selectivity and regioselectivity; the paper ends with some concluding remarks. In this work, we have shown, using the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with a cyclic BH alcohol is governed by orbital control. Hence the end-product, denoted P1, is generated by direct allylation. Keywords: DFT calculation, gas-phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state
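A minimal sketch of the conceptual DFT reactivity indices named above, computed from standard finite-difference formulas (the ionization energy and electron affinity values are placeholders, not results from the paper):

```python
# Global reactivity indices from vertical ionization energy (I) and
# electron affinity (A), in eV. Note: eta = I - A is one common
# convention; eta = (I - A) / 2 is also used in the literature.
def reactivity_indices(I, A):
    mu = -(I + A) / 2            # electronic chemical potential
    eta = I - A                  # chemical hardness
    omega = mu ** 2 / (2 * eta)  # global electrophilicity index
    return mu, eta, omega

# Condensed Fukui functions from atomic electron populations p(N) of the
# N-, (N+1)-, and (N-1)-electron systems at the same geometry.
def condensed_fukui(p_N, p_Nplus1, p_Nminus1):
    f_plus = p_Nplus1 - p_N      # susceptibility to nucleophilic attack
    f_minus = p_N - p_Nminus1    # susceptibility to electrophilic attack
    dual = f_plus - f_minus      # dual descriptor, per atom
    return f_plus, f_minus, dual

mu, eta, omega = reactivity_indices(I=9.0, A=1.2)
print(f"mu = {mu:.2f} eV, eta = {eta:.2f} eV, omega = {omega:.2f} eV")
```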
Procedia PDF Downloads 306