Search results for: dynamic modeling
306 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys
Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio
Abstract:
Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace. In the last decades, the ever-growing interest in such materials has motivated several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy with respect to the thermal and mechanical variables. This phrasing of the model is new and allows the model to be discussed from both a theoretical and a numerical point of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem. The iterative scheme is solved by a generalized Newton method.
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling
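As a loose illustration of the kind of scheme the abstract describes (not the authors' SMA model), the following minimal sketch performs semi-implicit (backward-Euler) steps of a scalar gradient flow z' = -V'(z), with each implicit update solved by Newton iteration; the function names and the quadratic test potential are assumptions for demonstration only.

```python
# Minimal sketch: backward-Euler time stepping of a scalar gradient flow,
# each step solved by Newton's method. Unconditional stability and monotone
# decay of V along the trajectory mirror the dissipativity property noted
# in the abstract; this is NOT the thermomechanically coupled SMA model.
def implicit_step(z_prev, dt, dV, d2V, tol=1e-12, max_iter=50):
    """Solve z - z_prev + dt * dV(z) = 0 for z by Newton iteration."""
    z = z_prev
    for _ in range(max_iter):
        r = z - z_prev + dt * dV(z)        # residual of the implicit update
        if abs(r) < tol:
            break
        z -= r / (1.0 + dt * d2V(z))       # Newton correction
    return z

def gradient_flow(z0, dt, n_steps, dV, d2V):
    """Integrate the flow; V decreases monotonically along the trajectory."""
    traj = [z0]
    for _ in range(n_steps):
        traj.append(implicit_step(traj[-1], dt, dV, d2V))
    return traj

# Example with the quadratic potential V(z) = z**2 / 2, so dV(z) = z:
# the scheme reduces to z_{n+1} = z_n / (1 + dt), stable for any dt > 0.
traj = gradient_flow(1.0, 0.1, 10, dV=lambda z: z, d2V=lambda z: 1.0)
```

For the quadratic potential the implicit equation is linear, so Newton converges in a single correction; for a nonconvex SMA-type potential the same loop would simply take more iterations.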
Procedia PDF Downloads 221
305 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we analyze how the public sector should respond to higher emissions and what direct costs such policies might entail. In the model, there are two types of firms: brown firms (which operate a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper end of the distribution of those delivered by studies in the early 2000s.
To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
Keywords: climate change, general equilibrium, optimal taxation, monetary policy
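The static regression step the abstract mentions can be sketched as follows; the data here are synthetic (a hypothetical exponential damage relation with small noise, not the paper's dataset), and the variable names are illustrative assumptions. A log-linear fit log(Y) = a + b·S captures a relation whose damage grows at an exponential rate in the pollution stock.

```python
import math
import random

# Illustrative sketch with SYNTHETIC data: a static OLS regression of
# log-output on the CO2 stock, log(Y) = a + b*S, capturing a convex
# (exponential-rate) damage relation like the one described above.
random.seed(0)
S = [i * 0.05 for i in range(200)]          # hypothetical pollution stock
Y = [100.0 * math.exp(-0.02 * s) * math.exp(random.gauss(0.0, 0.001))
     for s in S]                            # output with tiny noise

logY = [math.log(y) for y in Y]
n = len(S)
mean_S, mean_L = sum(S) / n, sum(logY) / n
b = (sum((s - mean_S) * (l - mean_L) for s, l in zip(S, logY))
     / sum((s - mean_S) ** 2 for s in S))   # OLS slope (damage elasticity)
a = mean_L - b * mean_S                     # OLS intercept
```

With the synthetic generating process above, the recovered slope b should be close to the true value of -0.02, i.e., roughly a 2% output loss per unit of pollution stock.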
Procedia PDF Downloads 160
304 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses
Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen
Abstract:
Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health, and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, “eZoon”, for the general public and professionals working in the veterinary, human health, and public health domains to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health, and public health domains participated. Key stakeholders were identified using business modeling techniques previously applied in the design and implementation of antibiotic stewardship interventions; identification consisted of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers.
Results: The following key values were identified (among others): (a) the need for improved mutual awareness between the veterinary and human health fields; (b) information exchange between veterinary and human health, particularly at a regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately, using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level are trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media channels to inform the general public adequately during outbreaks.
Keywords: ehealth, one health, risk communication, stakeholder, zoonosis
Procedia PDF Downloads 286
303 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually tolerated input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of the metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend the tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
Keywords: automotive production, machine learning, process optimization, smart tolerancing
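A simple descriptive step of this kind can be sketched as follows; the data layout (deviations in mm per measurement point over many parts), the ±3σ rule, and the sensitivity threshold are illustrative assumptions, not the authors' pipeline.

```python
import statistics

# Hedged sketch with a HYPOTHETICAL data layout: derive a local tolerance
# range per measurement point from historical deviations (mean +/- 3 sigma)
# and flag points that scatter more than the part as a whole.
def local_tolerances(history, k=3.0):
    """history: {point_id: [measured deviations in mm over many parts]}"""
    tolerances = {}
    for point, devs in history.items():
        mu = statistics.mean(devs)
        sigma = statistics.stdev(devs)
        tolerances[point] = (mu - k * sigma, mu + k * sigma)
    return tolerances

def sensitive_points(history, factor=2.0):
    """Points whose scatter exceeds `factor` x the median scatter."""
    sigmas = {p: statistics.stdev(d) for p, d in history.items()}
    med = statistics.median(sigmas.values())
    return [p for p, s in sigmas.items() if s > factor * med]

# Example: P2 scatters an order of magnitude more than P1 and P3.
history = {
    "P1": [0.00, 0.10, -0.10, 0.05, -0.05],
    "P2": [0.00, 1.00, -1.00, 0.50, -0.50],
    "P3": [0.00, 0.08, -0.08, 0.04, -0.04],
}
tols = local_tolerances(history)
```

In the paper's setting, a trained model would replace these simple statistics, but the output shape (per-point tolerance ranges plus a list of sensitive regions) is the same kind of result the abstract describes.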
Procedia PDF Downloads 117
302 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs
Authors: Ignitia Motjolopane
Abstract:
Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI into start-up reiterative business model innovation paths. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the start-up-focused SME reiterative business model innovation path with generative AI, articulating how generative AI may be used to support SMEs in systematically and cyclically building the business model, covering most or all business model components, and in analysing and testing the business model's viability throughout the process. As such, the paper explores generative AI usage in market exploration.
Moreover, market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, the funding process, revenue model determination, and stress testing business models. Stress testing a business model validates its strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between components. Stress testing may thus address key uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.
Keywords: business models, innovation, generative AI, small medium enterprises
Procedia PDF Downloads 71
301 Policy Evaluation of Republic Act 9502 “Universally Accessible Cheaper and Quality Medicines Act of 2008”
Authors: Trina Isabel D. Santiago, Juan Raphael M. Perez, Maria Angelica O. Soriano, Teresita B. Suing, Jumee F. Tayaban
Abstract:
To achieve universal healthcare for everyone, the World Health Organization has emphasized the importance of National Medicines Policies for increased accessibility and utilization of high-quality and affordable medications. In the Philippines, significant challenges have been identified surrounding the sustainability of essential medicines, resulting in limited access due to high costs and the market dominance and monopoly of multinational companies (MNCs) in the Philippine pharmaceutical industry. These challenges have been addressed by several initiatives, such as the Philippine National Drug Policy and the Generics Act of 1988 (Republic Act 6675), which attempted to reduce drug prices. Despite these efforts, concerns with drug accessibility and affordability persist; hence, Republic Act 9502 was enacted. This paper attempts to review RA 9502 in the pursuit of making medicines more affordable for Filipinos, to analyze and critique the problems and challenges associated with the law, and to provide recommendations to address the identified problems and challenges. A literature search and review, as well as an analysis of the law, have been conducted to evaluate the policy. RA 9502 recognizes the importance of market competition in drug price reduction and quality medicine accessibility. Contentious issues prior to the enactment of the law include 1) parallel importation, with critics pointing out that the drug price will depend on the global market price; 2) contrasting approaches in the drafting of the law, as the House version focused on medicine price control while the Senate version prioritized market competition; and 3) MNCs opposing the amendments with concerns about discrimination, constitutional violations, and noncompliance with international treaty obligations. There are also criticisms and challenges with the implementation of the law in terms of content or modeling, interpretation and implementation, and other external factors or hindrances.
The law has been criticized for its narrow scope, as it covers only specific essential medicines and involves no cooperation with the national health insurance program. Moreover, the law has sections taking advantage of the TRIPS flexibilities in ways that prevent smaller countries from reaping the benefits of those flexibilities. The sanctions and penalties play an insignificant role in implementation, as they amount to only a small portion of the income of MNCs. Proposed recommendations for policy improvement include aligning existing legislation through strengthened price regulation and expanded law coverage, strengthening penalties to promote adherence to the law, and promoting research and development to encourage and support local initiatives. Through these comprehensive recommendations, the issues surrounding the policy can be addressed, and the goal of enhancing the affordability and accessibility of medicines in the country can be achieved.
Keywords: drug accessibility, drug affordability, price regulation, Republic Act 9502
Procedia PDF Downloads 47
300 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)
Authors: Aliya K. Salahova
Abstract:
Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While the development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and formative experiment exerted a positive impact on enhancing scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. 
Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.
Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study
Procedia PDF Downloads 61
299 Selection and Preparation of High Performance, Natural and Cost-Effective Hydrogel as a Bio-Ink for 3D Bio-Printing and Organ on Chip Applications
Authors: Rawan Ashraf, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab
Abstract:
Background: Three-dimensional (3D) bio-printing has become a versatile and powerful method for generating a variety of biological constructs, including bone or extracellular matrix scaffolds, endo- or epithelial and muscle tissue, as well as organoids. Aim of the study: To fabricate a low-cost DIY 3D bio-printer for producing 3D bio-printed products such as anti-microbial packaging or multi-organ-on-chip devices, and to demonstrate the alignment between two types of 3D printing technology (3D bio-printing and DLP) in the fabrication of multi-organ-on-a-chip (multi-OoC) devices. Methods: First, design and fabrication of the syringe unit for the modification of an off-the-shelf 3D printer; then preparation of a hydrogel based on the natural polymers sodium alginate and gelatin, followed by acquisition of the cell suspension and modeling of the desired 3D structure; preparation for 3D printing, after which cell-free and cell-laden hydrogels went through the printing process at room temperature under sterile conditions; and finally, a post-printing curing process and study of the physical and chemical characteristics of the printed structure. The hard scaffold of the organ-on-chip devices was designed and fabricated using the DLP 3D printer, following approaches similar to microfluidics system fabrication. Results: The fabricated bio-ink was based on a hydrogel polymer mix of sodium alginate and gelatin at 15% and 0.5%, respectively. The 3D printing process was then conducted using a higher percentage of alginate-based hydrogels because of their viscosity and controllable crosslinking, unlike the thermal crosslinking of gelatin. The hydrogels were colored to simulate the representation of two types of cells.
The adaptation of the hard scaffold, whether for the microfluidics system or for hard tissues, was achieved with the DLP 3D printer, incorporating natural bioactive essential oils with antimicrobial activity, followed by printing in situ three complex layers of soft hydrogel as a cell-free bio-ink to simulate a real-life tissue engineering process. The final product was a proof of concept for a rapid 3D cell culturing approach that uses an engineered hard scaffold along with soft tissues; thus, several applications are offered as products of the current prototype, including the organ-on-chip as a successful integration between the DLP and 3D bio-printers. Conclusion: Multiple designs for multi-organ-on-a-chip (multi-OoC) devices have been produced in our study, with the main focus on the low-cost fabrication of such technology and its potential to revolutionize human health research and development. We describe circumstances in which multi-organ models are useful after briefly examining the requirement for full multi-organ models with a systemic component. Following that, we review current multi-OoC platforms, such as integrated body-on-a-chip devices and modular techniques that use linked organ-specific modules.
Keywords: 3d bio-printer, hydrogel, multi-organ on chip, bio-inks
Procedia PDF Downloads 175
298 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD
Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer
Abstract:
Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical, and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as diffusion barriers and metal gate material. However, as the film thickness decreases below a few nanometers, the electrical properties of the film alter considerably. In this study, the physical and electrical characteristics of 1.5 nm to 22 nm thin films, deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD) using Tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and Ar/N2 plasma on 80 nm SiO2 capped in situ by 2 nm Al2O3, are investigated. The ALD technique allows uniformly thick films at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by Transmission Electron Microscopy (TEM), which confirms the uniformity of the films. The surface morphology of the films is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type, and concentration, as well as resistivity. The >5 nm-thick films exhibit metallic behavior; however, we have observed that the thin-film resistivity is modulated significantly by film thickness, such that the sheet resistance at room temperature increases by more than five orders of magnitude when comparing the 5 nm and 1.5 nm films.
Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to the quantum confinement effect that can occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5E22 cm⁻³ to 5.5E17 cm⁻³, while the mobility increases from < 0.1 cm²/V·s to ~4 cm²/V·s for the 5 nm and 1.5 nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5 nm film, while for the 1.5 nm film a reduction of more than 2 orders of magnitude has been observed over the range of 220 K to 400 K. The activation energy of the 2.5 nm and 1.5 nm films is 30 meV and 125 meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer Ohmic for the thinnest film (i.e., the 1.5 nm-thick film); hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular-dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations, indicating highly n-type thin films.
Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film
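An activation energy of the kind quoted above is conventionally extracted from an Arrhenius fit, sigma(T) = sigma0 · exp(-Ea / (kB·T)), i.e., a linear fit of ln(sigma) against 1/T. The sketch below illustrates this with synthetic numbers generated for Ea = 0.125 eV (the 1.5 nm film's reported value); it is not the measured dataset.

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

# Hedged sketch with SYNTHETIC data: extract a thermal activation energy
# Ea from conductivity-vs-temperature points via the Arrhenius relation
# sigma(T) = sigma0 * exp(-Ea / (kB * T)), a linear fit of ln(sigma) vs 1/T.
def activation_energy(temps_K, sigmas):
    x = [1.0 / t for t in temps_K]
    y = [math.log(s) for s in sigmas]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope * KB_EV   # Ea in eV

# Synthetic conductivities generated with Ea = 0.125 eV over 220-400 K,
# matching the temperature range quoted in the abstract.
temps = [220.0, 260.0, 300.0, 340.0, 400.0]
sig = [1e-3 * math.exp(-0.125 / (KB_EV * t)) for t in temps]
Ea = activation_energy(temps, sig)
```

Because the synthetic data are exactly Arrhenius, the fit recovers Ea = 0.125 eV; on real data, the scatter of the ln(sigma)-vs-1/T points indicates how well a single activation energy describes the film.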
Procedia PDF Downloads 294
297 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now accounts for both greater effort and greater retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and as having low predictive power, since it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. The study involved continuous monitoring at high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, and the proportion of the fishery management area represented by the cameras. The study then demonstrated how this approach can substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detections, currently unaccounted for in the existing monitoring framework.
These results demonstrate several distinct applications of autonomous cameras for providing detail unavailable in the current monitoring framework, each of which has important implications for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
Keywords: cameras, monitoring, recreational fishing, stock assessment
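The dataset-thinning and diel-activity steps described above can be sketched as follows; the detection-log format, function names, and thresholds are illustrative assumptions, not the study's actual pipeline.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hedged sketch with a HYPOTHETICAL detection log: aggregate per-image
# vessel detections into hour-of-day bins (a diel activity pattern) and
# thin densely sampled imagery to a minimum time gap between frames.
def diel_activity(detections):
    """detections: list of (timestamp, n_vessels) per analyzed image."""
    counts = Counter()
    for ts, n in detections:
        counts[ts.hour] += n
    return counts

def thin(timestamps, min_gap_s=60):
    """Keep only frames at least `min_gap_s` apart, to thin dense imagery."""
    kept, last = [], None
    for ts in sorted(timestamps):
        if last is None or (ts - last).total_seconds() >= min_gap_s:
            kept.append(ts)
            last = ts
    return kept

# Example: 11 frames captured 30 s apart, thinned to one frame per minute.
base = datetime(2022, 7, 1, 6, 0, 0)
frames = [base + timedelta(seconds=30 * i) for i in range(11)]
kept = thin(frames, min_gap_s=60)
```

In practice, a detection algorithm supplies the per-image vessel counts, and thinning keeps the one-million-image season tractable while preserving the hour-scale activity structure.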
Procedia PDF Downloads 122
296 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos, and many important structures are forgotten after several years. More interactive teaching methods such as virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since they ease the way we learn and keep the material in mind for longer. During our study, we designed a virtual reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique components, which makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and marking it in the 3D world coordinate system. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, so we worked on dividing such parts into subparts, which is very time-consuming. In order to subdivide monolithic parts, one must use an external modeling tool. 
However, such tools generally come with steep learning curves, and seamless division is not guaranteed. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outer colliders that cover inner trigger colliders cause overlaps, and these colliders repel each other. The third option was raycasting. However, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. For these reasons, we finally settled on the local coordinate system. By taking the pivot point of the model into consideration (the back of the nose), each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to a world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
Keywords: anatomy, e-learning, virtual reality, 3D model marking
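The local-coordinate approach described above can be sketched outside Unity as follows. This is a minimal illustration, not the project's actual C# code: the structure names, pivot location, and tolerance are hypothetical, and the model is assumed to carry only a translation (a full implementation would also invert the model's rotation and scale, e.g. via Unity's Transform.InverseTransformPoint).

```python
import math

# Hypothetical pivot (back of the nose) in world space, and per-structure
# local coordinates expressed relative to that pivot.
PIVOT_WORLD = (0.0, 1.6, 0.3)
STRUCTURES = {
    "zygomatic_bone": (0.05, 0.02, 0.08),
    "mandible": (0.0, -0.09, 0.06),
    "frontal_bone": (0.0, 0.08, 0.05),
}

def world_to_local(p_world):
    """Translate a world-space point into the model's pivot-relative frame."""
    return tuple(p - o for p, o in zip(p_world, PIVOT_WORLD))

def mark_structure(p_world, tolerance=0.03):
    """Return the structure whose stored local coordinate is nearest to the
    clicked world-space point, or None if nothing lies within tolerance."""
    p_local = world_to_local(p_world)
    best, best_d = None, tolerance
    for name, coord in STRUCTURES.items():
        d = math.dist(p_local, coord)
        if d < best_d:
            best, best_d = name, d
    return best

hit = mark_structure((0.051, 1.625, 0.382))  # a click near the zygomatic bone
```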
Procedia PDF Downloads 100
295 State, Public Policies, and Rights: Public Expenditure and Social and Welfare Policies in America, as Opposed to Argentina
Authors: Mauro Cristeche
Abstract:
This paper approaches the intervention of the American state in the social arena and the modeling of the rights system from the Argentinian experience, by observing the characteristics of its federal budgetary system, the evolution of social public spending and welfare programs in recent years, labor and poverty statistics, and changes in the labor market structure. The analysis combines different methodologies and sources: in-depth interviews with specialists, analysis of theoretical and mass-media material, and statistical sources. Among the results, the tendency toward state interventionism (what has been called ‘nationalization of social life’) is quite evident in the United States and manifests itself in multiple forms. The bibliography consulted and the experts interviewed point to this increase of state presence in historical terms (beyond short-term setbacks): growth in public spending, fiscal pressure, public employment, protective and control mechanisms, the extension of welfare policies to poor sectors, etc. In fact, despite the significant differences between the two countries, the United States and Argentina show common patterns of behavior with regard to the aforementioned phenomena. On the other hand, the dissimilarities are also important. Some of them are determined by each country's own political history. The influence of political parties on the economic model seems more decisive in the United States than in Argentina, where the tendency toward state interventionism is more stable. The centrality of health spending is evident in America, while in Argentina the discussion is more concentrated on the social security system and public education. The biggest problem of the labor market in the United States is deskilling as a consequence of technological development, while in Argentina it is a result of the labor market's weakness. Another big difference is the huge American public spending on defense. 
The more federal character of the American state is also a factor of differential analysis against a centralized Argentine state. American public employment (around 10%) is comparatively much lower than the Argentinian (around 18%). The social statistics show differences, but inequality and poverty have been growing as a trend in both countries in recent decades. According to public rates, poverty stands at 14% in the United States and 33% in Argentina. American public spending is substantial (welfare spending and total public spending represent around 12% and 34% of GDP, respectively), but a bit lower than the Latin-American or European averages. In both cases, the tendency toward underemployment and deskilling-related unemployment has not yet assumed serious proportions. Probably one of the most important aspects of the analysis is that private initiative and public intervention are much more intertwined in the United States, which makes state intervention more ‘fuzzy’, while in Argentina the distinction is clearer. Finally, the power of capital accumulation and, more specifically, of the industrial and services sectors in the United States, which continue to be the engine of the economy, marks a great difference with Argentina, supported by its agro-industrial power and its public sector.
Keywords: state intervention, welfare policies, labor market, system of rights, United States of America
Procedia PDF Downloads 131
294 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often exhibit non-linearity and non-stationarity, fluctuate strongly at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively recent technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals: it decomposes a signal in a self-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, in which the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which builds a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the ideas used in the construction of both Littlewood-Paley and Meyer wavelets. The heart of the method lies in segmenting the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is tied to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. 
On the other hand, while the EWT technique is able to detect the frequencies involved in the original time series fluctuations, it does not make it possible to associate the detected frequencies with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques: the spectral content of the IMFs is used to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT, and EAWD techniques on time series of ozone total columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
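The boundary-detection step at the heart of EWT can be sketched as follows. This is a simplified, illustrative version only: it assumes a clean signal and places segment boundaries at midpoints between the strongest spectral peaks (one common simplification); Gilles' full method and the EAWD coupling to IMF spectra are more elaborate.

```python
import numpy as np

def ewt_boundaries(signal, n_modes, fs=1.0):
    """Segment the Fourier spectrum at midpoints between the largest local maxima.

    A simplified version of the EWT boundary-detection step: keep the
    n_modes strongest spectral peaks and place segment boundaries halfway
    between consecutive peaks (in Hz).
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # local maxima of the magnitude spectrum (interior points only)
    interior = np.arange(1, len(spectrum) - 1)
    peaks = interior[(spectrum[interior] > spectrum[interior - 1])
                     & (spectrum[interior] > spectrum[interior + 1])]
    # keep the n_modes strongest peaks, reordered by frequency
    top = np.sort(peaks[np.argsort(spectrum[peaks])[-n_modes:]])
    # boundaries halfway between consecutive retained peaks
    return [(freqs[a] + freqs[b]) / 2 for a, b in zip(top[:-1], top[1:])]

# Two well-separated tones at 5 Hz and 40 Hz, sampled at 200 Hz for 4 s
fs = 200.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
bounds = ewt_boundaries(x, n_modes=2, fs=fs)  # one boundary between the tones
```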
Procedia PDF Downloads 137
293 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System
Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold
Abstract:
In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both the size and power consumption of electric components and the increasing complexity of mechanical systems, interest in creating dense sensor-node networks has become very salient. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a valid alternative, considering lifetime, size, and the effort of replacing them. Among possible durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures like buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most widely used devices to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system, so that the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of this energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements. The energy of the vibration is converted into electricity rather than dissipated. 
The proposed device is designed to mitigate torsional vibrations as a conventional rotational TVA does, while harvesting energy as a power source for immediate use or storage. The resulting rotational multi-degree-of-freedom (MDOF) system is initially reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system defined. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. Then, the design is modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented: a DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage. With a large capacitor introduced, the stored energy is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber
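Den Hartog's classical tuning rules for a damped absorber attached to an undamped SDOF primary structure under harmonic forcing can be written down directly. The sketch below computes the optimal frequency ratio and absorber damping ratio for an illustrative mass ratio; the paper's actual parameter values are not given in the abstract.

```python
import math

def den_hartog_optimal(mass_ratio):
    """Classical Den Hartog tuning for a damped vibration absorber on an
    undamped SDOF primary structure under harmonic forcing.

    mass_ratio: absorber mass divided by the primary (modal) mass.
    Returns (optimal absorber/primary frequency ratio, optimal absorber
    damping ratio).
    """
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                          # tuning (frequency) ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # damping ratio
    return f_opt, zeta_opt

# An illustrative 5% mass ratio
f_opt, zeta_opt = den_hartog_optimal(0.05)
```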
Procedia PDF Downloads 147
292 Tip60 Histone Acetyltransferase Activators as Neuroepigenetic Therapeutic Modulators for Alzheimer’s Disease
Authors: Akanksha Bhatnagar, Sandhya Kortegare, Felice Elefant
Abstract:
Context: Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline and memory loss. The cause of AD is not fully understood, but it is thought to involve a combination of genetic, environmental, and lifestyle factors. One of the hallmarks of AD is the loss of neurons in the hippocampus, a brain region that is important for memory and learning. This loss of neurons is thought to be driven by a decrease in histone acetylation, a process that regulates gene expression. Research Aim: The aim of the study was to develop small-molecule compounds that enhance the activity of Tip60, a histone acetyltransferase that is important for memory and learning. Methodology/Analysis: The researchers used in silico structural modeling and a pharmacophore-based virtual screening approach to design and synthesize small-molecule compounds strongly predicted to target and enhance Tip60’s HAT activity. The compounds were then tested in vitro and in vivo to assess their ability to enhance Tip60 activity and rescue cognitive deficits in AD models. Findings: The researchers found that several of the compounds were able to enhance Tip60 activity and rescue cognitive deficits in AD models. The compounds were also designed to cross the blood-brain barrier, an important factor for the development of potential AD therapeutics. Theoretical Importance: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than broader epigenetic drugs such as HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, which is a major hurdle for the development of AD therapeutics. Data Collection: The study collected data from a variety of sources, including in vitro assays and animal models. 
The in vitro assays assessed the ability of the compounds to enhance Tip60 activity using histone acetyltransferase (HAT) enzyme assays and chromatin immunoprecipitation assays. Animal models were used to assess the ability of the compounds to rescue cognitive deficits in AD models using a variety of behavioral tests, including locomotor ability, sensory learning, and recognition tasks. Planned human clinical trials will assess the safety and efficacy of the compounds. Questions: The question addressed by this study was whether Tip60 HAT activators could be developed as therapeutic agents for AD. Conclusions: The findings suggest that Tip60 HAT activators, which are specific to Tip60 and able to cross the blood-brain barrier, have the potential to be developed as therapeutic agents for AD. Further research is needed to confirm the safety and efficacy of these compounds in humans.
Keywords: Alzheimer's disease, cognition, neuroepigenetics, drug discovery
Procedia PDF Downloads 75
291 Offshore Facilities Load Out: Case Study of Jacket Superstructure Loadout by Strand Jacking Skidding Method
Authors: A. Rahim Baharudin, Nor Arinee binti Mat Saaud, Muhammad Afiq Azman, Farah Adiba A. Sani
Abstract:
Objectives: This paper shares a case study on the engineering analysis, data analysis, and real-time data comparison used to qualify the strand wires' minimum breaking load and safe working load for a loadout operation on a new project and, at the same time, to eliminate the risk posed by discrepancies and misalignment between COMPANY Technical Standards and Industry Standards and Practices. The paper demonstrates “Lean Construction” for COMPANY’s project by sustaining fit-for-purpose technical requirements for the loadout strand wire Factor of Safety (F.S). The case study utilizes historical engineering data from several loadout operations by skidding methods from different projects. It also demonstrates and qualifies the skidding wires' minimum breaking load and safe working load for future loadout operations of substructures and other facilities. Methods: Engineering analysis and comparison of data were carried out with reference to international standards and internal COMPANY standard requirements. Data were taken from nine (9) previous projects for both topside and jacket facilities executed at several local fabrication yards, where loadout was conducted by three (3) different service providers, with emphasis on four (4) basic elements: i) Industry Standards for Loadout Engineering and Operation Reference: the COMPANY internal standard referred to the superseded documents DNV-OS-H201 and DNV/GL 0013/ND. DNV/GL 0013/ND and DNVGL-ST-N001 do not mention any requirement of a strand wire F.S of 4.0 for skidding/pulling operations. ii) Reference to past Loadout Engineering and Execution Packages: reference was made to projects delivered by three (3) major offshore facilities operators. The observed strand wire F.S ranges from 2.0 MBL (min) to 2.5 MBL (max). No loadout operation using the 4.0 MBL requirement was sighted in these references. 
iii) Strand Jack Equipment Manufacturer Datasheet Reference: referring to strand jack equipment datasheets from different loadout service providers, the designed F.S for the equipment also ranges between 2.0 and 2.5. Eight (8) strand jack datasheet models were referred to, ranging from 15 Mt to 850 Mt capacity; however, no designed F.S of 4.0 was sighted. iv) Site Monitoring of Actual Loadout Data and Parameters: the maximum load on a strand wire was captured during the 2nd breakout, i.e., in the static condition, at 12.9 Mt per strand wire (67.9% utilization). The maximum load on a strand wire in dynamic conditions, during Step 8 and Step 12, was 9.4 Mt per strand wire (49.5% utilization). Conclusion: This analysis and study demonstrated that the strand wires supplied by the service provider were technically sufficient in terms of strength, and via the engineering analysis conducted, the minimum breaking load and safe working load utilized and calculated for the projects were satisfactory and operated safely. It is recommended from this study that COMPANY’s technical requirements be revised for utilization in future projects.
Keywords: construction, load out, minimum breaking load, safe working load, strand jacking, skidding
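The relationship between minimum breaking load (MBL), factor of safety, safe working load (SWL), and utilization used throughout the study can be sketched as follows. The MBL value below is an assumption chosen for illustration: it is inferred so that the numbers reproduce the reported 12.9 Mt at 67.9% utilization, i.e., an SWL of about 19 Mt at F.S = 2.0.

```python
def strand_utilization(load_per_strand, mbl, factor_of_safety):
    """Safe working load and utilization for a single strand wire.

    SWL = MBL / F.S; utilization = applied load / SWL.
    Units are whatever MBL and the load share (here metric tonnes).
    """
    swl = mbl / factor_of_safety
    return swl, load_per_strand / swl

# Assumed MBL of 38 Mt per strand (hypothetical, back-calculated from the
# reported 12.9 Mt at 67.9% utilization) with the industry-typical F.S of 2.0
swl, util = strand_utilization(12.9, mbl=38.0, factor_of_safety=2.0)
```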
Procedia PDF Downloads 112
290 Perception of Corporate Social Responsibility and Enhancing Compassion at Work through Sense of Meaningfulness
Authors: Nikeshala Weerasekara, Roshan Ajward
Abstract:
In the contemporary business environment, given the stringent scrutiny of corporate behavior, organizations are under pressure to develop and implement solid overarching Corporate Social Responsibility (CSR) strategies. In that milieu, in order to differentiate themselves from competitors and maintain stakeholder confidence, banks spend millions of dollars on CSR programmes. However, knowledge of how non-western bank employees perceive such activities is inconclusive. At the same time, only recently have researchers shifted their focus to the positive effects of compassion at work and the organizational conditions under which it arises. Nevertheless, mediation mechanisms between CSR and compassion at work have not been adequately examined, leaving a vacuum to be explored. Although finding a purpose in work that is greater than its extrinsic outcomes is important to employees, meaningful work has not been examined adequately either. Thus, in addition to examining the direct relationship between CSR and compassion at work, this study examined the mediating capability of meaningful work between these variables. Specifically, the researcher explored how CSR enables employees to sense work as meaningful, which in turn would enhance their level of compassion at work. Hypotheses were developed to examine the direct relationship between CSR and compassion at work and the mediating effect of meaningful work on this relationship. Both Social Identity Theory (SIT) and Social Exchange Theory (SET) were used to theoretically support the relationships. The sample comprised 450 respondents covering different levels of the bank. A convenience sampling strategy was used to secure responses from 13 local licensed commercial banks in Sri Lanka. Data were collected using a structured questionnaire which was developed based on a comprehensive review of the literature and refined using both expert opinions and a pilot survey. 
Structural equation modeling using SmartPLS (partial least squares) was utilized for data analysis. Findings indicate a positive and significant (p < .05) relationship between CSR and compassion at work. It was also found that meaningful work partially mediates the relationship between CSR and compassion at work. It is therefore concluded that bank employees’ perception of CSR engagement not only directly influences compassion at work but also affects it indirectly through meaningful work. This implies that employees value working for a socially responsible bank because it creates greater meaningfulness of work, which encourages them to stay with the organization and in turn triggers a higher level of compassion at work. The use of both SIT and SET to explain the relationships between CSR and compassion at work constitutes the theoretical significance of the study: it enhances the existing literature on CSR and compassion at work and adds insights into the mediating capability of psychological variables such as meaningful work. The study is also expected to have significant policy implications for increasing compassion at work: managers must understand the importance of including CSR activities in their strategy in order to thrive. Finally, it provides evidence of the suitability of SmartPLS for testing models with mediating relationships involving non-normal data.
Keywords: compassion at work, corporate social responsibility, employee commitment, meaningful work, positive affect
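The mediation structure tested above (a direct CSR path plus an indirect path through meaningful work) can be illustrated with a generic ordinary-least-squares sketch on synthetic data. This is not the authors' SmartPLS analysis; the variable names, sample size, and path coefficients are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic data following the hypothesized paths:
# CSR -> meaningful work (path a), meaningful work -> compassion (path b),
# plus a direct CSR -> compassion path (c'). Coefficients are illustrative.
csr = rng.normal(size=n)
meaningful = 0.5 * csr + rng.normal(size=n)
compassion = 0.3 * csr + 0.4 * meaningful + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(csr, meaningful)
# b: effect of meaningful work on compassion, controlling for CSR
X = np.column_stack([np.ones(n), csr, meaningful])
coef = np.linalg.lstsq(X, compassion, rcond=None)[0]
indirect = a * coef[2]   # a * b, the mediated (indirect) effect
direct = coef[1]         # c', the remaining direct effect
```

A nonzero indirect effect alongside a nonzero direct effect is what "partial mediation" refers to; in practice the indirect effect's significance would be assessed by bootstrapping.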
Procedia PDF Downloads 127
289 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method
Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola
Abstract:
The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safe and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi’s equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters used in a first-order lumped thermal model: the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the heat generation predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and possible reasons for the mismatch are reported. 
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC-dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C for Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation also predicts no heat generation after the charging or discharging current is cut, whereas the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization
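As a reference, a common form of the simplified Bernardi equation splits the heat generation into an irreversible overpotential term and a reversible entropic term. The sketch below uses one common sign convention (current positive on charge); all numerical values are illustrative, not parameters measured in the study.

```python
def bernardi_heat(current, terminal_v, ocv, temp_k, docv_dt):
    """Simplified Bernardi heat generation [W].

    Q = I*(V - U_ocv) + I*T*(dU_ocv/dT), with current I positive on charge
    (one common sign convention). The first term is the irreversible
    overpotential heat; the second is the reversible entropic heat.
    """
    irreversible = current * (terminal_v - ocv)
    reversible = current * temp_k * docv_dt
    return irreversible + reversible

# Illustrative values: 10 A charge, 100 mV overpotential, 25 degC,
# an assumed entropic coefficient of -0.1 mV/K
q = bernardi_heat(current=10.0, terminal_v=3.75, ocv=3.65,
                  temp_k=298.15, docv_dt=-0.1e-3)
```

Note that once the current is cut (I = 0), this expression is identically zero, which is exactly the limitation the HFS measurements expose.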
Procedia PDF Downloads 389
288 The Practical Application of Sensory Awareness in Developing Healthy Communication, Emotional Regulation, and Emotional Introspection
Authors: Node Smith
Abstract:
Developmental psychology has long focused on modeling consciousness, often neglecting practical application and clinical utility. This paper aims to bridge this gap by exploring the practical application of physical and sensory tracking and awareness in fostering essential skills for conscious development. Higher conscious development requires practical skills such as self-agency, the ability to hold multiple perspectives, and genuine altruism. These are not personality characteristics but areas of skillfulness that address many cultural deficiencies impacting our world, and they are intertwined with individual as well as collective conscious development. Physical and sensory tracking and awareness are crucial for developing these skills and offer the added benefit of cultivating healthy communication, emotional regulation, and introspection. Unlike skills such as throwing a baseball, which can be developed through practice or innate ability, the abilities to introspect, track physical sensations, and observe oneself objectively are essential for advancing consciousness. Lacking these skills leads to cultural and individual anxiety, helplessness, and a lack of agency, manifesting as blame-shifting and irresponsibility. The inability to hold multiple perspectives stifles altruism, as genuine consideration for a global community requires accepting other perspectives without conditions. Physical and sensory tracking enhances self-awareness by grounding individuals in their bodily experiences. This grounding is critical for emotional regulation, allowing individuals to identify and process emotions in real time, preventing overwhelm and fostering balance. Techniques like mindfulness meditation and body scan exercises attune individuals to their physical sensations, providing insights into their emotional states. Sensory awareness also facilitates healthy communication by fostering empathy and active listening. 
When individuals are in tune with their physical sensations, they become more present in interactions, picking up on subtle cues and responding thoughtfully. This presence reduces misunderstandings and conflicts, promoting more effective communication. The ability to introspect and observe oneself objectively is key to emotional introspection. This skill allows individuals to reflect on their thoughts, feelings, and behaviors, identify patterns, recognize areas for growth, and make conscious choices aligned with their values and goals. In conclusion, physical and sensory tracking and awareness are vital for developing the skills necessary for higher consciousness development. By fostering self-agency, emotional regulation, and the ability to hold multiple perspectives, these practices contribute to healthier communication, deeper emotional introspection, and a more altruistic and connected global community. Integrating these practices into developmental psychology and therapeutic interventions holds significant promise for both individual and societal transformation.
Keywords: conscious development, emotional introspection, emotional regulation, self-agency, stages of development
Procedia PDF Downloads 47
287 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics
Authors: Animesh Pan, Geoffrey D. Bothun
Abstract:
With the development of nanotechnology, research in multifunctional delivery systems has gained a new pace and dimension. An incipient challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor-targeting therapy; radio-frequency (RF)-, near-infrared (NIR)-, light-, or pH-induced controlled release; photothermal therapy (PTT); photodynamic therapy (PDT); and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) are known to show great potential as the 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although successful attempts at combining any two of the above-mentioned functionalities have been made, integrating all of them in one system has remained a challenge. With this in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology, and systematic, planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform loaded with a drug and iron-oxide magnetic nanoparticles (MNPs), with a gold shell on the surface of the liposomes, synthesized using a lipid-with-polyelectrolyte (layersome) templating technique. MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse-phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000). 
The core-shell nanostructures were characterized by several techniques: UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges together with preliminary studies assessing therapeutic objectives. We demonstrate an efficient loading and delivery system that produces significant cell death in human cancer cells (A549). Coupling RF and NIR excitation to the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. The present core-shell multifunctional system, with its multimodal imaging and therapeutic capabilities, is an eminent candidate for cancer theranostics. Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting
Procedia PDF Downloads 128
286 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides high accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. An alternative is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model thickening, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. 
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, together with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. In general, the greater the depth at a given location, the smaller the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models. Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
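The interpolation and grid-thickening step described in this abstract can be sketched with a simple bilinear scheme; the function name and depth values below are purely illustrative and are not drawn from the GEBCO or CMEMS products used in the study.

```python
import math

def bilinear(grid, x, y):
    """Bilinear interpolation of a regular 2-D depth grid at the
    fractional index position (x, y): x is the row, y the column."""
    i, j = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - i, y - j
    return ((1 - dx) * (1 - dy) * grid[i][j]
            + dx * (1 - dy) * grid[i + 1][j]
            + (1 - dx) * dy * grid[i][j + 1]
            + dx * dy * grid[i + 1][j + 1])

# Illustrative 3x3 depth grid (metres, negative down); thickening the
# grid means sampling interpolated depths between the original nodes.
depths = [[-4000.0, -4100.0, -4300.0],
          [-3900.0, -4050.0, -4200.0],
          [-3800.0, -3950.0, -4100.0]]
midpoint = bilinear(depths, 0.5, 0.5)  # mean of the four upper-left nodes
```

In practice the interpolation would run over the full NetCDF grid, but the per-node arithmetic is the same.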
Procedia PDF Downloads 81
285 Traditional Rainwater Harvesting Systems: A Sustainable Solution for Non-Urban Populations in the Mediterranean
Authors: S. Fares, K. Mellakh, A. Hmouri
Abstract:
The StorMer project aims to set up a network of researchers to study traditional hydraulic rainwater harvesting systems in the Mediterranean basin, a region suffering from major impacts of climate change and limited natural water resources. The arid and semi-arid Mediterranean basin has a long history of pioneering water management practices. The region developed various ancient traditional water management systems, such as cisterns and qanats, to sustainably manage water resources under historical conditions of scarcity. The StorMer project therefore brings together Spain, France, Italy, Greece, Jordan, and Morocco to explore traditional rainwater harvesting practices and systems in the Mediterranean region and to develop accurate modeling to simulate the performance and sustainability of these technologies under present-day climatic conditions. The ultimate goal of this project is to resuscitate and valorize these practices in the context of contemporary challenges. The project is intended to establish a Mediterranean network serving as the basis for a more ambitious effort: to analyze traditional hydraulic systems and create a prototype hydraulic ecosystem using a coupled environmental approach together with traditional and ancient know-how, reinterpreting them in the light of current techniques. The combination of 'traditional' and 'modern' knowledge and techniques is expected to lead to proposals for innovative hydraulic systems. The pandemic initially slowed our progress, but we were eventually able to carry out the fieldwork in Morocco and Saudi Arabia and so restart the project. With the participation of colleagues from distant fields (archaeology, sociology), we are now prepared to share our observations and propose the next steps. This interdisciplinary approach should give us a global vision of the project's objectives and challenges. 
A diachronic approach is needed to tackle the question of the long-term adaptation of societies in a Mediterranean context that has experienced several periods of water stress. The next stage of the StorMer project is the implementation of pilots in non-urbanized regions. These pilots will test the implementation of traditional systems and will be maintained and evaluated in terms of effectiveness, cost, and acceptance. Based on these experiences, larger projects will be proposed that could inform regional water management policies. One of the most important lessons learned from this project is the highly social nature of managing traditional rainwater harvesting systems. Unlike modern, centralized water infrastructures, these systems often require the involvement of communities, which assume ownership and responsibility for them. This kind of community engagement leads to better maintenance and, therefore, greater sustainability of the systems. Knowledge of the socio-cultural characteristics of these communities means that the systems can be adapted to the needs of each location, ensuring greater acceptance and efficiency. Keywords: oasis, rainfall harvesting, arid regions, Mediterranean
Procedia PDF Downloads 41
284 Community Music in Puerto Rico
Authors: Francisco Luis Reyes
Abstract:
This multiple-case study explores the intricacies of three Puerto Rican Community Music (CM) initiatives. The research concentrates on the teaching and learning dynamics of three of the nation's traditional musical genres, Plena, Bomba, and Música Jíbara, which have survived for centuries through oral transmission and enculturation in community settings. Accordingly, this research focuses on how music education is carried out in Puerto Rican CM initiatives that foster and preserve the country's traditional music. This study examines the CM initiatives of La Junta in Santurce (Plena), Taller Tambuyé in Rio Piedras (Bomba), and Decimanía (Música Jíbara), an initiative that stems from the municipality of Hatillo. In terms of procedure, 45-60-minute semi-structured interviews were conducted with organizers and administrators of the CM initiatives to gain insight into the educational philosophy of each project. Following this, a second series of 45-60-minute semi-structured interviews was undertaken with CM educators to collect data on their musical development, teaching practices, and relationship with learners. Subsequently, four weeks were spent observing and participating in each of the three CM initiatives. In addition to participant observations in these projects, five CM learners from each locale were recruited for two one-on-one semi-structured interviews, at the beginning and end of the data collection period. The initial interview centered on the participants' rationale for joining the CM initiative, whereas the exit interview focused on their experience within it. Alumni from each of the CM initiatives partook in 45-60-minute semi-structured interviews to investigate their understanding of what it means to be a member of each musical community. Finally, observations and documentation of additional activities hosted or promoted by each initiative, such as festivals, concerts, social gatherings, and workshops, were undertaken. 
These three initiatives were chosen because of their robust and dynamic practices in fostering the musical expressions of Puerto Rico. Data collection consisted of participant observation, narrative inquiry, historical research, philosophical inquiry, and semi-structured interviews. Data analysis relied on theoretical propositions, which entails comparing the results, from each case and as a collective, to the arguments that formed the basis of the research (e.g., literature review, research questions, hypothesis). Comparisons to the theoretical propositions were made through pattern matching, which requires comparing patterns predicted from the literature review to findings from each case. This process led to the development of an analytic outlook on each CM case and a cross-case synthesis. The purpose of employing this data analysis methodology is to present robust findings about CM practices in Puerto Rico and to elucidate similarities and differences between the cases that comprise this research and the relevant literature. Furthermore, through the use of Sound Links' Nine Domains of Community Music, comparisons to other community projects are made in order to point out parallels and highlight particularities in Puerto Rico. Keywords: community music, Puerto Rico, music learning, traditional music
Procedia PDF Downloads 29
283 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography
Authors: Devansh Desai, Rahul Nigam
Abstract:
Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to the roots falls below 75% of its maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with 'M' and 'W'. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally less than that of ET, since ET0 depends only on atmospheric conditions while ET depends on both surface and atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (= ET/ET0) have been estimated over two contrasting western India agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from a short-range NWP weather forecast. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R2 > 0.8 across 18 datasets, compared to reported errors (25-30%) for coarser-scale ET at 1 to 8 km spatial resolution when evaluated against in situ measurements from eddy covariance systems. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms. 
The study showed the potential of high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI, and hence consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG spaceborne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise, fundamental surface energy balance variables such as LST (land surface temperature), surface emissivity, albedo, and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments, and downstream applications to maximize the potential benefits. Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration
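The MAI computation and the two stress regimes reported above can be sketched as follows; the function names and daily ET values are illustrative, not measurements from the study, and the class thresholds are simply the magnitudes quoted in the abstract.

```python
def moisture_adequacy_index(et, et0):
    """MAI = ET / ET0: actual crop evapotranspiration over reference
    evapotranspiration (both in mm/day)."""
    return et / et0

def moisture_class(mai):
    """Classify with the magnitudes reported for the two farms:
    MAI < 0.25 (moisture-limited arid grassland) and
    MAI > 0.5 (semi-arid rice-wheat farm)."""
    if mai < 0.25:
        return "moisture-limited"
    if mai > 0.5:
        return "adequate"
    return "intermediate"

# Illustrative daily values (mm/day), not data from the study.
grassland = moisture_class(moisture_adequacy_index(1.0, 5.0))  # MAI 0.20
ricewheat = moisture_class(moisture_adequacy_index(3.5, 5.0))  # MAI 0.70
```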
Procedia PDF Downloads 70
282 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. The policies, known as "Abenomics," are intended to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the joint probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by dynamic programming. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data on six stock indexes from the Taiwan Economic Journal. 
The data is sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, the Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of "Abenomics" on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan; the strengths of China and Taiwan are greater than that of Singapore, and the Hong Kong market has the weakest co-movement relation with Japan. Keywords: co-movement, depreciation of Yen, rank correlation, stock market
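The final step of the pipeline, computing a rank correlation between the aligned return sequences, can be sketched as below. This is a naive O(n²) Kendall's tau-a on illustrative return pairs; the GARCH fitting, SJC copula, and alignment stages are not reproduced here, and the sample returns are invented for demonstration.

```python
def kendall_tau(x, y):
    """Naive O(n^2) Kendall's tau-a over paired observations:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Illustrative aligned daily returns for two markets (not study data);
# these two series are perfectly concordant, so tau = 1.0.
japan = [0.01, -0.02, 0.005, 0.03, -0.01]
korea = [0.008, -0.015, 0.002, 0.025, -0.012]
tau = kendall_tau(japan, korea)
```

A production analysis would use a tie-corrected tau (tau-b) from a statistics library rather than this sketch.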
Procedia PDF Downloads 231
281 Inclusion Body Refolding at High Concentration for Large-Scale Applications
Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening
Abstract:
High-level expression of proteins in bacteria often causes production of insoluble protein aggregates, called inclusion bodies (IBs). They contain mainly one type of protein and offer an easy and efficient route to purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein. For large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor, and the produced FTF was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs: particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, as confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration. In batch dialysis, the yields dropped from 67% to 12%, and in continuous dialysis from 72% to 19%, as initial concentrations rose from 0.2 to 8.5 g/L. Often-used additives such as sucrose and glycerol had no effect on refolding yields. 
Buffer screening indicated a significant increase in activity, and also in the temperature stability of FTF, with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in batch and 90-74% in the continuous process. Further experiments showed that, in general, higher ionic strength of the buffer had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and hence the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5±0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed re-aggregation of the molecules and a loss in specific activity depending on the enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase of the ionic strength can improve the refolding yield of the target protein by up to 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins. Keywords: dialysis, inclusion body, refolding, solubilization
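The yield determination described above (soluble protein in the clear supernatant before versus after dialysis) reduces to a simple ratio; the concentrations below are illustrative numbers chosen to reproduce the extremes of the reported batch-dialysis range, not measured values.

```python
def refolding_yield(c_before, c_after):
    """Refolding yield (%) from the soluble protein concentration (g/L)
    in the clear supernatant before and after dialysis."""
    return 100.0 * c_after / c_before

# Illustrative concentrations (g/L): batch-dialysis yields in the study
# fell from 67% at the lowest load to 12% at the highest.
low_load = refolding_yield(0.2, 0.134)   # about 67 %
high_load = refolding_yield(8.5, 1.02)   # about 12 %
```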
Procedia PDF Downloads 294
280 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. Measurement of soil moisture content allows assessment of soil water resources in hydrology and agronomy. The second parameter interacting with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor, due to the large variability of the correlation function; as a consequence, such models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the second is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bidimensional wavelet transform and the Mallat algorithm to describe natural surfaces more accurately. We characterize the soil surface and sub-surface by a three-layer geo-electrical model. 
The upper layer is described by its dielectric constant, thickness, volume scattering parameters, and a multiscale bidimensional surface roughness model based on the wavelet transform and the Mallat algorithm. The lower layer is divided into three fictive layers separated by assumed plane interfaces. These three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted a 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the backscattering coefficient's dependence on multiscale roughness and soil moisture has been performed. We then proposed to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure. Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
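The bidimensional wavelet description of surface roughness can be illustrated with a single level of the Mallat algorithm using the Haar wavelet; a real roughness analysis would use longer filters over several decomposition levels, and the height values below are illustrative, not soil data.

```python
def haar_2d_level(surface):
    """One Mallat level of the 2-D Haar transform on an even-sized
    height map: returns (approximation, horizontal, vertical, diagonal)
    sub-bands, each half the size of the input."""
    rows, cols = len(surface), len(surface[0])
    a, h, v, d = [], [], [], []
    for i in range(0, rows, 2):
        ra, rh, rv, rd = [], [], [], []
        for j in range(0, cols, 2):
            p, q = surface[i][j], surface[i][j + 1]
            r, s = surface[i + 1][j], surface[i + 1][j + 1]
            ra.append((p + q + r + s) / 2.0)  # low-low: local mean height
            rh.append((p - q + r - s) / 2.0)  # low-high: horizontal detail
            rv.append((p + q - r - s) / 2.0)  # high-low: vertical detail
            rd.append((p - q - r + s) / 2.0)  # high-high: diagonal detail
        a.append(ra); h.append(rh); v.append(rv); d.append(rd)
    return a, h, v, d

# Illustrative 2x2 height patch (cm): a flat surface leaves no detail,
# a left-right step shows up only in the horizontal sub-band.
a_flat, h_flat, v_flat, d_flat = haar_2d_level([[1.0, 1.0], [1.0, 1.0]])
a_step, h_step, v_step, d_step = haar_2d_level([[0.0, 1.0], [0.0, 1.0]])
```

Repeating this step on the approximation sub-band yields the coarser spatial scales of the multiscale description.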
Procedia PDF Downloads 125
279 Performance Evaluation of Various Displaced Left Turn Intersection Designs
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety, two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection's (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on crossover displaced left-turn (XDL) intersections, also known as continuous flow intersections (CFI). The XDL concept is best suited to intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether a partial XDL intersection is adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios. 
The study intersection, located in Orlando, Florida, experiences recurring congestion in the PM peak hour and operates near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements, southbound and westbound. The results showed that the partial EN XDL alternative proved effective and compared favorably to the full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW, and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay, and queue lengths. Throughput improvements were most remarkable at the higher volume levels, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes, or only one of the critical movements while other critical movements operate near or above capacity, does not provide significant benefits compared to the conventional intersection. Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model
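The delay benefit of removing the left-turn phase can be sketched with the standard HCM uniform-delay term for one signalized movement; the cycle length, green times, and v/c ratio below are illustrative inputs, not values from the VISSIM model, so the resulting reduction is only indicative of the mechanism, not of the 30-45% reported in the study.

```python
def uniform_delay(cycle, green, x):
    """HCM uniform delay term d1 (s/veh) for a signalized movement:
    cycle and green in seconds, x = volume-to-capacity ratio."""
    g_over_c = green / cycle
    return 0.5 * cycle * (1 - g_over_c) ** 2 / (1 - min(1.0, x) * g_over_c)

# Illustrative comparison: eliminating the protected left-turn phase
# (as in an XDL design) frees green time for the through movement
# within the same cycle length.
ci_delay = uniform_delay(cycle=120, green=40, x=0.95)
xdl_delay = uniform_delay(cycle=120, green=60, x=0.95)
reduction = 100 * (ci_delay - xdl_delay) / ci_delay
```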
Procedia PDF Downloads 310
278 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper gives a carrier-sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; the use of such a communication model is therefore an important method in the construction of high-performance communication. SystemC has been selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication created via the modelling of the CSMA protocol can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). The equipment of vehicles with wireless communication capabilities is expected to be the key to the evolution to next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. Hence, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range in V2X communication. 
The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into consideration. Multiagent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This scholarly work presents an exploration of essential communication factors and conducts a comparative assessment of diverse protocols used in multiagent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The research also sheds light on emerging trends in communication protocols for multiagent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multiagent systems and their communication protocols. Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
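The channel-access coordination that the CSMA model provides for the swarm agents can be sketched in miniature as below. This is a minimal slotted, non-persistent CSMA analogue in Python, not the SystemC model itself; the agent count, transmit probability, and slot structure are all illustrative assumptions.

```python
import random

def csma_slot(agents_ready, sense_busy, rng):
    """One slot of a simple non-persistent CSMA policy: each agent with
    a frame senses the channel; if it is busy, everyone defers.
    Otherwise each agent transmits with probability 0.5, and two or
    more simultaneous transmissions collide so no frame is delivered."""
    if sense_busy:
        return []
    transmitters = [a for a in agents_ready if rng.random() < 0.5]
    if len(transmitters) == 1:
        return transmitters  # exactly one sender: successful delivery
    return []  # collision, or nobody attempted: nothing delivered

def run(n_agents, n_slots, seed=1):
    """Count frames delivered over n_slots for n_agents contending
    on an otherwise idle shared channel."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_slots):
        delivered += len(csma_slot(list(range(n_agents)), False, rng))
    return delivered
```

At most one frame can succeed per slot, which is the medium-coordination property the abstract attributes to the CSMA protocol.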
Procedia PDF Downloads 26
277 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control: the temperature may change unpredictably due to friction, and hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals, and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To handle these drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about product quality. This is combined with real-time process information in an advanced process control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. 
This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, relating the percent conversion to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV real-time system to carry out automated optimization experiments, maximizing the percentage conversion over a set of process parameters in a design-of-experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS (pseudo-random binary sequence) process response tests around the optimum was conducted. The resulting dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality while ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE-based mechanosynthetic processes. Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
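The calibration idea, regressing NMR-confirmed conversion onto spectral intensity, can be illustrated in its simplest form with a one-wavelength least-squares fit; a real PLS model spans the full spectrum and many latent variables, and the absorbance/conversion pairs below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single predictor,
    a one-wavelength stand-in for the multivariate PLS calibration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Illustrative calibration set: absorbance at one wavelength (AU)
# versus NMR-confirmed conversion (%), not data from the study.
absorbance = [0.10, 0.25, 0.40, 0.55]
conversion = [10.0, 32.5, 55.0, 77.5]
slope, intercept = fit_line(absorbance, conversion)
predicted = slope * 0.30 + intercept  # conversion estimate at A = 0.30
```

Once deployed, each new spectrum yields a conversion estimate that the optimizer and MPC can act on in real time.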
Procedia PDF Downloads 179