Search results for: flexible packaging
268 An In-Depth Comparison Study of the Canadian and Danish Entrepreneurship and Education Systems
Authors: Amna Khaliq
Abstract:
In this research paper, a comparison study has been undertaken between Canada and Denmark to analyze the two countries' education systems in entrepreneurship. Denmark, a land of high wages and high taxes, and Canada, a land of immigrants and opportunities, have both seen positive growth in entrepreneurship. They are both considered among the top ten countries globally in which to start a business and to receive government support. However, education is entirely free for Danish students, including university degrees, unlike for Canadians, which can be a further hurdle for Canadian millennials trying to grow in the business world; businesses experience more growth under educated entrepreneurs, and new immigrants bring international backgrounds. Denmark has seen a gradual increase in female entrepreneurs over the decade, though the rate is still lower than in other OECD countries. Compassionate management and work-life balance are prioritized in Denmark, unlike in Canada. Danes are early adopters of technology and have excellent infrastructure to support the technology industry, whereas Canada is still a service-oriented and manufacturing-based country. 2018 saw the highest number of business openings in both Canada and Denmark. Some companies offered high wages, hiring bonuses, flexible working hours, and wellness and mental health benefits during the pandemic to keep operating and to keep their workers' morale high. The pandemic has taught consumers new patterns of shopping online. It is now essential to use technology and automation to increase productivity in businesses; only companies that apply this strategy will survive. The pandemic has ultimately changed entrepreneurs' and employees' behavior in the business world. Along with Ph.D. professors, entrepreneurs should be allowed to teach at learning institutions. Millennials turn out to be the most entrepreneurial generation in both countries.
Entrepreneurship education will only be beneficial when students create businesses and learn from real-life experiences. Managing physical, mental, emotional, and psychological health while dealing with the high pressure of entrepreneurship involves soft skills learned through practical work.
Keywords: entrepreneurship education, millennials, pandemic, Denmark, Canada
Procedia PDF Downloads 105
267 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India
Authors: Anushtha Saxena
Abstract:
This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collecting, storing, and analysing of consumers' data in order to use the data generated for profits, revenue, etc. Data monetisation enables e-commerce companies to obtain better business opportunities, innovative products and services, and a competitive edge over others, and to generate millions in revenue. This paper analyses the issues and challenges faced due to the process of data monetisation. Some of the issues highlighted in the paper pertain to the right to privacy and the protection of e-commerce consumers' data. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India. The Supreme Court of India recognized the right to privacy as a fundamental right in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals' right to privacy by using the data they collect and store for economic gain and monetisation. The researcher has mainly focused on e-commerce companies, such as online shopping websites, to analyse the legal issue of data monetisation. In the age of the Internet of Things and digital commerce, people have shifted to online shopping as it is convenient, easy, flexible, comfortable, and time-saving. But at the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or generating more data from the data stored with them. This violates individuals' right to privacy because consumers know little about what happens to their data when they provide it online. Often, data is also collected without individuals' consent.
Data can be structured, unstructured, etc., and is used by analytics to monetise. Indian legislation, such as the Information Technology Act, 2000, does not effectively protect e-consumers with respect to their data and how it is used by e-commerce businesses to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could make a huge impact on data monetisation. This paper also aims to study the European Union General Data Protection Regulation and how this legislation could be helpful in the Indian scenario concerning e-commerce businesses with respect to data monetisation.
Keywords: data monetization, e-commerce companies, regulatory framework, GDPR
Procedia PDF Downloads 120
266 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination
Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo
Abstract:
In this paper, linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4, and 8/6) for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. This study is focused on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world's drive to develop renewable energy technologies through dedicated scientific research, with its positive impact on the economy and the environment, was the motive behind this study. In addition, the problem with rare-earth metals (used in permanent magnets), caused by mining limitations, export bans by top producers, and environmental restrictions, leads to the unavailability of materials used for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze, and determine the optimum design of the SRG, which has the benefit of being free from permanent magnets and rotor windings, has a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has proved to be very efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach to the Switched Reluctance Machine (SRM). About 90 different pole-angle combinations and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. This procedure has proved to be applicable to any SRG configuration, dimension, and excitation pattern.
The results of this study provide evidence for using the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.
Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator
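In standard linear (magnetically unsaturated) SRM theory, the per-phase voltage is v = Ri + L(θ)di/dt + iω dL/dθ, and the motional term iω dL/dθ is the generated EMF per unit excitation current that the pole-angle optimization seeks to maximize. A minimal numpy sketch of that relationship is given below; it is not the authors' MATLAB matrix model, and the triangular inductance profile and all parameter values (L_U, L_A, BETA) are assumed for illustration only.

```python
import numpy as np

# Assumed, illustrative values -- not taken from the paper.
L_U, L_A = 0.008, 0.060        # unaligned / aligned phase inductance (H)
BETA = np.deg2rad(30)          # overlap angle over which L(theta) rises (rad)

def inductance(theta):
    """Idealized triangular phase inductance vs. rotor angle (rising ramp)."""
    theta = np.clip(theta, 0.0, BETA)
    return L_U + (L_A - L_U) * theta / BETA

def dL_dtheta(theta):
    """Slope of the inductance profile (H/rad) on the rising ramp."""
    return (L_A - L_U) / BETA if 0.0 < theta < BETA else 0.0

def generated_emf(i, theta, omega):
    """Motional EMF e = i * omega * dL/dtheta for excitation i at speed omega."""
    return i * omega * dL_dtheta(theta)

# Example: 10 A excitation, mid-overlap rotor position, 150 rad/s
e = generated_emf(10.0, np.deg2rad(15), 150.0)
```

Sweeping BETA (and the corresponding excitation window) over candidate pole-angle combinations and recording the EMF per unit current is, in spirit, the optimization the abstract describes.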
Procedia PDF Downloads 198
265 Assessing the Clinicians’ Perspectives on Formulation with Minoxidil, Finasteride, and Capixyl™ in Androgenetic Alopecia: A Nationwide Dermatologist Survey
Authors: Sharma Aseem, Dhurat Rachita, Pawar Varsha, Khalse Manisha
Abstract:
Introduction: Androgenetic alopecia (AGA) is a prevalent condition characterized by progressive hair thinning driven by genetic and androgen-related factors. The current FDA-approved treatments include oral finasteride and topical minoxidil, though many patients seek combination therapies to enhance results. This study aims to evaluate the effectiveness of a combination therapy involving Minoxidil, Finasteride, and Capixyl™ based on feedback from dermatologists. Methodology: A survey, validated by experts, was distributed to 29 leading dermatologists across India (in Tier 1 and 2 cities). The survey examined real-world clinical experiences, focusing on patient outcomes and the overall effectiveness of the mentioned formulation. Results: Among the surveyed dermatologists, 41.4% identified women aged 35-40 as the most frequently diagnosed with female pattern hair loss. The combination therapy with Minoxidil, Finasteride, and Capixyl™ was utilized by 34.5% of dermatologists for over 60 patients per month. The majority highlighted the benefits of this combination therapy, which acts via multiple mechanisms, such as vasodilation and dihydrotestosterone (DHT) receptor blockade, resulting in improved hair regrowth. Additionally, patients demonstrated better clinical outcomes, enhanced compliance, and fewer side effects. Demographically, younger patients, particularly those with AGA for less than 10 years, responded more positively to the treatment. Early intervention led to quicker and more significant results. Overall satisfaction among dermatologists was high, with 86.2% expressing positive feedback on the therapy. In terms of treatment outcomes, 51.7% of dermatologists observed visible results within 4-6 months, while 34.5% noticed a significant reduction in hair fall within 8-12 weeks. Improvements in scalp health were reported by 48.3%, and 51.7% saw an increased hair density within 3-4 months. 
Despite mild side effects such as scalp irritation, dryness, flaking, and occasional issues like folliculitis, headaches, itching, and redness, patient satisfaction remained high. Dermatologists reported that 93.1% of patients experienced faster and better hair regrowth with Capixyl™ than with Minoxidil alone. Suggestions for improving the formulation included incorporating ingredients like Saw Palmetto and enhancing product packaging to better meet patient needs. Discussion: The combination of Minoxidil, Finasteride, and Capixyl™ yielded positive clinical outcomes, especially in improving hair density, scalp health, and overall patient satisfaction. Dermatologists found that Capixyl™ peptides enhanced the therapeutic effect, promoting hair regrowth and improving compliance. While side effects were generally mild, there were suggestions to further improve the formulation by adding ingredients such as Saw Palmetto. Conclusion: The combination of Minoxidil and Finasteride fortified with Capixyl™ presents a promising therapeutic option for managing AGA. Dermatologists reported significant improvements in hair density, scalp health, and patient satisfaction. With its favorable efficacy and manageable side effects, this formulation proves to be a valuable addition to the treatment landscape for AGA.
Keywords: androgenetic alopecia, combination therapy, minoxidil, finasteride, capixyl
Procedia PDF Downloads 13
264 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation
Authors: Diandra Cristache
Abstract:
The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purpose of research. This paper explores the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. Beyond providing a realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to respond to specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps).
The computational process and the digital outputs are thoroughly illustrated and discussed. Finally, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, conversely, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.
Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues
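The data-networking step described above starts from structured TEI descriptions and extracts relations (for example, manuscripts sharing a place of origin) that can feed relational graphs and geographic maps. The sketch below is purely illustrative: the TEI fragment and identifiers are invented, not drawn from the Polonsky project, though the element names (msDesc, origPlace) follow the TEI manuscript-description module.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Toy TEI fragment with two manuscript descriptions (invented data).
SAMPLE = """<teiHeader xmlns="http://www.tei-c.org/ns/1.0">
  <msDesc xml:id="MS-A"><history><origin><origPlace>Constantinople</origPlace></origin></history></msDesc>
  <msDesc xml:id="MS-B"><history><origin><origPlace>Constantinople</origPlace></origin></history></msDesc>
</teiHeader>"""

def origin_edges(xml_text):
    """Group manuscripts by origPlace: the edge list of a place-manuscript graph."""
    root = ET.fromstring(xml_text)
    edges = defaultdict(list)
    for ms in root.findall(".//tei:msDesc", TEI_NS):
        place = ms.find(".//tei:origPlace", TEI_NS)
        if place is not None and place.text:
            edges[place.text].append(ms.get(XML_ID))
    return dict(edges)
```

An edge list of this kind is the minimal input both for a relational graph (places as hubs, manuscripts as leaves) and, once place names are geocoded, for a geographic map.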
Procedia PDF Downloads 140
263 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation
Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos
Abstract:
The Middle East is highly affected by air pollution induced by anthropogenic and natural phenomena. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have warned of the high concentration of particulates and their effect not just around industrial and construction areas but also in the immediate working and living environment. One of the methods to study air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures per the European and US standards. However, in many cases they have proved insufficient to accurately capture the spatial variability of air pollution due to the small number of installations, which is ultimately attributed to the high cost of the equipment and the limited availability of users with the required expertise and scientific background. Passive sampling is an alternative that addresses the limitations of the active methods: it is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible, though less accurate, option. This study aims to investigate and evaluate the use of passive sampling for particulate matter pollution monitoring in dry tropical climates, like that of the Middle East. More specifically, a number of field measurements have been conducted, both indoors and outdoors, in Qatar, and the results have been compared with active sampling equipment and the reference methods. The samples have been analyzed to obtain particle size distributions by applying existing laboratory techniques (optical microscopy) and by exploring new approaches like white light interferometry. Then the new parameters of the well-established model have been calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models.
The outcome of this project is expected to have an impact on the public as well, as it will raise awareness among people about quality of life and about the importance of implementing a research culture in the community.
Keywords: air pollution, passive samplers, interferometry, indoor, outdoor
Procedia PDF Downloads 398
262 The Challenges of Digital Crime Nowadays
Authors: Bendes Ákos
Abstract:
Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crimes have evolved and transformed. For this reason, it is extremely important to examine these types of crimes in order to get a comprehensive picture of them, with which we can help the authorities in their work. In 1865, with early technologies, people were able to forge a picture of a quality that is not even recognizable today. With the help of today's technology, authorities receive a lot of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to get a real picture of the authenticity of the given evidence. The digital world has many dangers. Unfortunately, we live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices present in our personal lives. This is not only a burden on us, since companies, state institutions, and public utilities are also forced to do so. The training of specialists and experts is essential so that the authorities can manage incoming digital evidence at some level. When analyzing evidence, it is important to be able to examine it from the moment it is created. Establishing authenticity is a very important issue during official procedures. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally. Otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems.
The most effective way to avoid digital crimes is to prevent them, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crimes. After a comparison of Hungarian investigative techniques with international practice, proposals for modernization will be given. Sufficiently stable yet flexible legislation is needed that can keep pace with rapid changes in the world and provide an appropriate framework in advance rather than regulating after the fact. It is also important to be able to distinguish between digital and digitalized evidence, as their degrees of probative force differ greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crimes.
Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism
Procedia PDF Downloads 63
261 Kirigami Designs for Enhancing the Electromechanical Performance of E-Textiles
Authors: Braden M. Li, Inhwan Kim, Jesse S. Jur
Abstract:
One of the fundamental challenges in the electronic textile (e-textile) industry is the mismatch in compliance between rigid electronic components and the soft textile platforms onto which they are integrated. To address this problem, various printing technologies using conductive inks have been explored in an effort to improve electromechanical performance without sacrificing the innate properties of the printed textile. However, current printing methods deposit densely layered coatings onto textile surfaces with low through-plane wetting, resulting in poor electromechanical properties. This work presents an inkjet printing technique in conjunction with unique Kirigami cut designs to address these issues for printed smart textiles. By utilizing particle-free reactive silver inks, our inkjet process produces conformal, micron-thick silver coatings that surround the individual fibers of the printed smart textile. This results in a highly conductive (0.63 Ω sq⁻¹) printed e-textile while maintaining the innate properties of the textile material, including stretchability, flexibility, breathability, and fabric hand. Kirigami is the Japanese art of paper cutting. By utilizing periodic cut designs, Kirigami imparts enhanced flexibility and delocalizes stress concentrations. Kirigami cut design parameters (i.e., cut spacing and length) were correlated to both the mechanical and electromechanical properties of the printed textiles. We demonstrate that designs using a higher cut-out ratio exponentially soften the textile substrate. Thus, our designs achieve a 30x improvement in overall stretchability, a 1000x decrease in elastic modulus, and minimal resistance change over strain regimes of 100-200% when compared to uncut designs. We also show minimal resistance change in our Kirigami-inspired printed devices after being stretched to 100% for 1000 cycles.
Lastly, we demonstrate a Kirigami-inspired electrocardiogram (ECG) monitoring system that improves stretchability without sacrificing signal acquisition performance. Overall, this study suggests fundamental parameters affecting the performance of e-textiles and their scalability in the wearable technology industry.
Keywords: kirigami, inkjet printing, flexible electronics, reactive silver ink
Procedia PDF Downloads 143
260 Monetary Policy and Asset Prices in Nigeria: Testing for the Direction of Relationship
Authors: Jameelah Omolara Yaqub
Abstract:
One of the main reasons for the existence of a central bank is the belief that central banks have some influence on private sector decisions, which enables the central bank to achieve some of its objectives, especially those of stable prices and economic growth. Under the New Keynesian assumption that prices are not fully flexible in the short run, the central bank can temporarily influence the real interest rate and, therefore, have an effect on real output in addition to nominal prices. There is, therefore, the need for the central bank to monitor, respond to, and influence private sector decisions appropriately. This shows that the central bank and the private sector both affect and are affected by each other, implying considerable interdependence between the sectors. The interdependence may be simultaneous or not, depending on the level of information readily available and how sensitive prices are to agents' expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous or not, and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies have ignored the possibility of simultaneous interdependence, while those that have allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy where information might not be readily available to influence agents' expectations. In this study, the direction of the relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested.
Asset prices will be represented by the NSE index and real estate prices, while monetary policy will be represented by the money supply and the MPR. The VAR model will be used to analyse the relationship between the variables in order to take account of the potential simultaneity of interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to effectively influence private sector decisions and thereby achieve its objectives of price stability and economic growth.
Keywords: asset prices, granger causality, monetary policy rate, Nigeria
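The Granger causality test the abstract describes asks whether lagged values of one series improve the prediction of another beyond the latter's own lags. A from-scratch, one-lag sketch on synthetic data is shown below (packages such as statsmodels provide the full multi-lag test); the series here are generated for illustration and are not the study's Nigerian data.

```python
import numpy as np

# Synthetic data: y is built to depend on lagged x, so x Granger-causes y.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def rss(Y, X):
    """Residual sum of squares from an OLS fit of Y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

Y = y[1:]
ones = np.ones(n - 1)
restricted = np.column_stack([ones, y[:-1]])            # y's own lag only
unrestricted = np.column_stack([ones, y[:-1], x[:-1]])  # add lagged x
rss_r, rss_u = rss(Y, restricted), rss(Y, unrestricted)

# F-statistic for the single restriction "lagged x has no effect";
# a large F rejects the null that x does not Granger-cause y.
dof = len(Y) - unrestricted.shape[1]
F = (rss_r - rss_u) / (rss_u / dof)
```

Running the test in both directions (y on x, then x on y) is what establishes the direction of the relationship between asset prices and monetary policy variables.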
Procedia PDF Downloads 220
259 Copper Phthalocyanine Nanostructures: A Potential Material for Field Emission Display
Authors: Uttam Kumar Ghorai, Madhupriya Samanta, Subhajit Saha, Swati Das, Nilesh Mazumder, Kalyan Kumar Chattopadhyay
Abstract:
Organic semiconductors have gained considerable interest in the last few decades for their significant contributions to various fields such as solar cells, non-volatile memory devices, field effect transistors, and light emitting diodes. The most important advantages of using organic materials are mechanical flexibility, light weight, and low-temperature deposition techniques. Recently, with the advancement of nanoscience and technology, one-dimensional organic and inorganic nanostructures such as nanowires, nanorods, and nanotubes have gained tremendous interest due to their very high aspect ratio and large surface area for electron transport. Among them, self-assembled organic nanostructures like copper and zinc phthalocyanine have shown good transport properties and thermal stability due to their π-conjugated bonds and π-π stacking, respectively. Field emission properties of inorganic and carbon-based nanostructures are widely reported in the literature, but there are few reports on the cold cathode emission characteristics of organic semiconductor nanostructures. In this work, the authors report the field emission characteristics of chemically and physically synthesized copper phthalocyanine (CuPc) nanostructures such as nanowires, nanotubes, and nanotips. The as-prepared samples were characterized by X-ray diffraction (XRD), ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), field emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The field emission characteristics were measured in our home-designed field emission setup. The registered turn-on field and local field enhancement factor are found to be less than 5 V/μm and greater than 1000, respectively. The field emission behaviour is also stable for 200 minutes. The experimental results are further verified theoretically using a finite displacement method as implemented in the ANSYS Maxwell simulation package.
The obtained results strongly indicate CuPc nanostructures to be potential candidates as electron emitters for field-emission-based display device applications.
Keywords: organic semiconductor, phthalocyanine, nanowires, nanotubes, field emission
Procedia PDF Downloads 501
258 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a mobility disease, it may greatly benefit from the antenna’s near-field approach with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna mounted (using cloth) on the dorsal wrist to differentiate actual Parkinson’s disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders, such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data is normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used.
The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
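The GWT step above turns each 1-D reflection-coefficient trace into a 2-D time-frequency image that a convolutional model can consume. The abstract gives no implementation details, so the sketch below is a minimal numpy illustration of that idea: complex Gabor (Morlet-style) wavelets at several scales, with a synthetic signal and assumed wavelet parameters rather than the study's measured Γ data.

```python
import numpy as np

def gabor_scalogram(signal, scales, w0=6.0):
    """Convolve the signal with complex Gabor wavelets at several scales;
    the magnitudes stack into a 2-D image of shape (len(scales), len(signal))."""
    n = len(signal)
    image = np.empty((len(scales), n))
    for row, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                 # wavelet support
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        wavelet /= np.sqrt(s)                            # scale normalization
        image[row] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return image

# Synthetic trace: a slow baseline plus a transient high-frequency burst,
# loosely mimicking a tremor appearing partway through an exercise.
t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 5 * t) + (t > 0.5) * np.sin(2 * np.pi * 40 * t)
img = gabor_scalogram(sig, scales=[2, 4, 8, 16, 32])
```

Each such image can then be fed to the lightweight CNN, with stratified 5-fold splits preserving the PD/non-PD class balance across folds.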
Procedia PDF Downloads 7
257 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has increasingly been appearing in the EU's retail network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the events of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. The replacement of expensive or unavailable types of meat with low-priced meat has gone on at a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products - color, aroma, flavor, consistency, and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly due to their wide availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis.
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses a problem arising from the fact that most molecular biological work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because in fragmented DNA, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsifying meat products by adding components derived from pigs is all the greater because it concerns not only the economic aspect but above all the religious and social aspects. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
Keywords: food fraud, halal food, pork, qPCR
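The calibration-curve quantification mentioned above works because the quantification cycle (Cq) is linear in log10 of the starting copy number: a standard curve is fitted to a serial dilution, its slope gives the amplification efficiency (a slope of about -3.32 corresponds to 100%), and an unknown sample's Cq is read back through the fit. A sketch with synthetic dilution-series numbers (not the study's data):

```python
import numpy as np

# Synthetic standard curve: a 10-fold serial dilution and its measured Cq values.
log_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
cq = np.array([14.1, 17.5, 20.8, 24.2, 27.5, 30.9])

slope, intercept = np.polyfit(log_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1      # ~1.0 means ~100% amplification efficiency

def quantify(cq_unknown):
    """Read an unknown sample's absolute copy number off the standard curve."""
    return 10 ** ((cq_unknown - intercept) / slope)

copies = quantify(22.5)                  # an assumed unknown-sample Cq
```

In practice the unknown's Cq should fall within the Cq range of the standards, since extrapolating beyond the dilution series inflates quantification error.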
Procedia PDF Downloads 247
256 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector
Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador
Abstract:
This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be more accurately evaluated by country and sector. Furthermore, the Bayesian approach allows for testing different hypotheses about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important. That importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed. 
Industry effects, which also depend on the sector and country analysed, are of more marginal importance and are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar for most of the sectors and countries analysed.
Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets
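The persistence figures above can be read as AR(1) coefficients: an adjustment speed λ implies a year-on-year persistence of φ = 1 - λ. The sketch below is a simplified stand-in for the paper's hierarchical Bayesian model: it simulates one firm's abnormal ROA as the sum of an industry-wide and a firm-specific AR(1) component, and converts each persistence into a shock half-life. The noise scales and seed are arbitrary illustrative choices.

```python
import math
import random

def half_life(phi):
    """Years for a shock to decay to half its size under AR(1) persistence phi."""
    return math.log(0.5) / math.log(phi)

def simulate_abnormal_roa(phi_industry=0.925, phi_firm=0.66,
                          sigma_industry=0.3, sigma_firm=1.0,
                          n_years=9, seed=7):
    """One firm's abnormal ROA as industry AR(1) + firm AR(1) (phi = 1 - speed).
    phi_firm = 0.66 mirrors the ~34% firm adjustment speed reported above,
    phi_industry = 0.925 the ~7-8% industry speed."""
    rng = random.Random(seed)
    industry, firm, path = 0.0, 0.0, []
    for _ in range(n_years):
        industry = phi_industry * industry + rng.gauss(0, sigma_industry)
        firm = phi_firm * firm + rng.gauss(0, sigma_firm)
        path.append(industry + firm)   # observed abnormal ROA
    return path

path = simulate_abnormal_roa()
```

The half-lives make the asymmetry concrete: a firm-specific shock halves in under two years, while an industry shock takes close to nine.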
Procedia PDF Downloads 401
255 Free Fibular Flaps in Management of Sternal Dehiscence
Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu
Abstract:
Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. The etiologies that lead to sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, cardiac infarctions, and renal and respiratory failure. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum and can be flexible enough to allow physiological breathing movements of the chest; however, these materials fall short when managing patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates compared to delayed closure or to patients treated with sternal rewiring and closed drainage. Pectoralis major, rectus abdominis, and latissimus muscle flaps have all been described in the literature as great alternatives. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested flaps utilized in reconstruction around the body. In cases of sternal reconstruction with free fibular flaps, the literature has exclusively discussed the flap applied vertically to the chest wall. We present a different technique, applying the free fibular triple-barrel flap oriented in a transverse manner, parallel to the ribs. 
In our experience, this method may yield enhanced results and an improved prognosis, as it contributes to the normal circumferential shape of the chest wall.
Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques
Procedia PDF Downloads 93
254 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis
Authors: Claudia Aguilar, Amen Farooq
Abstract:
Among other cities in the United States, the city of Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With the surge of population influx, the available housing is practically bursting at the seams. Supply is low, and demand is high. As a result, the average one-bedroom apartment runs for 1,800 dollars per month. The city is desperately seeking new opportunities to provide affordable housing at an expeditious rate. This has been made evident by the recent updates to the city's zoning. With the recent influx in the housing market, young professionals, in particular millennials, are desperately looking for alternatives to stay within the city. To remedy Atlanta's affordable housing crisis, the city of Atlanta is planning to introduce 40 thousand new affordable housing units by 2026. To meet the urgent need for more affordable housing, the architectural response needs to adapt. A method that has proven successful in modern housing is modular development, a method that has, however, been constrained to the dimensions of the maximum load of an eighteen-wheeler. This constraint has diluted the architect's ability to produce site-specific, informed design and instead contributes to the "cookie cutter" stigma with which the method has been labeled. This thesis explores the design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that could break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing. 
This proposal aims specifically to design a kit of parts that is easy to transport and assemble but also allows the use of components to be customized to suit unique site conditions. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption while providing quality housing with affordable and flexible options.
Keywords: adaptive assemblies, modular architecture, adaptability, constructability, kit of parts
Procedia PDF Downloads 85
253 Plastic Deformation Behavior of a Pre-Bored Pile Filler Material Due to Lateral Cyclic Loading in Sandy Soil
Authors: A. Y. Purnama, N. Yasufuku
Abstract:
A bridge is a structure that has to be maintained, and this applies especially to its elastomeric bearings. The girder of the bridge needs to be lifted upward to maintain these elastomeric bearings, which is costly. Nowadays, integral abutment bridges are becoming popular. The integral abutment bridge is less costly because the elastomeric bearings are eliminated, which reduces both construction and maintenance costs. However, when the elastomeric bearing is removed, the girder movement due to environmental thermal forces is directly supported by the pile foundation, and this needs to be considered in the design. In the case of a pile foundation in stiff soil, the top of the pile cannot move freely because it is fixed by the soil stiffness. A pre-bored pile system can be used to increase the flexibility of the pile foundation by means of a pre-bored hole filled with elastic material, but the behavior of the soil-pile interaction and the soil response due to this system is still rarely explained. In this paper, an experimental study was conducted using a small-scale laboratory test on a half-size model. A single flexible pile model was embedded in sandy soil with a pre-bored ring filled with the filler material. The testing box was made with an acrylic glass panel as an observation area for the pile shaft, to monitor the displacement of the pile during lateral loading. The failure behavior of the soil inside the pre-bored ring and around the pile shaft was investigated to determine the point of pile rotation and the movement of this point along the pile shaft due to the pre-bored ring system. Digital images taken through the acrylic glass on the side of the testing box were used to capture the deformations of the soil and pile foundation during loading. The results are presented in the form of lateral load resistance charts plotted against the pile shaft displacement. The failure pattern due to the cyclic lateral loading was also established. 
The movement of the rotational point was measured for the pre-bored system filled with the appropriate filler material. Based on the findings, design considerations for pre-bored pile systems under cyclic lateral loading can be introduced.
Keywords: failure behavior, pre-bored pile system, cyclic lateral loading, sandy soil
Procedia PDF Downloads 233
252 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and between themselves. The open-source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. 
Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
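The lumped-mass representation and the fourth-order Runge-Kutta integration described above can be illustrated with a minimal one-dimensional sketch: three net nodes joined by elastic lines, subject to a linearised drag toward a uniform fluid velocity. This is a toy stand-in, not the caffa3d.MBRi coupling itself; all parameter values are arbitrary.

```python
def rk4_step(state, dt, deriv):
    """Classic fourth-order Runge-Kutta step for state = (positions, velocities)."""
    def add(s, ds, h):
        return ([p + h * dp for p, dp in zip(s[0], ds[0])],
                [v + h * dv for v, dv in zip(s[1], ds[1])])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    pos = [p + dt / 6 * (a + 2 * b + 2 * c + d)
           for p, a, b, c, d in zip(state[0], k1[0], k2[0], k3[0], k4[0])]
    vel = [v + dt / 6 * (a + 2 * b + 2 * c + d)
           for v, a, b, c, d in zip(state[1], k1[1], k2[1], k3[1], k4[1])]
    return pos, vel

def make_deriv(k=50.0, rest=1.0, drag=2.0, u_fluid=1.0, mass=0.1):
    """1-D chain of lumped masses joined by elastic lines, dragged by a uniform flow."""
    def deriv(state):
        pos, vel = state
        acc = [drag * (u_fluid - v) / mass for v in vel]  # linearised drag
        for i in range(len(pos) - 1):
            s = (pos[i + 1] - pos[i]) - rest   # spring stretch
            acc[i] += k * s / mass
            acc[i + 1] -= k * s / mass
        return vel, acc
    return deriv

# Three-node net segment released from rest in a 1 m/s stream
pos = [0.0, 1.0, 2.0]
vel = [0.0, 0.0, 0.0]
deriv = make_deriv()
for _ in range(2000):                     # 10 s at dt = 0.005 s
    pos, vel = rk4_step((pos, vel), 0.005, deriv)
```

With identical drag on every node and the springs starting at rest length, the chain advects rigidly: each node's velocity relaxes exponentially to the fluid velocity while the spacing stays fixed.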
Procedia PDF Downloads 416
251 Evaluation to Assess the Impact of Newcastle Infant Partnership Approach
Authors: Samantha Burns, Melissa Brown, Judith Rankin
Abstract:
Background: As a specialised intervention, the Newcastle Infant Partnership (NEWPIP) approach provides a service which supports both parents and their babies, from conception to two years, where parents are experiencing issues that may affect the quality of their relationship and the development of the infant. This evaluation of the NEWPIP approach was undertaken in response to the need for rich, in-depth data on the lived experiences of the parents who used the service, in order to improve it. NEWPIP is currently one of 34 specialised parent–infant relationship teams across England. This evaluation contributes to increasing understanding of the impact and effectiveness of this specialised service to inform future practice. Aim: The aim of this evaluation was to explore the perspectives and experiences of parents or caregivers (service users) in order to assess the impact of the NEWPIP service on the parents themselves and on their relationship with their baby. Methods: The exploratory nature of the aim and the focus on service users' experiences and perspectives provided scope for a qualitative approach. This consisted of 10 semi-structured interviews with parents who had received the service within the last two years. Recruitment involved both purposive and convenience sampling. The interviews took place between February and March 2021, lasted between 30 and 90 minutes, and were guided by open-ended questions from a topic guide. The interviews adopted a narrative approach to enable the parents to share their lived experiences. The researchers transcribed the interviews and analysed the data thematically, using a coding method grounded in the data. Results: The analysis and findings from the data gathered illuminated an approach which supports parents to build a better bond with their baby and provides a safe space for parents to heal through their relationships. 
As well as inviting parents to share their experiences, the interviews were intended to gather feedback, so questions were asked about what could be improved and what recommendations could be offered to Children North East. Guided by the voice of the parents, this evaluation provides recommendations to support the future of the NEWPIP approach. Conclusions: The NEWPIP approach appears to successfully provide early and flexible support for new parents, increasing parents' confidence in their ability to not only cope but thrive as new parents.
Keywords: maternal health, mental health, parent infant relationship, therapy
Procedia PDF Downloads 192
250 Physiological and Psychological Influence on Office Workers during Demand Response
Authors: Megumi Nishida, Naoya Motegi, Takurou Kikuchi, Tomoko Tokumura
Abstract:
In recent years, the power system has been changing, and flexible power pricing schemes such as demand response have been sought in Japan. The demand response system is simple in the household sector, where the owner, as the decision-maker, can gain the benefits of power saving. On the other hand, the execution of demand response in an office building is more complex than in a household because various people, such as owners, building administrators and occupants, are involved in making decisions. While the owners benefit from the demand saving, the occupants are forced to be exposed to the demand-saved environment without clear benefits. One of the reasons is that building systems are usually centrally controlled, so each occupant cannot choose whether or not to participate in a demand response event, and the contribution of each occupant to the demand response is too unclear to provide incentives. However, the recent development of IT and building systems enables the personalized control of the office environment, where each occupant can control the lighting level or temperature around him or herself. Therefore, it becomes possible to have a system in which each occupant can make a decision about demand response participation in an office building. This study investigates personal behavior upon demand response requests, under the condition that each occupant can adjust the brightness individually in their workspace. Once workers participate in the demand response, their task lights are automatically turned off. The participation rates in the demand response events are compared between four groups, which are distinguished by different motivations, the presence or absence of incentives, and the way of participation. The result shows that there are significant differences in participation rates in demand response events between the four groups. The way of participation has a large effect on the participation rate. 
The ‘opt-out’ group, in which the occupants are automatically enrolled in a demand response event unless they express non-participation, had the highest participation rate of the four groups. The incentive also has an effect on the participation rate. This study also reports the impact of a low-illumination office environment on the occupants, such as stress or fatigue. An electrocardiogram and a questionnaire were used to investigate the autonomic nervous activity and the subjective symptoms of fatigue of the occupants. No large difference in autonomic nervous activity or fatigue was found between the dim workspace during a demand response event and the bright workspace.
Keywords: demand response, illumination, questionnaire, electrocardiogram
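Whether a difference in participation rates between two of the groups is statistically significant can be checked with a standard two-proportion z-test. The counts below are hypothetical illustrations, not the study's data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two participation rates,
    using the pooled proportion for the standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: opt-out group (34 of 40 participated)
# versus an opt-in group (22 of 40 participated)
z = two_proportion_z(34, 40, 22, 40)
```

A |z| above 1.96 rejects equal participation rates at the 5% level; here the hypothetical opt-out advantage is clearly significant.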
Procedia PDF Downloads 351
249 Overcoming Challenges of Teaching English as a Foreign Language in Technical Classrooms: A Case Study at TVTC College of Technology
Authors: Sreekanth Reddy Ballarapu
Abstract:
The perception of the whole process of teaching and learning is undergoing a drastic and radical change. More and more student-centered, pragmatic, and flexible approaches are gradually replacing teacher-centered lecturing and structural-syllabus instruction. The issue of teaching English as a foreign language is no exception in this regard. The traditional Present-Practice-Produce (P-P-P) method of teaching English is being overtaken by Task-Based Teaching, which is a subsidiary branch of Communicative Language Teaching. At this juncture, this article argues that Task-Based Learning has an advantage over other, traditional methods of teaching. All teachers of English should try to turn their texts into productive tasks, apply them, and evaluate the students as well as themselves. Task-Based Learning is a double-edged tool which can enhance the performance of both the teacher and the taught. The sample for this case study is a class of 35 students from Semester III - Network branch at TVTC College of Technology, Adhum - Kingdom of Saudi Arabia. The students are high school graduates aged between 19 and 21 years. For the present study, the prescribed textbook Technical English 1 by David Bonamy was used, a number of language tasks were chalked out during the pre-task stage, and the learners were made to participate voluntarily and actively. The Action Research methodology was adopted within the dual framework of Communicative Language Teaching and Task-Based Learning. Different tools, such as questionnaires, feedback and interviews, were used to collect data. This study provides information about various techniques of Communicative Language Teaching and Task-Based Learning and focuses primarily on the advantages of using a Task-Based Learning approach. 
This article presents in detail the objectives of the study, the planning and implementation of the action research, the challenges encountered during the execution of the plan, and the pedagogical outcomes of the project. The research findings serve two purposes: first, they evaluate the effectiveness of Task-Based Learning and, second, they strengthen the teacher's professionalism in designing and implementing the tasks. Finally, the scope for further research is briefly presented.
Keywords: action research, communicative language teaching, task based learning, perception
Procedia PDF Downloads 238
248 Epigenetic Modification Observed in Yeast Chromatin Remodeler Ino80p
Authors: Chang-Hui Shen, Michelle Esposito, Andrew J. Shen, Michael Adejokun, Diana Laterman
Abstract:
The packaging of DNA into nucleosomes is critical to genomic compaction, yet it can leave gene promoters inaccessible to activator proteins or transcription machinery and thus prevent transcriptional initiation. Chromatin remodelers and histone acetylases (HATs) are the two main transcription co-activators that can reconfigure chromatin structure for transcriptional activation. Ino80p is the core component of the INO80 remodeling complex. Recently, it was shown that Ino80p dissociates from the yeast INO1 promoter after induction. However, when certain HATs were deleted or mutated, Ino80p accumulated at the promoters during gene activation. This suggests a link between the presence of HATs and the dissociation of Ino80p. However, it has yet to be demonstrated that Ino80p can be acetylated. To determine whether Ino80p can be acetylated, wild-type Saccharomyces cerevisiae cells carrying Ino80p engineered with a double FLAG tag (MATa INO80-FLAG his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) were grown to mid-log phase, as were non-tagged wild-type (WT) (MATa his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) and ino80∆ (MATa ino80∆::TRP1 his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) cells as controls. Cells were harvested, and the cell lysates were subjected to immunoprecipitation (IP) with α-FLAG resin to isolate Ino80p. The eluted IP samples were subjected to SDS-PAGE and Western blot analysis. Subsequently, the blots were probed with the α-FLAG and α-acetyl lysine antibodies, respectively. For the blot probed with α-FLAG, one prominent band was observed in the INO80-FLAG cells, but no band was detected in the IP samples from the WT and ino80∆ cells. For the blot probed with the α-acetyl lysine antibody, we detected acetylated Ino80p in the INO80-FLAG strain, while no bands were observed in the control strains. As such, our results show that Ino80p can be acetylated. This acetylation can explain the co-activator recruitment patterns observed in current gene activation models. 
In yeast INO1, it has been shown that Ino80p is recruited to the promoter during repression and then dissociates from the promoter once de-repression begins. Histone acetylases, on the other hand, show the opposite pattern of recruitment, as their presence at the promoter increases as INO1 de-repression commences. This Ino80p recruitment pattern changes significantly in HAT mutant strains. It was observed that, instead of dissociating, Ino80p accumulates at the promoter during de-repression in the absence of functional HATs, such as Gcn5p or Esa1p. As such, Ino80p acetylation may be required for its proper dissociation from the promoters. The remodelers' dissociation mechanism may also have a wide range of implications with respect to transcriptional initiation, elongation, or even repression, as it allows increased spatial access to the promoter for the various transcription factors and regulators that need to bind in that region. Our findings here suggest a previously uncharacterized interaction between Ino80p and other co-activators recruited to promoters. As such, further analysis of Ino80p acetylation will not only provide insight into the role of epigenetic modifications in transcriptional activation but also give insight into the interactions occurring between co-activators at gene promoters during gene regulation.
Keywords: acetylation, chromatin remodeler, epigenetic modification, Ino80p
Procedia PDF Downloads 170
247 Stimulating Team Creativity: A Study on Creative-Oriented Integrated Design Companies in Taiwan
Authors: Yueh Hsiu Giffen Cheng, Teng Jung Wang
Abstract:
According to a study by the British National Advisory Committee on Creative and Cultural Education (NACCCE), from the perspective of commercial human resources, the present and the future need innovative and creative people. It is clear, therefore, that creativity plays an important role in today's enterprises. Besides, many companies treat the development of teamwork as their main goal, so 'creativity' and 'teamwork' are becoming more and more important factors for success, and team creativity has gradually become an important issue. The study therefore conducted in-depth interviews with the leaders of design companies and used a self-designed questionnaire on the factors affecting team creativity to carry out a cross-analysis. The results show that in creative-oriented integrated design companies, design strategies do not begin until data collection, and scripts are usually the best way to inspire creativity. Besides, passing down a legacy of experience is their common form of educational training. Most important of all, their organizational resources and leaders can help the whole team learn and grow effectively, and good interaction between the leader and the members can also bring flexibility and efficiency to the work. In short, the leader's expectations of the members' performance can cause them to encourage each other to progress. Moreover, the analysis of the questionnaire indicates that members who are open-minded and leaders who have a transformational leadership style can both help to establish good team interaction. Furthermore, abundant resources and a training system are also good approaches to establishing a harmonious relationship. Finally, by integrating the outcomes of the interviews and questionnaires, we can infer that the design process in these integrated design companies is guided mainly by their leaders. 
In addition, the analysis of design problems is focused on their creative strategies, and their scripts and sketches can also inspire their creativity. In sum, the character of a team is influenced by four factors: leaders with a transformational leadership style, open-minded members, a flexible working environment, and resources and interactive relationships. Ultimately, the study hopes that the results above can be applied to design-related industries or help companies in general to elevate team creativity.
Keywords: creativity, team creativity, integrated design companies, design process
Procedia PDF Downloads 356
246 Elaboration of Ceramic Metal Accident Tolerant Fuels by Additive Manufacturing
Authors: O. Fiquet, P. Lemarignier
Abstract:
Additive manufacturing may find numerous applications in the nuclear industry, for the same reasons as in other industries: to enlarge design possibilities and performance and to develop fabrication methods as a flexible route for future innovation. Additive manufacturing applications in the design of structural metallic components for reactors are already developed at a high Technology Readiness Level (TRL). In the case of a Pressurized Water Reactor using uranium oxide fuel pellets, which are ceramics, the transposition of already optimized additive manufacturing (AM) processes to UO₂ remains a challenge, and progress remains slow because, to the best of our knowledge, only a few laboratories have the capability of developing processes applicable to UO₂. After the Fukushima accident, numerous research fields emerged with the study of ATF (Accident Tolerant Fuel) concepts, which aim to improve fuel behaviour. One item concerns the increase of the pellet's thermal performance by, for example, the addition of a high-thermal-conductivity material into fissile UO₂. This additive phase may be metallic, and the end product will constitute a CERMET composite. Innovative designs of an internal metallic framework are proposed based on predictive calculations. However, because the well-known reference pellet manufacturing methods impose many limitations, manufacturing such a composite remains an arduous task. Therefore, the AM process appears to be a means of broadening the design possibilities of CERMET manufacturing. While the external form remains a standard cylindrical fuel pellet, the internal metallic design remains to be optimized based on process capabilities. The project also considers the limitation to a maximum of 10% metal by volume, which is a constraint imposed by neutron physics considerations. The AM technique chosen for this development is robocasting because of its simplicity and low-cost equipment. 
It remains a challenge, however, to adapt a ceramic 3D printing process to the fabrication of UO₂ fuel. The investigation starts with a surrogate material, and the optimization of the slurry feedstock is based on alumina. The paper will present the first printing of Al2O3-Mo CERMET and the expected transition from the alumina-based ceramic to the UO₂ CERMET.
Keywords: nuclear, fuel, CERMET, robocasting
Procedia PDF Downloads 68
245 DNA Nano Wires: A Charge Transfer Approach
Authors: S. Behnia, S. Fathizadeh, A. Akhshani
Abstract:
In recent decades, DNA has attracted increasing interest for potential technological applications not directly related to its coding for functional proteins, i.e., the expression of genetic information. One of the most interesting applications of DNA is the construction of nanostructures of high complexity and the design of functional nanostructures in nanoelectronic devices, nanosensors and nanocircuits. In this field, DNA is of fundamental interest for the development of DNA-based molecular technologies, as it possesses ideal structural and molecular recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Also, the robust, one-dimensional, flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor switch, or rectifier depending on its electronic properties. In order to understand the mechanism of charge transport along DNA sequences, numerous studies have been carried out. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach that is intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the off-diagonal matrix elements on the intersite displacements is taken into account. In this approach, the coupling between the charge and the lattice deformation is along the helix. The SSH model is a tight-binding linear nanoscale chain originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and highly sensitive to initial conditions. We have therefore moved toward nonlinear dynamics and phase-space analysis. Nonlinear dynamics and chaos theory, without requiring any approximation, can open new horizons for understanding the conductivity mechanism in DNA. 
For a detailed study, we examined the current flowing through DNA and investigated the characteristic I-V diagram. It is shown that there are (quasi-)ohmic regions in the I-V diagram. On the other hand, regions with negative differential resistance (NDR) are also detectable in the diagram.
Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent
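As a static point of reference for the SSH picture (the full study couples the charge to lattice dynamics, which this sketch omits), the SSH band structure for alternating hoppings t1, t2 follows directly from E(k) = ±sqrt(t1² + t2² + 2·t1·t2·cos k): dimerisation opens a gap of 2|t1 - t2| at the zone boundary, while equal hoppings give a gapless, metallic band. The hopping values below are arbitrary illustrative choices.

```python
import math

def ssh_bands(t1, t2, n_k=2001):
    """Valence/conduction bands of the SSH chain: E(k) = +/- |t1 + t2 e^{ik}|."""
    lower, upper = [], []
    for i in range(n_k):
        k = -math.pi + 2.0 * math.pi * i / (n_k - 1)   # sample the Brillouin zone
        e2 = t1 * t1 + t2 * t2 + 2.0 * t1 * t2 * math.cos(k)
        e = math.sqrt(max(e2, 0.0))                    # guard against rounding
        upper.append(e)
        lower.append(-e)
    return lower, upper

def band_gap(t1, t2):
    """Gap between the band edges, attained at the zone boundary k = pi."""
    lower, upper = ssh_bands(t1, t2)
    return min(upper) - max(lower)

gap = band_gap(1.0, 0.7)    # dimerised chain: gap = 2 * |1.0 - 0.7| = 0.6
flat = band_gap(1.0, 1.0)   # uniform hoppings: gapless (metallic) band
```

The gapless uniform chain is the regime in which (quasi-)ohmic behaviour is expected, while charge-lattice coupling and nonlinearity, beyond this static sketch, are what produce features such as NDR.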
Procedia PDF Downloads 425
244 Evaluation of the Suitability of a Microcapsule-Based System for the Manufacturing of Self-Healing Low-Density Polyethylene
Authors: Małgorzata Golonka, Jadwiga Laska
Abstract:
Among self-healing materials, the least explored group is thermoplastic polymers. These polymers are used not only to produce packaging with a relatively short service life but also to obtain coatings, insulation, casings, or parts of machines and devices. Due to its exceptional resistance to weather conditions, hydrophobicity, sufficient mechanical strength, and ease of extrusion, polyethylene is used in the production of polymer pipelines and as an insulating layer for steel pipelines. Polyethylene or PE-coated steel pipelines can be used in difficult conditions such as underground or underwater installations. Both installation and use under such conditions are associated with high stresses and, consequently, the formation of microdamage in the structure of the material, loss of its integrity, and ultimately loss of applicability. The ideal solution would be to include a self-healing system in the polymer material. In the presented study, the behavior of resin-coated microcapsules in the extrusion process of low-density polyethylene was examined. Microcapsules are a convenient element of a repair system because they can be filled with appropriate reactive substances to drive the repair process, but the main problem is their durability under processing conditions. Rapeseed oil, which has a relatively high boiling point of 240 °C and low volatility, was used as the core material simulating the reactive agents. The capsule shell, the key element responsible for mechanical strength, was obtained by in situ polymerization of urea-formaldehyde, melamine-urea-formaldehyde, or melamine-formaldehyde resin on the surface of oil droplets dispersed in water. The strength of the capsules was compared based on the shell material, and in addition, microcapsules with single- and multilayer shells were obtained using different combinations of the chemical composition of the resins.
For example, the first layer, of appropriate tightness and stiffness, was made of melamine-urea-formaldehyde resin, and the second layer was a reinforcing melamine-formaldehyde layer. The size, shape, and diameter distribution of the capsules and their shell thickness were determined using digital optical microscopy and electron microscopy. The efficiency of encapsulation (i.e., the presence of rapeseed oil as the core) and the tightness of the shell were determined by FTIR spectroscopy. The mechanical strength and distribution of the microcapsules in polyethylene were tested by extruding samples of crushed low-density polyethylene mixed with microcapsules at ratios of 1 and 2.5% by weight. The extrusion process was carried out in a mini extruder at a temperature of 150 °C. The capsules obtained had diameters in the range of 70-200 µm. FTIR analysis confirmed the presence of rapeseed oil in both single- and multilayer shell microcapsules. Microscopic observations of cross sections of the extrudates confirmed the presence of both intact and cracked microcapsules. However, the melamine-formaldehyde resin shells showed higher processing strength than the melamine-urea-formaldehyde and urea-formaldehyde coatings. Capsules with a urea-formaldehyde shell work very well in resin coating systems and cement composites, i.e., under pressureless processing and moulding conditions. The addition of another melamine-formaldehyde layer on top of both the melamine-urea-formaldehyde and melamine-formaldehyde resin layers significantly increased the number of microcapsules left undamaged by the extrusion process. The properties of the multilayer coatings were also determined and compared using computer modelling.
Keywords: self-healing polymers, polyethylene, microcapsules, extrusion
Procedia PDF Downloads 28
243 Reliability Modeling of Repairable Subsystems in Semiconductor Fabrication: A Virtual Age and General Repair Framework
Authors: Keshav Dubey, Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta
Abstract:
In the semiconductor capital equipment industry, effective modeling of repairable-system reliability is crucial for optimizing maintenance strategies and ensuring operational efficiency. However, repairable-system reliability modeling using a renewal process is not as popular in the semiconductor equipment industry as it is in the locomotive and automotive industries; utilizing this approach will help optimize maintenance practices. This paper presents a structured framework that leverages both parametric and non-parametric approaches to model the reliability of repairable subsystems based on operational data, maintenance schedules, and system-specific conditions. Data are organized at the equipment-ID level, facilitating trend testing to uncover failure patterns and system degradation over time. For non-parametric modeling, the Mean Cumulative Function (MCF) approach is applied, offering a flexible method to estimate the cumulative number of failures over time without assuming an underlying statistical distribution. This allows for empirical insights into subsystem failure behavior based on historical data. On the parametric side, virtual age modeling, along with Homogeneous and Non-Homogeneous Poisson Process (HPP and NHPP) models, is employed to quantify the effect of repairs and the aging process on subsystem reliability. These models allow for a more structured analysis by characterizing repair effectiveness and system wear-out trends over time. A comparison of various Generalized Renewal Process (GRP) approaches highlights their utility in modeling different repair-effectiveness scenarios. These approaches provide a robust framework for assessing the impact of maintenance actions on system performance and reliability.
By integrating both parametric and non-parametric methods, this framework offers a comprehensive toolset for reliability engineers to better understand equipment behavior, assess the effectiveness of maintenance activities, and make data-driven decisions that enhance system availability and operational performance in semiconductor fabrication facilities.
Keywords: reliability, maintainability, homogeneous Poisson process, repairable system
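As an illustration of the non-parametric MCF estimate mentioned above, the following sketch computes a Nelson-Aalen-style mean cumulative function from recurrent failure histories. The equipment histories are invented toy data, not the authors' fab data; each system has a list of failure ages and a censoring (end-of-observation) age.

```python
# Non-parametric Mean Cumulative Function (MCF) sketch for repairable
# systems. At each failure age, the MCF increases by 1 / (number of
# systems still under observation at that age).
histories = {
    "tool_A": {"failures": [100, 250, 400], "censor": 500},
    "tool_B": {"failures": [150, 420],      "censor": 450},
    "tool_C": {"failures": [300],           "censor": 600},
}

# Collect all failure ages across systems and sort them.
events = sorted((t, sid) for sid, h in histories.items() for t in h["failures"])

mcf = []          # list of (age, MCF estimate)
cum = 0.0
for t, _ in events:
    # Systems whose observation window still covers age t.
    at_risk = sum(1 for h in histories.values() if h["censor"] >= t)
    cum += 1.0 / at_risk
    mcf.append((t, cum))

for t, m in mcf:
    print(t, round(m, 3))
```

Plotting the resulting step function against age reveals the improving, stable, or worsening trends that the abstract's trend testing is meant to uncover.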
Procedia PDF Downloads 192
242 Self-Serving Anchoring of Self-Judgments
Authors: Elitza Z. Ambrus, Bjoern Hartig, Ryan McKay
Abstract:
Individuals’ self-judgments might be malleable and influenced by comparison with a random value. On the one hand, self-judgments reflect our self-image, which is typically considered stable in adulthood; indeed, people strive hard to maintain a fixed, positive moral image of themselves. On the other hand, research has shown the robustness of the so-called anchoring effect on judgments and decisions. The anchoring effect refers to the influence of a previously considered comparative value (the anchor) on a subsequent absolute judgment, and it reveals that individuals’ estimates of various quantities are flexible and can be influenced by a salient random value. The present study extends the anchoring paradigm to the domain of the self. We also investigate whether participants are more susceptible to self-serving anchors, i.e., anchors that enhance the participant’s self-image, especially their moral self-image. In a pre-registered study run via the online platform Prolific, 249 participants (156 female, 89 male, 3 other, and 1 who preferred not to specify their gender; M = 35.88, SD = 13.91) ranked themselves on eight personality characteristics. In the anchoring conditions, respondents were first asked to indicate whether they thought they would rank higher or lower than a given anchor value before providing their estimated rank relative to 100 other anonymous participants. A high and a low anchor value were employed to differentiate between anchors in a desirable (self-serving) direction and anchors in an undesirable (self-diminishing) direction. In the control treatment, there was no comparison question. Subsequently, participants provided their self-rankings on the eight personality traits, with two personal characteristics for each combination of the factors desirable/undesirable and moral/non-moral. We found evidence of an anchoring effect for self-judgments.
Moreover, anchoring was more effective when people were anchored in a self-serving direction: the anchoring effect was enhanced when it supported a more favorable self-view and mitigated (even reversed) when it implied a deterioration of the self-image. The self-serving anchoring was more pronounced for moral than for non-moral traits. The data also provided evidence for a better-than-average effect in general, as well as a magnified better-than-average effect for moral traits. Taken together, these results suggest that self-judgments might not be as stable in adulthood as previously thought. In addition, considerations of constructing and maintaining a positive self-image might interact with the anchoring effect on self-judgments. Potential implications of our results concern the construction and malleability of self-judgments as well as the psychological mechanisms shaping anchoring.
Keywords: anchoring, better-than-average effect, self-judgments, self-serving anchoring
Procedia PDF Downloads 180
241 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), homogenizing beams of high-power lasers, the critical component of a Shack-Hartmann sensor, and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets; therefore, high demands are placed on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses; however, these techniques require more experimental effort and are also time consuming. Digital holography (DH) overcomes the problems described above. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram using numerical methods. One can also reconstruct the complex object wavefront at different depths owing to the numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while the lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram, and a second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. The sag of the microlenses agrees, within experimental limits, with the specification provided by the manufacturer.
Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
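The final step described above, converting a reconstructed phase difference into a surface sag, can be sketched in a few lines. The wavelength, refractive index, and synthetic lens profile below are assumptions for illustration, not the paper's experimental values.

```python
import numpy as np

# Illustrative conversion of a reconstructed phase-difference map into
# a microlens surface sag, using the optical-path-length relation
#     delta_phi = (2*pi/lambda) * (n_lens - n_air) * sag
wavelength = 632.8e-9      # He-Ne wavelength in m (assumed)
n_lens, n_air = 1.46, 1.0  # lens material and air indices (assumed)

# Synthetic phase map of one parabolic microlens (stand-in for the
# phase difference recovered from the two reconstructed wavefronts).
x = np.linspace(-50e-6, 50e-6, 101)
X, Y = np.meshgrid(x, x)
R_lens = 50e-6                              # lens aperture radius (assumed)
sag_true = 2e-6 * np.clip(1 - (X**2 + Y**2) / R_lens**2, 0, None)
phase = 2 * np.pi / wavelength * (n_lens - n_air) * sag_true

# Recover the sag profile from the phase difference.
sag = phase * wavelength / (2 * np.pi * (n_lens - n_air))
print(sag.max())   # peak sag in metres
```

With a real DHIM measurement, `phase` would come from the unwrapped difference between the object and reference reconstructions rather than from a synthetic profile.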
Procedia PDF Downloads 498
240 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile is validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software package developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method that uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with a generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, to simulate compressible flows, the method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis consists of varying the Mach number, starting from M = 0.5, and comparing the axial force coefficient, the normal-force slope coefficient, and the pitch-moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients are obtained using finite-difference techniques in the linear range of the polar curve. The aim of this analysis is to find the limiting Mach number value starting from which the effects of high fluid compressibility (related to the transonic flow regime) cause the XFlow simulations to differ from the experimental results.
This allows identifying the critical Mach number that limits the validity of the isothermal formulation of XFlow, beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-Boltzmann method, LBM, lift, Mach, pitch
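The finite-difference extraction of a slope coefficient in the linear range of the polar, as described above, can be sketched as follows. The angle-of-attack points and normal-force values are invented illustrative data, not XFlow or ANF experimental results.

```python
import numpy as np

# Sketch: normal-force slope coefficient C_N_alpha by central finite
# differences over a linear stretch of the polar (illustrative data).
alpha_deg = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # angle of attack, deg
c_n = np.array([-0.28, -0.14, 0.0, 0.14, 0.28])     # normal-force coeff.

alpha_rad = np.deg2rad(alpha_deg)
# numpy.gradient uses second-order central differences at interior
# points and one-sided differences at the ends.
c_n_alpha = np.gradient(c_n, alpha_rad)
print(c_n_alpha)   # slope per radian at each sample point
```

Because the synthetic polar is exactly linear, every sample returns the same slope; on real data one would fit or difference only within the verified linear range.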
Procedia PDF Downloads 421
239 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters
Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu
Abstract:
Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been a good indicator of potential enteric pathogens in waters; thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct, and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a novel, newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed (the aptamer is the biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens) and further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance-response monitoring for E. coli detection. The detection procedure is based on a competitive detection mode: a known concentration of aptamer and E. coli cells are mixed and, after a certain time, filtered, and the remaining free aptamers are injected into the system. Through hybridization of the free aptamers with the probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, and we can protect the electrode surface from the aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98.
The system was regenerated successfully with a 5% SDS solution over 100 times without any significant deterioration of the sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and 3 to 18% CV values (n = 3).
Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs
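A log-linear calibration of the kind implied by the reported detection range can be sketched as follows. Only the 2 to 10⁵ cfu/ml range comes from the abstract; the resistance-response values are invented for illustration.

```python
import numpy as np

# Sketch of a log-linear biosensor calibration: response vs. log10 of
# concentration, with the fit's coefficient of determination R^2.
# Response values are invented; only the concentration range is from
# the abstract.
conc = np.array([2, 10, 100, 1e3, 1e4, 1e5])          # cfu/ml
response = np.array([1.1, 2.0, 3.4, 4.8, 6.1, 7.5])   # arbitrary units

x = np.log10(conc)
slope, intercept = np.polyfit(x, response, 1)

# Coefficient of determination for the linear fit.
pred = slope * x + intercept
ss_res = np.sum((response - pred) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(slope, 3), round(r2, 3))
```

Inverting the fitted line (`conc = 10**((response - intercept) / slope)`) would then give the estimated cell count for a new sample, which is how recovery percentages like those reported are typically computed.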
Procedia PDF Downloads 197