Search results for: actual cost and duration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8935

505 Optimising Apparel Digital Production in Industrial Clusters

Authors: Minji Seo

Abstract:

Fashion stakeholders are becoming increasingly aware of technological innovation in manufacturing. In 2020, the COVID-19 pandemic caused transformations in working patterns, such as working remotely rather than commuting. To enable smooth remote working, 3D fashion design software is being adopted as the latest trend in design and production. The majority of fashion designers, however, are still resistant to this change. Previous studies on 3D fashion design software solely highlighted the beneficial and detrimental factors of adopting design innovations. They lacked research on the relationship between resistance factors and the adoption of innovation. These studies also fell short of exploring the perspectives of users of these innovations. This paper aims to investigate the key drivers of and barriers to employing 3D fashion design software, as well as to explore the challenges faced by designers. It also touches on governmental support for digital manufacturing in Seoul, South Korea, and London, the United Kingdom. By conceptualising local support, this study aims to provide a new path for industrial clusters to optimise digital apparel manufacturing. The study uses a mixture of quantitative and qualitative approaches. Initially, it reflects a survey of 350 fashion designers on innovation resistance factors of 3D fashion design software and the effectiveness of local support. In-depth interviews with 30 participants provide a better understanding of designers' views on the benefits and obstacles of employing 3D fashion design software. The key findings of this research are the main barriers to employing 3D fashion design software in fashion production. Cultural characteristics and the interview results are used to interpret the survey results. The quantitative data identify the main resistance factors to adopting design innovations.
The dominant obstacles are: the cost of the software and its complexity, lack of customer interest in innovation, lack of qualified personnel, and lack of knowledge. The main difference between Seoul and London lies in attitudes towards government support. Compared to the UK's fashion designers, South Korean designers emphasise that government support is highly relevant to employing 3D fashion design software. The top-down versus bottom-up approach to policy implementation distinguishes the perception of government support: in contrast to the top-down policy approach in South Korea, British fashion designers, accustomed to bottom-up approaches, are reluctant to receive government support. The findings of this research will contribute to generating solutions for local government and to optimising the use of 3D fashion design software in fashion industrial clusters.

Keywords: digital apparel production, industrial clusters, innovation resistance, 3D fashion design software, manufacturing, innovation, technology, digital manufacturing, innovative fashion design process

Procedia PDF Downloads 100
504 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
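The three design elements the abstract names (prediction model, constraints, cost function) can be sketched for the simplest possible case: a one-axis double-integrator translation model under a thrust limit. This is an illustrative toy, not the formulation used in the paper; the horizon, weights, and thrust limit are assumed values, and the input constraint is handled by clipping the unconstrained QP solution rather than by a proper constrained solver.

```python
import numpy as np

# Receding-horizon sketch for 1-D spacecraft translation (double
# integrator). A real MPC solves a constrained QP each step; here the
# unconstrained QP is solved in closed form and the first input is
# clipped to the thrust limit, which is a deliberate simplification.

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])

N = 10                                     # prediction horizon (assumed)
Q = np.diag([10.0, 1.0])                   # state weight (assumed)
R = 0.1                                    # input weight (assumed)
u_max = 0.2                                # thrust limit (assumed)

def mpc_step(x0):
    # Stacked prediction: X = F x0 + G U over the horizon
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            G[2 * i:2 * i + 2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()
    Qbar = np.kron(np.eye(N), Q)
    H = G.T @ Qbar @ G + R * np.eye(N)     # QP Hessian
    f = G.T @ Qbar @ F @ x0                # QP linear term
    U = np.linalg.solve(H, -f)             # unconstrained optimum
    return float(np.clip(U[0], -u_max, u_max))  # apply first input only

# Drive the state from 5 m offset toward the origin
x = np.array([5.0, 0.0])
for _ in range(60):
    u = mpc_step(x)
    x = A @ x + (B * u).ravel()
print("final state:", np.round(x, 3))
```

Only the first input of each horizon-long plan is applied before re-solving, which is the receding-horizon principle the paper's formulations build on.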

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 110
503 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates

Authors: Jiro Takagi

Abstract:

Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this type of building were reported in the more recent off-shore-type Tohoku earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not been done quickly, for various reasons including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents could feel a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Also, this exterior reinforcement can be installed with outdoor work only, thereby reducing inconvenience for residents since they would not be required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish.
In this method, the entire exterior surface would function as shear walls and thus the pull-out force induced by seismic lateral load would be significantly reduced as compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, reinforcing details of anchors to the foundations would be less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified both for the frames and connections. It is confirmed that the performance is high enough for bracing general wooden houses.

Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses

Procedia PDF Downloads 225
502 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device for Use within the Water Distribution Network

Authors: T. Lydon, A. McNabola, P. Coughlan

Abstract:

Energy consumption in the water distribution network (WDN) is a well-established problem, with the industry contributing heavily to carbon emissions: 0.9 kg of CO2 is emitted per m3 of water supplied. It is estimated that 85% of the energy wasted in the WDN can be recovered by installing turbines. The existing potential in networks lies at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, so alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting the potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, which are characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Preliminary testing measured the efficiency and power potential of the PAT under varying flow and pressure conditions, in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities. The results are presented here:
• The limitations of existing selection methods, which convert the Best Efficiency Point (BEP) from pump operation to BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of BEP is significant, and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency behaviour of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under conditions of pressure regulation, which have been conceptualised in the current literature, need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
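For reference, turbine-mode efficiency in tests like these is simply the shaft power recovered divided by the hydraulic power in the water stream across the device. The sketch below uses illustrative flow, head, and power figures, not measurements from this study:

```python
# Hydraulic power available to a PAT and its turbine-mode efficiency.
# The example figures are illustrative, not measured values.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydraulic_power(q_m3s, head_m):
    """Power carried by the water stream across the PAT, in watts."""
    return RHO * G * q_m3s * head_m

def efficiency(p_shaft_w, q_m3s, head_m):
    """Turbine-mode efficiency: shaft power out / hydraulic power in."""
    return p_shaft_w / hydraulic_power(q_m3s, head_m)

# Example: 10 l/s through 60 m of head, 4.2 kW measured at the shaft
p_in = hydraulic_power(0.010, 60.0)
eta = efficiency(4200.0, 0.010, 60.0)
print(f"hydraulic power {p_in:.0f} W, efficiency {eta:.2f}")
# -> hydraulic power 5886 W, efficiency 0.71
```

Repeating this calculation over a sweep of flows is what produces the characteristic and efficiency curves described above.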

Keywords: energy recovery, pump-as-turbine, water distribution network

Procedia PDF Downloads 260
501 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality, and nitrogen pollution control has become a global concern. The problem of water pollution in China remains serious. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex water quality, high toxicity, and high concentrations. Many studies have shown that autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen without a carbon source, through functional genes, to achieve total nitrogen removal, which is very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. A process composed of short-range nitrification and denitrification coupled with anammox ensures the removal of total nitrogen and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. A continuous-flow process for treating late leachate [an up-flow anaerobic sludge blanket reactor (UASB), anoxic/oxic (A/O)–anaerobic ammonia oxidation reactor (ANAOR, or anammox reactor)] has been developed to achieve autotrophic deep nitrogen removal. In this process, optimal parameters such as hydraulic retention time and nitrification flow rate have been obtained and applied to the rapid start-up and stable operation of the system with high removal efficiency. Besides, characterising the microbial community during the start-up of the anammox system and analysing its microbial ecological mechanism provide a basis for enriching the anammox community under high environmental stress.
One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the amount of water treated is closer to that of landfill leachate. However, new high-throughput sequencing technology still needs to be utilized to analyze the changes in the microbial diversity of this system, the related functional genera, and the functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization.
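The nitrogen economy of the anammox step can be illustrated with the commonly cited Strous et al. stoichiometry (roughly NH4+ + 1.32 NO2- → 1.02 N2 + 0.26 NO3- plus biomass); the influent concentration below is an assumed example, not a value from the paper:

```python
# Back-of-the-envelope nitrogen balance for anammox, using the widely
# cited Strous et al. stoichiometry. The 500 mg/L influent figure is
# illustrative only.

NITRITE_PER_AMMONIUM = 1.32   # mol NO2--N consumed per mol NH4+-N
NITRATE_YIELD = 0.26          # mol NO3--N produced per mol NH4+-N

def anammox_balance(nh4_n_mg_l):
    """Nitrite demand, nitrate by-product, and total-nitrogen removal
    fraction for a given ammonium-nitrogen concentration (as N, mg/L)."""
    no2_needed = NITRITE_PER_AMMONIUM * nh4_n_mg_l
    no3_out = NITRATE_YIELD * nh4_n_mg_l
    tn_in = nh4_n_mg_l + no2_needed
    removal = (tn_in - no3_out) / tn_in
    return no2_needed, no3_out, removal

no2, no3, eta = anammox_balance(500.0)   # e.g. 500 mg/L NH4+-N leachate
print(f"nitrite demand {no2:.0f} mg/L, nitrate out {no3:.0f} mg/L, "
      f"TN removal {eta:.1%}")
# -> nitrite demand 660 mg/L, nitrate out 130 mg/L, TN removal 88.8%
```

The ~1.32:1 nitrite-to-ammonium demand is why the partial (short-cut) nitrification stage is needed upstream: it converts just over half of the ammonium to nitrite, and the ~11% nitrate by-product caps the removal achievable by anammox alone.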

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification

Procedia PDF Downloads 49
500 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), the NO2/NOx ratio, etc.
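As a sketch of the semi-empirical idea, the example below fits a bagged linear ensemble on log-NOx from synthetic burned-zone temperature and O2 data; the Arrhenius-style generating function, coefficients, and data are all invented for illustration and are not the paper's model or results:

```python
import numpy as np

# Semi-empirical NOx sketch: physical inputs (burned-zone temperature,
# O2 concentration) feed a bagged linear model on log-NOx, echoing the
# Zeldovich-type exponential temperature dependence. All data here are
# synthetic; the paper's model uses GT-Power DI-Pulse outputs and
# measured steady-state engine data.

rng = np.random.default_rng(0)
n = 200
T_burn = rng.uniform(2200.0, 2800.0, n)          # burned-zone temp, K
o2 = rng.uniform(0.05, 0.20, n)                  # O2 mole fraction
# Synthetic "measured" NOx: Arrhenius-like form with 5% log-noise
nox = 1e6 * np.exp(-38000.0 / T_burn) * o2**0.5
nox *= np.exp(rng.normal(0.0, 0.05, n))

# Features chosen to linearise the assumed physics
X = np.column_stack([np.ones(n), 1.0 / T_burn, np.log(o2)])
y = np.log(nox)

# Bagged least-squares ensemble (a stand-in for the ensemble machine
# learning methods mentioned in the abstract)
coefs = []
for _ in range(25):
    idx = rng.integers(0, n, n)                  # bootstrap resample
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    coefs.append(beta)
beta_bar = np.mean(coefs, axis=0)

pred = X @ beta_bar
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"R^2 on log-NOx: {r2:.3f}")
```

Because the regressors (1/T, log O2) encode the assumed physics, the fitted model extrapolates more gracefully than a purely empirical map of operating conditions, which is the core argument of the semi-empirical approach.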

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 113
499 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need of individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant flexible production and constantly changing market and environmental conditions. To lift performance limits, which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework, which instructs the transformation of current value chains into digital pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regards to the inspection area and the level of detail. 
The second step, ‘analysis of the value chain’, verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 311
498 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a central role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combating cancer comprehensively. Continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy.
Radiotherapy, for its part, finds applications in non-cancerous conditions such as benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 68
497 Assessment of Serum Osteopontin, Osteoprotegerin and Bone-Specific ALP as Markers of Bone Turnover in Patients with Disorders of Thyroid Function in Nigeria, Sub-Saharan Africa

Authors: Oluwabori Emmanuel Olukoyejo, Ogra Victor Ogra, Bosede Amodu, Tewogbade Adeoye Adedeji

Abstract:

Background: Disorders of thyroid function are the second most common endocrine disorders worldwide, with a direct relationship with metabolic bone diseases. These metabolic bone complications are often subtle but manifest as bone pains and an increased risk of fractures. The gold standard for diagnosis, Dual-Energy X-ray Absorptiometry (DEXA), is limited in this environment due to unavailability, cumbersomeness, and cost. However, bone biomarkers have shown promise in assessing alterations in bone remodeling, which has not been studied in this environment. Aim: This study evaluates serum levels of bone-specific alkaline phosphatase (bone-specific ALP), osteopontin, and osteoprotegerin as biomarkers of bone turnover in patients with disorders of thyroid function. Methods: This is a cross-sectional study carried out over a period of one and a half years. Forty patients with thyroid dysfunction, aged 20 to 50 years, and thirty-eight age- and sex-matched healthy euthyroid controls were included in this study. Patients were further stratified into hyperthyroid and hypothyroid groups. Bone-specific ALP, osteopontin, and osteoprotegerin, alongside serum total calcium, ionized calcium, and inorganic phosphate, were assayed for all patients and controls. A self-administered questionnaire was used to obtain data on sociodemographics and medical history. Then, 5 ml of blood was collected in a plain bottle, and serum was harvested following clotting and centrifugation. Serum samples were assayed for B-ALP, osteopontin, and osteoprotegerin using the ELISA technique. Total calcium and ionized calcium were assayed using an ion-selective electrode, while inorganic phosphate was assayed with automated photometry. Results: The hyperthyroid and hypothyroid patient groups had significantly higher median serum B-ALP (30.40 and 26.50 ng/ml) and significantly lower median OPG (0.80 and 0.80 ng/ml) than the controls (10.81 and 1.30 ng/ml, respectively), p < 0.05.
However, serum osteopontin was significantly higher in the hyperthyroid group and significantly lower in the hypothyroid group when compared with the controls (11.00 and 2.10 vs 3.70 ng/ml, respectively), p < 0.05. Both the hyperthyroid and hypothyroid groups had significantly higher mean serum total calcium, ionized calcium, and inorganic phosphate than the controls (2.49 ± 0.28, 1.27 ± 0.14 and 1.33 ± 0.33 mmol/l, and 2.41 ± 0.04, 1.20 ± 0.04 and 1.15 ± 0.16 mmol/l, vs 2.27 ± 0.11, 1.17 ± 0.06 and 1.08 ± 0.16 mmol/l, respectively), p < 0.05. Conclusion: Patients with disorders of thyroid function have metabolic imbalances in all the studied bone markers, suggesting higher bone turnover. The routine bone markers will be an invaluable tool for monitoring bone health in patients with thyroid dysfunction, while the less readily available markers can be introduced as supplementary tools. Moreover, bone-specific ALP, osteopontin, and osteoprotegerin were found to be the strongest independent predictors of metabolic bone marker derangements in patients with thyroid dysfunction.

Keywords: metabolic bone diseases, biomarker, bone turnover, hyperthyroid, hypothyroid, euthyroid

Procedia PDF Downloads 34
496 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation

Authors: Honey Bryant, Elvina Chu

Abstract:

Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual health-care cost of $50 billion. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, although it has shown success in clinical research on stroke rehabilitation. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. My novel hypotheses are: a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; c) the beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained stroke rehabilitation treatment as normal. The study was split into two groups, the first being Passive Music Listening (PML) and the second Neurologic Music Therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes on 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety and Depression Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure measurements and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels.
As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses. A decrease in sleepiness and anxiety is seen in the first cohort of PML. Feedback interviews have indicated most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, we will seek larger external funding, which will allow for greater investigation of the use of NMT in stroke rehabilitation. As NMT is not covered under the Ontario Health Insurance Plan (OHIP), there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation as well.

Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience

Procedia PDF Downloads 112
495 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending

Authors: Enrico Dorigatti

Abstract:

The growing pile of hazardous e-waste produced especially by developed and wealthy countries gets relentlessly bigger, composed of EEDs (Electric and Electronic Devices) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over recent years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this regard. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances).
Currently, relevant literature and debate lack interest in, and information about, the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, to e-waste-related issues. The methodology articulates into three main steps. Firstly, field research will be undertaken to understand where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments. Data, such as audience feedback, will be collected at this stage. The last step will consist of running workshops to spread an ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.

Keywords: circuit bending, ecology, sound art, sustainability

Procedia PDF Downloads 170
494 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the increasing emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap, and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of the centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost, thanks to a simpler construction compared to widely used electrospinning equipment, higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR, and biodegradable polyesters, such as PHB, PLA, PCL, or PBS. The products take the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures. The nanofiber membranes have been tested for different applications, and their filtration efficiencies for water solutions and for aerosols in air were evaluated.
Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2, or photocatalytic ZnO2 and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing filtration membranes, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn healing dressings or tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 163
493 Effect of Chemical Fertilizer on Plant Growth-Promoting Rhizobacteria in Wheat

Authors: Tessa E. Reid, Vanessa N. Kavamura, Maider Abadie, Adriana Torres-Ballesteros, Mark Pawlett, Ian M. Clark, Jim Harris, Tim Mauchline

Abstract:

The deleterious effect of chemical fertilizer on rhizobacterial diversity has been well documented using 16S rRNA gene amplicon sequencing and predictive metagenomics. Biofertilization is a cost-effective and sustainable alternative; improving biofertilization strategies depends on isolating beneficial soil microorganisms. Although culturing is widespread in biofertilization, it is unknown whether the composition of cultured isolates closely mirrors native beneficial rhizobacterial populations. This study aimed to determine the relative abundance of culturable plant growth-promoting rhizobacteria (PGPR) isolates within total soil DNA and how potential PGPR populations respond to chemical fertilization in a commercial wheat variety. It was hypothesized that PGPR would be reduced in fertilized relative to unfertilized wheat. Triticum aestivum cv. Cadenza seeds were sown in a nutrient-depleted agricultural soil in pots treated with and without nitrogen-phosphorous-potassium (NPK) fertilizer. Rhizosphere and rhizoplane samples were collected at flowering stage (10 weeks) and analyzed by culture-independent (amplicon sequence variant (ASV) analysis of total rhizobacterial DNA) and -dependent (isolation using growth media) techniques. Rhizosphere- and rhizoplane-derived microbiota culture collections were tested for plant growth-promoting traits using functional bioassays. In general, fertilizer addition decreased the proportion of nutrient-solubilizing bacteria (nitrate, phosphate, potassium, iron, and zinc) isolated from rhizocompartments in wheat, whereas salt-tolerant bacteria were not affected. A PGPR database was created from isolate 16S rRNA gene sequences and searched against total soil DNA, revealing that 1.52% of total community ASVs were identified as culturable PGPR isolates.
Bioassays identified a higher proportion of PGPR in non-fertilized samples (rhizosphere (49%) and rhizoplane (91%)) compared to fertilized samples (rhizosphere (21%) and rhizoplane (19%)) which constituted approximately 1.95% and 1.25% in non-fertilized and fertilized total community DNA, respectively. The analyses of 16S rRNA genes and deduced functional profiles provide an in-depth understanding of the responses of bacterial communities to fertilizer; this study suggests that rhizobacteria, which potentially benefit plants by mobilizing insoluble nutrients in soil, are reduced by chemical fertilizer addition. This knowledge will benefit the development of more targeted biofertilization strategies.
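As a rough illustration of the bookkeeping behind such percentages, the fraction of the total community represented by culturable PGPR can be sketched as below. The function name, ASV identifiers, and read counts are all hypothetical; this is not the study's actual pipeline.

```python
# Hypothetical sketch: estimate what fraction of the total rhizobacterial
# community is represented by culturable PGPR isolates, in the spirit of
# matching isolate 16S sequences against community ASVs.

def pgpr_community_fraction(asv_counts, pgpr_asvs):
    """Relative abundance (%) of ASVs matching culturable PGPR isolates.

    asv_counts: dict mapping ASV id -> read count in total community DNA
    pgpr_asvs:  set of ASV ids identified as culturable PGPR
    """
    total = sum(asv_counts.values())
    matched = sum(c for a, c in asv_counts.items() if a in pgpr_asvs)
    return 100.0 * matched / total

# Toy community: four ASVs, two of which match cultured PGPR isolates
community = {"ASV1": 500, "ASV2": 300, "ASV3": 150, "ASV4": 50}
pgpr = {"ASV3", "ASV4"}
print(round(pgpr_community_fraction(community, pgpr), 1))  # 20.0
```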

Keywords: bacteria, fertilizer, microbiome, rhizoplane, rhizosphere

Procedia PDF Downloads 305
492 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Authors: Debjit Ray

Abstract:

Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, which makes it possible to define the phylogenetic range of HGT and to find potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and to annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster turnaround than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our software for gap closure, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes.
Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been the environment (i.e., whether it could have evolved from previous isolates). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence.
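The classification step described above can be caricatured with a toy example. The sketch below uses a simple nearest-neighbour rule over binary genome features; the abstract does not name a specific algorithm, and the feature names and data here are invented for illustration only.

```python
# Illustrative sketch (not the authors' pipeline): a toy nearest-neighbour
# classifier over binary genome features (presence/absence of virulence
# genes, HGT markers, etc.) predicting a virulence label.

def hamming(a, b):
    """Number of differing feature positions between two genomes."""
    return sum(x != y for x, y in zip(a, b))

def predict_virulence(query, training):
    """1-nearest-neighbour prediction from (features, label) pairs."""
    best = min(training, key=lambda g: hamming(query, g[0]))
    return best[1]

# Columns (hypothetical): [tox_gene, plasmid_X, recomb_hotspot, resistance_cassette]
train = [
    ([1, 1, 1, 0], "virulent"),
    ([1, 0, 1, 1], "virulent"),
    ([0, 0, 0, 0], "avirulent"),
    ([0, 1, 0, 0], "avirulent"),
]
print(predict_virulence([1, 1, 0, 1], train))  # virulent
```

A real analysis at the scale of ~1000 genomes would of course use richer features and a proper learner with cross-validation; the point here is only the shape of the mapping from genome features to a phenotype label.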

Keywords: genomics, pathogens, genome assembly, superbugs

Procedia PDF Downloads 196
491 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the growing affiliation of fund families with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, for affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent.
Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as the sharing of private material information, that benefit mutual fund investors due to affiliation with a financial conglomerate. However, this research also has a normative dimension: such incestuous behavior, insider trading and the exploitation of superior information, not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
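For readers unfamiliar with the risk adjustments mentioned above, a minimal one-factor (CAPM-style) alpha can be estimated by ordinary least squares, as sketched below. The paper's actual methodology is richer (alternative risk adjustments and cross-sectional regressions), and the monthly return series here are invented for illustration only.

```python
# Minimal sketch of a risk-adjusted performance measure: the intercept
# (alpha) and slope (beta) of an OLS regression of fund excess returns
# on market excess returns. All return figures are hypothetical.

def ols_alpha_beta(fund_ex, mkt_ex):
    """One-factor alpha and beta from paired excess-return series."""
    n = len(fund_ex)
    mx = sum(mkt_ex) / n
    my = sum(fund_ex) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(mkt_ex, fund_ex))
    var = sum((x - mx) ** 2 for x in mkt_ex)
    beta = cov / var
    alpha = my - beta * mx  # return left unexplained by market exposure
    return alpha, beta

fund = [0.012, -0.004, 0.021, 0.008, -0.010, 0.015]    # monthly, hypothetical
market = [0.010, -0.005, 0.018, 0.006, -0.012, 0.013]  # monthly, hypothetical
alpha, beta = ols_alpha_beta(fund, market)
print(round(alpha, 4), round(beta, 2))
```

A positive alpha after such an adjustment is the usual shorthand for "outperformance" in this literature; the holding-return measure discussed in the abstract additionally strips out costs and fees.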

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 187
490 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

The urge to apply Software Engineering (SE) processes is both of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations, such as lifetime (i.e., period of operation) and security. The importance of WSN applications, found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by the IDS. In other words, not all requirements are implemented and then traced.
Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks of current IDSs are due to researchers and developers not following structured software development processes when developing them, resulting in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn with regard to applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
489 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance

Authors: Rabindranath Jana, Biswajit Maity, Keka Rana

Abstract:

The most promising clean energy source is the fuel cell, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is particularly user-friendly, as it is easy to miniaturize and well suited as an energy source for automobiles as well as for domestic applications and portable devices. And unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. To date, the overall efficiency of a methanol fuel cell is reported to be about 20-25%. The lower efficiency of the cell may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol comprises a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. Currently, the investigation of new anode catalysts to improve oxidation reaction rates is an active area of research as it applies to the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture with different tuneable compositions and are expected to allow only proton permeation, and not methanol, owing to molecular sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethyl siloxane rubber (PDMS), ethylene methyl co-acrylate (EMA), and multi-walled carbon nanotubes (MWNTs). The effect of incorporating different proportions of f-MWNTs in the polymer membrane has been studied. The introduction of f-MWNTs into the polymer matrix modified the polymer structure, and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were found to depend on the f-MWNT loading.
The proton conductivity of the membranes increases with the f-MWNT concentration due to the increased content of conductive material. The methanol permeabilities measured at 60 °C were likewise found to depend on the f-MWNT loading: the permeability decreased from 1.5 x 10-6 cm²/s for the pure film to 0.8 x 10-7 cm²/s for a membrane containing 0.5 wt% f-MWNTs, because with an increasing proportion of f-MWNTs the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable. FT-IR studies show good interaction between EMA and f-MWNTs. XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved when using the blended films, which contain less expensive polymers.
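A common way to summarise the trade-off reported above is the membrane selectivity, the ratio of proton conductivity to methanol permeability. The sketch below uses the permeability values stated in the abstract, but the conductivity values are assumed for illustration only, since the abstract does not report numeric conductivities.

```python
# Selectivity = proton conductivity / methanol permeability.
# Permeabilities are taken from the abstract; conductivities are ASSUMED.

def selectivity(conductivity_S_per_cm, permeability_cm2_per_s):
    return conductivity_S_per_cm / permeability_cm2_per_s  # S*s/cm^3

perm_pure = 1.5e-6   # cm^2/s, pure film (from the abstract)
perm_mwnt = 0.8e-7   # cm^2/s, 0.5 wt% f-MWNTs (from the abstract)
sigma_pure = 1.0e-3  # S/cm, assumed value for the pure film
sigma_mwnt = 2.0e-3  # S/cm, assumed higher value at f-MWNT loading

ratio = selectivity(sigma_mwnt, perm_mwnt) / selectivity(sigma_pure, perm_pure)
print(round(ratio, 1))  # 37.5 : relative selectivity gain under these assumptions
```

Under these assumed conductivities, the blended membrane's selectivity improves by a factor of roughly 37, driven mostly by the order-of-magnitude drop in permeability; the actual gain would depend on the measured conductivities.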

Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability

Procedia PDF Downloads 411
488 A Mixed-Method Study Exploring Expressive Writing as a Brief Intervention Targeting Mental Health and Wellbeing in Higher Education Students: A Focus on the Qualitative Findings

Authors: Deborah Bailey-Rodriguez, Maria Paula Valdivieso Rueda, Gemma Reynolds

Abstract:

In recent years, the mental health of Higher Education (HE) students has been a growing concern. This has been further exacerbated by the stresses associated with the Covid-19 pandemic, placing students at even greater risk of developing mental health issues. Support available to students in HE tends to follow an established and traditional route. The demands for counseling services have grown, not only with the increase in student numbers but with the number of students seeking support for mental health issues, with 94% of HE institutions recently reporting an increase in the need for counseling services. One way of improving the well-being and mental health of HE students is through the use of brief interventions, such as expressive writing (EW). This intervention involves encouraging individuals to write continuously for at least 15-20 minutes for three to five sessions (often on consecutive days) about their deepest thoughts and feelings, to explore significant personal experiences in a meaningful way. Given its brevity, simplicity, and cost-effectiveness, EW has considerable potential as an intervention for HE populations. The current study, therefore, employed a mixed-methods design to explore the effectiveness of EW in reducing anxiety, general stress, academic stress, and depression in HE students while improving well-being. HE students at MDX were randomly assigned to one of three conditions: (1) the UniExp-EW group was required to write about their emotions and thoughts about any stressors they had faced that were directly relevant to their university experience; (2) the NonUniExp-EW group was required to write about their emotions and thoughts about any stressors NOT directly relevant to their university experience; and (3) the Control group was required to write about how they spent their weekend, with no reference to thoughts or emotions and without thinking about university.
Participants were required to carry out the EW intervention for 15 minutes per day for four consecutive days. Baseline mental health and well-being measures were taken before the intervention via a battery of standardized questionnaires. Following completion of the intervention on day four, participants were required to complete the questionnaires a second time, and again one week later. Participants were also invited to attend focus groups to discuss their experience of the intervention. This will allow an in-depth investigation into students' perceptions of EW as an effective intervention, to determine whether they would choose to use this intervention in the future. Preliminary findings, along with their important implications, will be discussed at the conference. The study is fundamental because, if EW is an effective intervention for improving mental health and well-being in HE students, its brevity and simplicity mean it can be easily implemented and made freely available to students. Improving the mental health and well-being of HE students can have knock-on implications for improving academic skills and career development.
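The allocation step described above, random assignment to the three writing conditions, can be sketched as follows. The participant identifiers and the balanced round-robin scheme are illustrative assumptions, not details reported by the study.

```python
# Hypothetical sketch of balanced random assignment to the three
# conditions named in the abstract. Participant ids are invented.
import random

def assign_balanced(participants, conditions, seed=0):
    """Shuffle participants, then deal them round-robin across conditions."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

conditions = ["UniExp-EW", "NonUniExp-EW", "Control"]
participants = [f"P{i:02d}" for i in range(1, 31)]  # 30 hypothetical students
groups = assign_balanced(participants, conditions)
print([len(groups[c]) for c in conditions])  # [10, 10, 10]
```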

Keywords: expressive writing, higher education, psychology in education, mixed-methods, mental health, academic stress

Procedia PDF Downloads 68
487 Political Economy and Human Rights Engaging in Conversation

Authors: Manuel Branco

Abstract:

This paper argues that mainstream economics is one of the reasons that can explain the difficulty in fully realizing human rights, because its logic is intrinsically contradictory to human rights, most especially economic, social, and cultural rights. First, its utilitarianism, in both its cardinal and ordinal understandings, contradicts human rights principles. Maximizing aggregate utility along the lines of cardinal utility is a theoretical exercise that consists in ensuring as much as possible that gains outweigh losses in society. In this process an individual may get worse off, though. While mainstream logic is comfortable with this, human rights logic is not. Indeed, universality is a key principle of human rights, and for this reason the maximization exercise should aim at satisfying all citizens' requests when goods and services necessary to secure human rights are at stake. The ordinal version of utilitarianism, in turn, contradicts the human rights principle of indivisibility. Contrary to ordinal utility theory, which ranks baskets of goods, human rights do not accept ranking when these goods and services are necessary to secure human rights. Second, by relying preferably on market logic to allocate goods and services, mainstream economics contradicts human rights because the intermediation of money prices and the purpose of profit may cause exclusion, thus compromising the principle of universality. Finally, mainstream economics sees human rights mainly as constraints on the development of its logic. According to this view, securing human rights would then be considered a cost weighing on economic efficiency and, therefore, something to be minimized. Fully realizing human rights therefore requires a different approach. This paper discusses a human rights-based political economy.
This political economy should, among other characteristics, give up mainstream economics' narrow utilitarian approach, give up the belief that market logic should guide all exchanges of goods and services between human beings, and give up the view of human rights as constraints on rational choice and consequently on good economic performance. Giving up the narrow utilitarian approach means, first, embracing procedural utility and human-rights-aimed consequentialism. Second, a more radical break can be imagined: non-utilitarian, or even anti-utilitarian, approaches may then emerge as alternatives, although these two standpoints are not necessarily mutually exclusive. Giving up market exclusivity means embracing decommodification; more specifically, an approach that takes into consideration the value produced outside the market and an allocation process no longer necessarily centered on money prices. Giving up the view of human rights as constraints means, finally, considering human rights as an expression of wellbeing and a manifestation of choice. This means, in turn, an approach that uses indicators of economic performance other than growth at the macro level and profit at the micro level, because what we measure affects what we do.

Keywords: economic and social rights, political economy, economic theory, markets

Procedia PDF Downloads 150
486 Nutritional Education in Health Resort Institutions in the Face of Demographic and Epidemiological Changes in Poland

Authors: J. Woźniak-Holecka, T. Holecki, S. Jaruga

Abstract:

Spa treatment is an important area of the health care system in Poland due to the increasing needs of the population and the historical conditions surrounding this form of therapy. It extends the range of financing possibilities of the outlets and increases the potential of spa services, which is very important in the context of demographic and epidemiological changes. The main advantages of spa treatment services include relatively wide availability, a low risk of side effects, good patient tolerance, a long-lasting curative effect, and relatively low cost. In addition, patients should be provided with a proper diet and enabled to participate in health education and health promotion classes aimed at health problems consistent with the treatment profile. Challenges for global health care systems include a sharp increase in spending on benefits, the dynamic development of health technologies, and growing social expectations. This requires extending the competences of health resort facilities in health promotion. Within each type of health resort institution in Poland, nutritional education services are implemented, aimed at creating and consolidating proper eating habits. Choosing the right diet can speed up recovery or become one of the methods of alleviating the symptoms of chronic diseases. During spa treatment, the patient learns the principles of rational nutrition and of dietotherapy adequate to his or her diseases. The aim of the project is to assess the frequency and quality of nutritional education provided to patients in health resort facilities from a nationwide perspective. The material for the study will be data obtained from in-depth interviews conducted among Heads of Nutrition Departments of selected institutions. The use of nutritional education in a health resort may be an important goal of state health policy implementation, as a useful tool to reduce the risk of diet-related diseases.
Recognizing nutritional education in health resort institutions as a full-value health service could provide effective systemic support for health policy, including for seniors, given the demographic changes currently occurring in the Polish population. Furthermore, it is necessary to increase the interest and motivation of patients to follow nutritional education recommendations, because doing so will bring tangible benefits for the long-term effects of therapy; care should also be taken over the form and methodology of the nutritional education implemented in health resort institutions. Finally, it is necessary to construct an educational offer for the selected groups of patients with the highest health needs: the elderly and the disabled. In conclusion, the system of nutritional education implemented in Polish health resort institutions should undergo global changes and strong systemic correction.

Keywords: health care system, nutritional education, public health, spa and treatment

Procedia PDF Downloads 114
485 Domestic Trade, Misallocation and Relative Prices

Authors: Maria Amaia Iza Padilla, Ibai Ostolozaga

Abstract:

The objective of this paper is to analyze how transportation costs between regions within a country can affect not only domestic trade but also the allocation of resources in a given region, aggregate productivity, and relative domestic prices (tradable versus non-tradable). On the one hand, there is a vast literature that analyzes the transportation costs faced by countries when trading with the rest of the world; this paper, however, focuses on the effect of transportation costs on domestic trade. Countries differ in their domestic road infrastructure and transport quality. There is also some literature that focuses on the effect of road infrastructure on the price difference between regions, but not on relative prices at the aggregate level. On the other hand, this work is also related to the literature on resource misallocation. Finally, the paper is related to the literature analyzing the effect of trade on the development of the manufacturing sector. Using the World Bank Enterprise Survey database, cross-country differences are observed in the proportion of firms that consider transportation an obstacle. From the International Comparison Program, we obtain a significant negative correlation between GDP per worker and relative prices (manufacturing sector prices relative to service sector prices). Furthermore, there is a significant negative correlation between a country's transportation quality and the relative price of manufactured goods with respect to the price of services in that country. This is consistent with the empirical evidence of a negative correlation between transportation quality and GDP per worker, on the one hand, and the negative correlation between GDP per worker and domestic relative prices, on the other. It is also shown that, within a country, the share of manufacturing firms whose main market is at the local (regional) level is negatively related to the quality of the transportation infrastructure within the country.
Similarly, this index is positively related to the share of manufacturing firms whose main market is national or international. The data also show that those countries with a higher proportion of manufacturing firms operating locally have higher relative prices. With this information in hand, the paper attempts to quantify the effects of the allocation of resources between and within sectors: the higher the trade barriers caused by transportation costs, the less efficient the allocation, which lowers aggregate productivity. Second, a two-sector model is built in which regions within a country trade with each other. With respect to the manufacturing sector, countries with less trade between their regions are characterized by a smaller variety of goods, less productive manufacturing firms on average, and higher relative prices for manufactured goods relative to service sector prices. Thus, the decline in the relative price of manufactured goods in more advanced countries could also be explained by the degree of trade between regions. This trade allows for efficient intra-industry allocation (traders are more productive, and resources are allocated more efficiently).
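The sign of the key cross-country relationship can be illustrated with a toy Pearson correlation between a transport-quality index and the relative price of manufactured goods. The numbers below are invented to mimic the reported negative pattern; they are not actual ICP or Enterprise Survey data.

```python
# Toy illustration of the negative correlation between transport quality
# and the relative price of manufactures. All values are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

transport_quality = [2.1, 3.0, 3.8, 4.5, 5.2, 6.0]        # index, low -> high
relative_price    = [1.40, 1.28, 1.15, 1.05, 0.96, 0.90]  # P_manuf / P_serv
r = pearson(transport_quality, relative_price)
print(r < 0)  # True: better transport quality, lower relative manufacturing prices
```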

Keywords: misallocation, relative prices, TFP, transportation cost

Procedia PDF Downloads 84
484 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high specific strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibration amplitudes and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, as they affect the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors obtained for numerous combinations of sensor/actuator (s/a) pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle. 
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage effectiveness, measured by dividing each sensor's output voltage by the maximum output voltage for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations are in good agreement with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed in these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
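The ranking step described above can be sketched as follows. The voltage matrix is synthetic and the function name is ours, not the authors' finite element code; only the effectiveness measure (per-mode normalisation by the maximum output voltage, then averaging over modes) follows the abstract.

```python
import numpy as np

# Sketch of the sensor-effectiveness ranking: given the output voltage
# of every candidate sensor location for each excited mode, normalise
# by the per-mode maximum (percentage effectiveness), average over the
# modes of interest, and keep the highest-ranked locations.

def rank_sensor_locations(voltages, n_pairs):
    """voltages: (n_locations, n_modes) array of sensor output voltages."""
    v = np.abs(voltages)
    pct = 100.0 * v / v.max(axis=0)          # percentage effectiveness per mode
    avg = pct.mean(axis=1)                   # average over modes
    best = np.argsort(avg)[::-1][:n_pairs]   # locations with highest average
    return best, avg

# Synthetic example: 8 candidate locations, 3 modes.
volts = np.array([
    [0.9, 0.1, 0.8],
    [0.2, 0.9, 0.1],
    [0.8, 0.8, 0.9],   # strong in all three modes
    [0.1, 0.2, 0.1],
    [0.5, 0.5, 0.5],
    [0.3, 0.1, 0.2],
    [0.7, 0.6, 0.4],
    [0.1, 0.1, 0.9],
])
best, avg = rank_sensor_locations(volts, n_pairs=2)
```

Location 2 wins because it is strong in every mode, even though other locations dominate individual modes, which is exactly why the average over modes is used.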

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 230
483 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India, enrolling male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. 
Physicians’ lack of awareness of recent treatment guidelines might also contribute to patients’ poor adherence, for example, through not adequately explaining the benefits and risks of a medication or not giving consideration to the patient’s lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A high proportion of dyslipidemic patients had the comorbid conditions of CVD and diabetes mellitus. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on suboptimal statin dosages, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 200
482 Investigating Sediment-Bound Chemical Transport in an Eastern Mediterranean Perennial Stream to Identify Priority Pollution Sources on a Catchment Scale

Authors: Felicia Orah Rein Moshe

Abstract:

Soil erosion has become a priority global concern, impairing water quality and degrading ecosystem services. In Mediterranean climates, following a long dry period, the onset of rain occurs when agricultural soils are often bare and most vulnerable to erosion. Early storms transport sediments and sediment-bound pollutants into streams, along with dissolved chemicals. This results in loss of valuable topsoil, water quality degradation, and potentially expensive dredged-material disposal costs. Information on the provenance of fine sediment and priority sources of adsorbed pollutants represents a critical need for developing effective control strategies aimed at source reduction. Modifying sediment traps designed for marine systems, this study tested a cost-effective method to collect suspended sediments on a catchment scale to characterize stream water quality during first-flush storm events in a flashy Eastern Mediterranean coastal perennial stream. This study investigated the Kishon Basin, deploying sediment traps in 23 locations, including 4 in the mainstream and one downstream in each of 19 tributaries, enabling the characterization of sediment as a vehicle for transporting chemicals. Further, it enabled direct comparison of sediment-bound pollutants transported during the first-flush winter storms of 2020 from each of 19 tributaries, allowing subsequent ecotoxicity ranking. Sediment samples were successfully captured in 22 locations. Pesticides, pharmaceuticals, nutrients, and metal concentrations were quantified, identifying a total of 50 pesticides, 15 pharmaceuticals, and 22 metals, with 16 pesticides and 3 pharmaceuticals found in all 23 locations, demonstrating the importance of this transport pathway. Heavy metals were detected in only one tributary, identifying an important watershed pollution source with immediate potential influence on long-term dredging costs. 
Simultaneous sediment sampling at first flush storms enabled clear identification of priority tributaries and their chemical contributions, advancing a new national watershed monitoring approach, facilitating strategic plan development based on source reduction, and advancing the goal of improving the farm-stream interface, conserving soil resources, and protecting water quality.

Keywords: adsorbed pollution, dredged material, heavy metals, suspended sediment, water quality monitoring

Procedia PDF Downloads 106
481 Conditional Relation between Migration, Demographic Shift and Human Development in India

Authors: Rakesh Mishra, Rajni Singh, Mukunda Upadhyay

Abstract:

Over the last few decades, the prima facie of development has shifted towards the working population in India. There has been a paradigm shift in the development approach with the realization that the present demographic dividend has to be harnessed for sustainable development. Rapid urbanization and improved socioeconomic characteristics experienced within its territory have catalyzed various forms of migration, resulting in massive transference of workforce between its states. The workforce in any country plays a very crucial role in deciding the development of both the places from where people have out-migrated and the places where they currently reside. In India, people are found to migrate from relatively less developed states to well urbanized and developed states to satisfy their needs. Linking migration to HDI at the place of destination, the regression coefficient (β̂) shows a positive association between them: the higher the HDI of a place, the higher the chance of earning, and hence the more likely migrants are to choose that place as a new destination, and vice versa. The push factor is, however, compromised by the cost of rearing, which discourages in-migrants, reducing their numbers in the metro cities or megacities of these states while increasing their mobility towards suburban areas, and vice versa. The main objective of the study is to examine the role of migration in deciding the dividend of the place of destination as well as of the people at their place of usual residence, with special focus on highly urban states in India. The idealized scenario of Indian migrants points to some new theories in the making. On analyzing the demographic dividend of the places, we find that Uttar Pradesh provides the maximum dividend to Maharashtra, West Bengal, and Delhi, and the demographic dividends of migrants are quite comparable to the natives’ shares in the demographic dividend in these places. 
On analyzing data from the National Sample Survey 64th round and the Census of India 2001, we observed that for males in rural areas, the share of unemployed persons declined by 9 percentage points (from 45% before migration to 36% after migration), and for females in rural areas the decline was nearly 12 percentage points (from 79% before migration to 67% after migration). It has been observed that the shares of unemployed males in both rural and urban areas, which were significant before migration, were reduced after migration, while the share of unemployed females in rural as well as urban areas remained almost negligible both before and after migration. The increase in the number of employed persons after migration thus provides an indication of changes in associated cofactors, like health and education, at the place of destination and, by extension, at the places from which people migrated out. This paper presents evidence on the patterns of prevailing migration dynamics and corresponding demographic benefits in India and its states, examines trends and effects, and discusses plausible explanations.
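The percentage-point declines quoted above follow directly from the before/after shares; a minimal check using only the figures reported in the abstract:

```python
# Percentage-point decline in the share of unemployed persons after
# migration, using the NSS 64th round figures quoted in the abstract.

def pp_decline(before_pct, after_pct):
    return before_pct - after_pct

rural_male = pp_decline(45, 36)     # 9 percentage points
rural_female = pp_decline(79, 67)   # 12 percentage points
```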

Keywords: migration, demographic shift, human development index, multilevel analysis

Procedia PDF Downloads 386
480 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. 
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
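The core AGM loop can be sketched on a linear model problem. The code below substitutes 1D periodic diffusion for the 2D NSE (our simplification, not the paper's solver); for this symmetric discrete operator the adjoint transport is simply another forward sweep, so each step is stable, while naive backward integration would blow up.

```python
import numpy as np

# Minimal AGM sketch on 1D periodic diffusion (a linear stand-in for
# the NSE): recover the initial field u0 from the final field v1 by
# iterating forward solves and adjoint sweeps, instead of integrating
# backwards in time, which is unstable for diffusive dynamics.

nx, nsteps = 64, 50
c = 0.2                                     # nu * dt / dx^2 (stable: c <= 0.5)

def forward(u):
    for _ in range(nsteps):
        u = u + c * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))
    return u

x = 2.0 * np.pi * np.arange(nx) / nx
u_true = np.sin(x) + 0.5 * np.sin(3.0 * x)  # the "unknown" initial condition
v1 = forward(u_true)                        # observed final (target) field

u0 = np.zeros(nx)                           # initial guess
for _ in range(200):
    r = forward(u0) - v1                    # residual, i.e. dJ/du1
    grad = forward(r)                       # adjoint transport back to t = 0
    u0 = u0 - 1.0 * grad                    # gradient-descent update of u0
```

Every sweep integrates diffusion forward, so the iteration is stable; for real turbulence, the paper additionally needs the time segmentation and the high-order Laplacian regularizer to tame the local minima of J.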

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 222
479 Modulation of Receptor-Activation Due to Hydrogen Bond Formation

Authors: Sourav Ray, Christoph Stein, Marcus Weber

Abstract:

A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states are adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed, with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. 
Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite, yet more pronounced, HB formation trend compared to naloxone: whereas a marginal number of HBs could be observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of MOR. However, HBs play a significant role in the DAMGO-HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments of opioid overdose or opioid-induced side effects.
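Tracking HB occurrence across trajectories is typically based on a geometric criterion. The sketch below uses a common donor-acceptor distance plus donor-hydrogen-acceptor angle cutoff on raw coordinate arrays (synthetic here); the cutoffs and function names are our assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

# A common geometric hydrogen-bond convention (assumed here):
# donor-acceptor distance < 3.5 A and donor-H-acceptor angle > 150 deg.
# Occupancy = fraction of trajectory frames in which the bond is present.

def is_hbond(donor, hydrogen, acceptor, d_cut=3.5, ang_cut=150.0):
    d_da = np.linalg.norm(acceptor - donor)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return d_da < d_cut and angle > ang_cut

def occupancy(frames):
    """frames: iterable of (donor, hydrogen, acceptor) coordinate triples."""
    hits = [is_hbond(d, h, a) for d, h, a in frames]
    return sum(hits) / len(hits)

# Two synthetic frames: one linear close contact, one broken bond.
bonded = (np.array([0.0, 0.0, 0.0]),   # donor
          np.array([1.0, 0.0, 0.0]),   # hydrogen
          np.array([2.9, 0.0, 0.0]))   # acceptor: d = 2.9 A, angle = 180 deg
broken = (np.array([0.0, 0.0, 0.0]),
          np.array([1.0, 0.0, 0.0]),
          np.array([1.0, 4.0, 0.0]))   # acceptor too far, angle only 90 deg
```

Comparing occupancies of the same donor-acceptor pair between the pH 5 and pH 7 trajectories then yields trends like those reported for naloxone and DAMGO.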

Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation

Procedia PDF Downloads 174
478 Evaluation of Tensile Strength of Natural Fibres Reinforced Epoxy Composites Using Fly Ash as Filler Material

Authors: Balwinder Singh, Veerpaul Kaur Mann

Abstract:

A composite material is formed by the combination of two or more phases or materials. Basalt fiber, derived from natural minerals, is being introduced in the polymer composite industry due to its good mechanical properties, which are similar to those of synthetic fibers, its low cost, and its environmental friendliness. There is also a rising trend towards the use of industrial wastes as fillers in polymer composites, with the aim of improving the properties of the composites. The mechanical properties of fiber-reinforced polymer composites are influenced by various factors like fiber length, fiber weight %, filler weight %, filler size, etc. Thus, a detailed study has been done on the characterization of short-chopped basalt fiber-reinforced polymer matrix composites using fly ash as filler. Taguchi’s L9 orthogonal array has been used to develop the composites, considering fiber length (6, 9 and 12 mm), fiber weight % (25, 30 and 35%) and filler weight % (0, 5 and 10%) as input parameters with their respective levels, and a thorough analysis of the mechanical characteristics (tensile strength and impact strength) has been done using ANOVA with the help of MINITAB14 software. The investigation revealed that fiber weight % is the most significant parameter affecting tensile strength, followed by fiber length and filler weight %, while impact characterization showed that fiber length is the most significant factor, followed by fly ash weight %. The introduction of fly ash proved to be beneficial in both characterizations, with enhanced values up to 5% fly ash weight. The present study thus uses natural fibre reinforced epoxy composites with fly ash as a filler material to study the effect of the input parameters on tensile strength, in order to maximize the tensile strength of the composites. 
Composites were fabricated based on a Taguchi L9 orthogonal array design of experiments, using three factors, fibre type, fibre weight %, and fly ash %, with three levels of each factor. Optimization of the composition of the natural fibre reinforced composites using ANOVA, to obtain maximum tensile strength of the fabricated composites, revealed that the natural fibres along with fly ash can be successfully used with epoxy resin to prepare polymer matrix composites with good mechanical properties. Paddy: paddy fibre gives high elasticity to the fibre composite due to the approximately hexagonal structure of the cellulose present in paddy fibre. Coir: coir fibre gives less tensile strength than paddy fibre, as coir fibre is brittle in nature and breaks when pulled, showing less tensile strength. Banana: banana fibre has the least tensile strength in comparison to paddy and coir fibre due to its lower cellulose content. Higher fibre weight leads to a reduction in tensile strength due to an increased number of air pockets. Increasing fly ash content reduces tensile strength due to poor bonding of the fly ash particles with the natural fibre; fly ash is also not as strong as the epoxy resin, leading to a reduction in tensile strength.
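The L9 design used above assigns three 3-level factors to an orthogonal array in which every level pairing of any two columns occurs exactly once. The sketch below builds the standard array and ranks factors by a simple main-effects (range) analysis on hypothetical strength values; this is a lighter stand-in for the full MINITAB ANOVA, and the response numbers are invented for illustration.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array (9 runs, up to four 3-level
# factors; the first 3 columns are used here). Levels coded 0..2.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

def main_effects(design, response):
    """Mean response at each level of each factor, plus the level range
    (max - min of level means), used to rank factor significance."""
    n_factors = design.shape[1]
    means = np.array([[response[design[:, f] == lv].mean()
                       for lv in range(3)] for f in range(n_factors)])
    ranges = means.max(axis=1) - means.min(axis=1)
    return means, ranges

# Hypothetical tensile strengths (MPa) for the 9 runs, constructed so
# that factor 1 (say, fibre weight %) dominates the response.
strength = np.array([30, 42, 35, 31, 44, 33, 29, 43, 36], dtype=float)
means, ranges = main_effects(L9, strength)
dominant = int(np.argmax(ranges))   # index of the most significant factor
```

Because the array is orthogonal, each level mean averages over a balanced mix of the other factors' levels, which is what makes this ranking meaningful with only nine runs.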

Keywords: basalt fiber, epoxy resin, natural fiber, polymer matrix, Taguchi, tensile strength

Procedia PDF Downloads 47
477 A Sustainable Pt/BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ Catalyst for Dry Reforming of Methane Derived from Recycled Primary Pt

Authors: Alessio Varotto, Lorenzo Freschi, Umberto Pasqual Laverdura, Anastasia Moschovi, Davide Pumiglia, Iakovos Yakoumis, Marta Feroci, Maria Luisa Grilli

Abstract:

Dry reforming of methane (DRM) is considered one of the most valuable technologies for greenhouse gas valorization, since this reaction yields syngas, a mixture of H₂ and CO, in an H₂/CO ratio suitable for the Fischer-Tropsch synthesis of high value-added chemicals and fuels. Challenges of the DRM process include the cost associated with the high process temperature, the high cost of the precious metals in the catalyst, sintering of the metal particles, and carbon deposition on the catalyst surface. The aim of this study is to demonstrate the feasibility of synthesizing catalysts using a leachate solution containing Pt coming directly from the recovery of spent diesel oxidation catalysts (DOCs), without further purification. An unusual perovskite support for DRM, BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ (BCZG), has been chosen as the catalyst support because of its high thermal stability and capability to produce oxygen vacancies, which suppress carbon deposition and enhance the catalytic activity of the catalyst. BCZG perovskite has been synthesized by a sol-gel modified Pechini process and calcined in air at 1100 °C. BCZG supports have been impregnated with a Pt-containing leachate solution of DOC, obtained by a mild hydrometallurgical recovery process, as reported elsewhere by some of the authors of this manuscript. For comparison, a synthetic solution obtained by digesting commercial Pt-black powder in aqua regia was used for BCZG support impregnation. The Pt nominal content was 2% in both BCZG-based catalysts formed from the real and synthetic solutions. The structure and morphology of the catalysts were characterized by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM). Thermogravimetric Analysis (TGA) was used to study the thermal stability of the catalyst samples. Brunauer-Emmett-Teller (BET) analysis confirmed the high surface area of the catalysts. 
H₂-TPR (Temperature Programmed Reduction) analysis was used to study hydrogen consumption and catalyst reducibility, and was combined with H₂-TPD characterization to study the dispersion of Pt on the support surface and to calculate the number of active sites of the precious metal. The dry reforming of methane (DRM) reaction, carried out in a fixed bed reactor, showed a high conversion efficiency of CO₂ and CH₄. At 850 °C, CO₂ and CH₄ conversion were close to 100% for the catalyst obtained with the aqua regia-based solution of commercial Pt-black, and ~70% (for CH₄) and ~80% (for CO₂) in the case of the real HCl-based leachate solution. The H₂/CO ratios were ~0.9 and ~0.7, respectively. As far as we know, this is the first pioneering work in which a BCZG catalyst and a real Pt-containing leachate solution were successfully employed for the DRM reaction.
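Conversion and the H₂/CO ratio quoted above follow standard definitions on inlet and outlet molar flows; the flows in the sketch below are hypothetical round numbers chosen to reproduce the leachate-catalyst figures, not the authors' measurements.

```python
# Standard definitions used when reporting DRM performance:
#   conversion X_i = (F_in_i - F_out_i) / F_in_i
#   syngas quality = F_H2 / F_CO at the reactor outlet.
# The molar flows below are hypothetical, for illustration only.

def conversion(f_in, f_out):
    return (f_in - f_out) / f_in

def h2_co_ratio(f_h2, f_co):
    return f_h2 / f_co

x_ch4 = conversion(f_in=1.00, f_out=0.30)   # ~70% CH4 conversion
x_co2 = conversion(f_in=1.00, f_out=0.20)   # ~80% CO2 conversion
ratio = h2_co_ratio(f_h2=1.26, f_co=1.80)   # H2/CO ~ 0.7
```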

Keywords: dry reforming of methane, perovskite, PGM, recycled Pt, syngas

Procedia PDF Downloads 35
476 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems, in terms of the resulting final energy demands, in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but necessitates simplifying the building energy system operation. 
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
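The selection problem at the heart of both approaches (choose modernization measures under a budget) can be illustrated with a tiny brute-force knapsack. The measure names, costs, and CO₂ savings below are invented for illustration; in the paper's setting, a MILP solver, multi-year budgets, and realistic cost/emission models replace this sketch.

```python
from itertools import combinations

# Toy version of the portfolio prioritization problem: pick the subset
# of modernization measures maximizing CO2 savings within one year's
# budget. All names and numbers are hypothetical.

measures = {
    "envelope_insulation_bldg_A": (80, 12.0),  # (cost kEUR, t CO2/a saved)
    "heat_pump_bldg_A":           (60, 18.0),
    "pv_system_bldg_B":           (40,  6.0),
    "heat_pump_bldg_C":           (70, 15.0),
}

def best_plan(measures, budget):
    names = list(measures)
    best, best_saving = (), 0.0
    for r in range(len(names) + 1):            # enumerate all subsets
        for subset in combinations(names, r):
            cost = sum(measures[m][0] for m in subset)
            saving = sum(measures[m][1] for m in subset)
            if cost <= budget and saving > best_saving:
                best, best_saving = subset, saving
    return set(best), best_saving

plan, saving = best_plan(measures, budget=150)
```

Brute force is exponential in the number of measures, which is exactly why real portfolio problems are formulated as MILPs and handed to efficient solvers, at the price of the operational simplifications discussed above.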

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 32