Search results for: spatial point process
184 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba
Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu
Abstract:
As a by-product of yuba, processing wastewater of yuba (PWY) contains many bioactive components, such as soybean isoflavones, soybean polysaccharides and soybean oligosaccharides; it is a good source of prebiotics and has potential for high-value utilization. PWY fermented with Lactobacillus plantarum can be considered a potential biogenic agent that regulates the balance of the intestinal microbiota. In this study, Lactobacillus plantarum was first used to ferment PWY to increase its content of active components and its antioxidant activity. The health effects of the fermented processing wastewater of yuba (FPWY) were then measured in vitro. Finally, microencapsulation was applied to improve the sustained release of FPWY, to reduce the loss of active components during digestion, and to preserve the activity of FPWY. The main results are as follows: (1) FPWY presented good antioxidant capacity, with a DPPH free radical scavenging ability of 0.83 ± 0.01 mmol Trolox/L, an ABTS free radical scavenging ability of 7.47 ± 0.35 mmol Trolox/L and an iron-ion-reducing ability of 1.11 ± 0.07 mmol Trolox/L. Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in the content of total soybean isoflavones, but the content of glucoside soybean isoflavones decreased and that of aglycone soybean isoflavones increased significantly. Fermentation effectively reduced the soluble monosaccharides, disaccharides and oligosaccharides of PWY, such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose and sucrose. (2) FPWY can significantly enhance the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus and Akkermansia, significantly inhibit the growth of the harmful bacterium E. coli, regulate the structure of the intestinal microbiota, and significantly increase the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid and isovaleric acid. The higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) To improve the stability of soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials for embedding. The FPWY freeze-dried powder was embedded using an acute coagulation bath. The results show that with a core-to-wall ratio of 3:1, a chitosan concentration of 1.5%, a sodium alginate concentration of 2.0% and a calcium concentration of 3%, the embedding rate was 53.20%. In the simulated in vitro digestion stage, the release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion; the core material of these microcapsules with good sustained-release performance was thus almost completely released. Structural analysis of the FPWY microcapsules shows that they have good mechanical properties: hardness 117.75 ± 0.21 g, springiness 0.76 ± 0.02, cohesiveness 0.54 ± 0.01, gumminess 63.28 ± 0.71 g·sec, chewiness 48.03 ± 1.37 g·sec and resilience 0.31 ± 0.01. Compared with the unembedded FPWY, the infrared spectra showed that the microcapsules embedded the FPWY freeze-dried powder.
Keywords: processing wastewater of yuba, Lactobacillus plantarum, intestinal microbiota, microcapsule
Procedia PDF Downloads 75
183 International Solar Alliance: A Case for Indian Solar Diplomacy
Authors: Swadha Singh
Abstract:
The International Solar Alliance (ISA) is the foremost treaty-based global organization concerned with tapping the potential of sun-abundant nations between the Tropics of Cancer and Capricorn, and it enables cooperation among them. As a founding member of the International Solar Alliance, India exhibits its positioning as an upcoming leader in clean energy. India has set ambitious goals and targets to expand the share of solar in its energy mix and is playing a proactive role both at the regional and global levels. ISA aims to serve multiple goals: bring about large-scale commercialization of solar power, boost domestic manufacturing, and leverage solar diplomacy in African countries, amongst others. Against this backdrop, this paper attempts to examine the ways in which ISA, as an intergovernmental organization under Indian leadership, can leverage the cause of clean energy (solar) diplomacy and effectively shape partnerships and collaborations with other developing countries in terms of sharing solar technology, capacity building, risk mitigation, mobilizing financial investment and providing an aggregate market. A more specific focus of ISA is on the developing countries, which, in the absence of a collective, are constrained by technology and capital scarcity despite being naturally endowed with solar resources. Solar-rich but finance-constrained economies face political risk, foreign exchange risk, and off-taker risk. Scholars argue that aligning India’s climate change discourse and growth prospects in its engagements, collaborations, and partnerships at the bilateral, multilateral and regional levels can help promote trade, attract investments, and promote a resilient energy transition both in India and in partner countries. For developing countries, coming together in an action-oriented way on issues of climate and clean energy is particularly important, since it is developing and underdeveloped countries that face multiple and coalescing challenges such as the adverse impact of climate change, uneven and low access to reliable energy, and pressing employment needs. Investing in green recovery is agreed to be an assured way to create resilient value chains, create sustainable livelihoods, and help mitigate climate threats. If India is able to ‘green its growth’ process, it holds the potential to emerge as a climate leader internationally. It can use its experience in the renewable sector to guide other developing countries in balancing the multiple similar objectives of development, energy security, and sustainability. The challenges underlying solar expansion in India have lessons to offer other developing countries, giving India an opportunity to assume a leadership role in solar diplomacy and expand its geopolitical influence through intergovernmental organizations such as ISA. It is noted that India has limited capacity to directly provide financial funds and support and is not a leading manufacturer of cheap solar equipment, as China is; however, India can nonetheless leverage its large domestic market to scale up the commercialization of solar power and offer insights and learnings to similarly placed solar-abundant countries. The paper examines the potential of, and the limits placed on, India’s solar diplomacy.
Keywords: climate diplomacy, energy security, solar diplomacy, renewable energy
Procedia PDF Downloads 118
182 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
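The abstract above does not specify the network architecture or the exact loss terms. Purely as an illustration of the kind of setup it describes (a CNN trained with a cost function combining several polarimetric terms), a minimal PyTorch sketch follows; the channel layout, layer sizes and loss weights are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    """Maps hybrid-polarimetric input channels (complex data stored as
    real/imaginary planes) to the missing polarimetric channels.
    The architecture here is purely illustrative."""
    def __init__(self, in_ch=4, out_ch=4, width=64, depth=6):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_pix=1.0, w_span=0.1):
    """Combines a pixel-wise term with a term on a derived polarimetric
    feature (total power / span), mimicking a loss built from several
    scattering-related terms. The weights are assumed."""
    pix = nn.functional.mse_loss(pred, target)
    span_pred = (pred ** 2).sum(dim=1)   # total power per pixel
    span_true = (target ** 2).sum(dim=1)
    span = nn.functional.mse_loss(span_pred, span_true)
    return w_pix * pix + w_span * span
```

A real pipeline would supervise such a network with co-registered fully polarimetric acquisitions as targets, which is consistent with, but not stated by, the abstract.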
Procedia PDF Downloads 66
181 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment that maximizes the effectiveness of work at every step and, secondly, a way to use quantitative methods for evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues that arise with these methods are the selection of experts, the management of the assessment procedure, the processing of the results and the remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - realizing, within one platform, the independent activities of different workgroups (e.g., expert officers, managers); - establishing different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; - forming the required output documents for each workgroup; - configuring information gathering for each workgroup (forms of assessment, tests, inventories); - creating and operating personal databases of remote users; - setting up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was made so that the experts may submit not only their personal data, place of work and scientific degree but also keywords describing their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1,500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. The last stage: the choice of the experts is approved by the supervisor, and e-mails are sent to the experts with an invitation to assess the project. An expert supervisor monitors the experts' report writing so that all formalities are in place (time frame, propriety, correspondence). If the difference between the two assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on his expert opinion, the system shows a contract marked ‘new’, the managers process the contract, and the expert receives an e-mail stating that the contract is formed and ready to be signed. All formalities are then concluded, and the expert receives remuneration for his work. The specifics of the interaction of the examination officer with the experts will be presented in the report.
Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
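The selection and arbitration rules described above (two experts matched per application by keyword overlap, with a third evaluation appointed when the two assessments differ by more than four points) can be expressed in a few lines. The sketch below is hypothetical; all names and data structures are illustrative, not the system's actual code.

```python
def match_experts(application_keywords, experts, n=2):
    """Rank experts by keyword overlap with the application and pick
    the top n, as the expert officer does in the system."""
    def overlap(expert):
        return len(set(expert["keywords"]) & set(application_keywords))
    ranked = sorted(experts, key=overlap, reverse=True)
    return ranked[:n]

def needs_third_evaluation(score_a, score_b, threshold=4):
    """A third evaluation is appointed when the two assessments
    differ by more than the threshold."""
    return abs(score_a - score_b) > threshold

experts = [
    {"name": "A", "keywords": ["econometrics", "statistics"]},
    {"name": "B", "keywords": ["sociology", "statistics"]},
    {"name": "C", "keywords": ["linguistics"]},
]
selected = match_experts(["statistics", "econometrics"], experts)
print([e["name"] for e in selected])   # ['A', 'B']
print(needs_third_evaluation(9, 4))    # True: difference of 5 points
```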
Procedia PDF Downloads 204
180 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery
Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal
Abstract:
Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerisation; however, this technique requires conductive surfaces from which polymerization is initiated. On the other hand, VPP produces highly organized, biocompatible CP structures, while polymerization can be achieved onto a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing this deposition and then spin coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 sec. VPP of both porous and non-porous PEDOT was achieved by exposing the samples to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate-buffered saline (PBS) placed in a water bath operating at 37 °C and 100 rpm. The film was stimulated (continuous pulse of ±1 V at 0.5 Hz for 17 min) while immersed in PBS. Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and were analysed for dexP by high-performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which related well to that of PEDOT, and also showed that one dexP molecule was present per almost three EDOT monomer units. The reproducible electroactive nature was shown by several cycles of reduction and oxidation via CV. Drug release revealed successful drug loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures without any significant statistical difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous, as indicated by AFM and SEM. These porous structures exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.
Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template-based porous PEDOT structures, vapour phase polymerization of PEDOT
Procedia PDF Downloads 230
179 Evaluation of Forensic Pathology Practice Outside Germany – Experiences From 20 Years of Second Look Autopsies in Cooperation with the Institute of Legal Medicine Munich
Authors: Michael Josef Schwerer, Oliver Peschel
Abstract:
Background: The sense and purpose of forensic postmortem examinations are undoubtedly the same in institutes of legal medicine all over the world. The cause and manner of death must be determined, persons responsible for an unnatural death must be brought to justice, and accidents demand changes to the respective scenarios to avoid future mishaps. The latter particularly concerns aircraft accidents, not only regarding the consequences under criminal or civil law but also in pursuance of the International Civil Aviation Organization's regulations, which demand that lessons be drawn from mishap investigations to improve flight safety. Irrespective of the distinct circumstances of a given casualty or the respective questions in subsequent death investigations, a forensic autopsy is the basis for all further casework, the clue to otherwise hidden solutions, and the crucial limitation for final success when not all possible findings have been properly collected. This also implies that the targeted work of police forces and expert witnesses strongly depends on the quality of forensic pathology practice. Deadly events in foreign countries that lead to investigations not only abroad but also in Germany can be challenging in this context. Frequently, second-look autopsies after repatriation of the deceased to Germany are requested by the legal authorities to ensure proper and profound documentation of all relevant findings. Aims and Methods: To evaluate forensic postmortem practice abroad, a retrospective study was carried out using the findings of the corresponding second-look autopsies at the Institute of Legal Medicine Munich over the last 20 years. New findings unreported in the previous autopsy were recorded and judged for their relevance to solving the respective case. Further, the condition of the corpse at the time of the second autopsy was rated in order to discuss artifacts mimicking evidence and the possibility of findings lost to, e.g., decomposition. Recommendations for the future handling of death cases abroad and for efficient autopsy practice were pursued. Results and Discussion: Our re-evaluation confirmed a high quality of autopsy practice abroad in the vast majority of cases. However, in some casework, incomplete documentation of pathology findings was revealed, along with either insufficient or misconducted dissection of organs. Further, some of the bodies showed missing parts of some organs, most probably resulting from sampling for histology studies during the first postmortem. For the aeromedical evaluation of a decedent's health status prior to an aviation mishap, lost or obscured findings, particularly in the heart, lungs, and brain, impeded expert testimony. Moreover, incomplete fixation of the body or body parts for repatriation was seen in several cases. This particularly involved previously dissected organs deposited back into the body cavities at the end of the first autopsy. Conclusions and Recommendations: Detailed preparation in the first forensic autopsy avoids the necessity of a second-look postmortem in the majority of cases. To limit decomposition changes during repatriation from abroad, special care must be taken to include pre-dissected organs in the chemical fixation process, particularly when they are separated from the blood vessels and just deposited back into the body cavities.
Keywords: autopsy practice, second-look autopsy, retrospective study, quality standards, decomposition changes, repatriation
Procedia PDF Downloads 47
178 The Impact of the Media in the Implementation of Qatar’s Foreign Policy on the Public Opinion of the People of the Middle East (2011-2023)
Authors: Negar Vkilbashi, Hassan Kabiri
Abstract:
Modern diplomacy, in its general form, addresses the people rather than governments, and diplomatic tactics are directed more at the people than at governments. Media diplomacy and cyber diplomacy are sub-branches of public diplomacy and, in fact, describe the role of the media in the process of influencing public opinion and directing foreign policy. Mass media, including print, radio and television, theater, satellite, the internet, and news agencies, transmit information and demands. What the Qatari government tried to implement in the countries of the region during the Arab Spring and afterwards was carried out through its most important media outlet, Al Jazeera. The embargo on Qatar began in 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt imposed a land, sea, and air blockade against the country. The media tool constitutes the cornerstone of soft power in the field of foreign policy, to which Qatari leaders have consistently resorted over the past two decades. Undoubtedly, the role it played in covering the events of the Arab Spring has created geopolitical tensions. The United Arab Emirates and other neighboring countries sometimes criticize Al Jazeera for providing a platform for the Muslim Brotherhood, Hamas, and other Islamists to promote their ideology. In 2011, at the time of the Arab Spring, Al Jazeera reached the peak of its popularity. Al Jazeera's live coverage of protests in Tunisia, Egypt, Yemen, Libya, and Syria helped create a unified narrative of the Arab Spring, with audiences tuning in every Friday to watch simultaneous protests across the Middle East. Al Jazeera operates in three ways: First, it is a powerful base in the hands of the government, allowing it to direct and influence Arab public opinion; the network has thus been able to benefit from the unlimited financial support of the Qatari government to promote its desired policies and culture. Second, it has provided an attractive platform for politicians and scientific and intellectual elites, thereby attracting their support and defense of the government and its rulers. Third, during the last years of Prince Hamad's reign, the Al Jazeera network formed a deterrent weapon to counter media and political campaigns. The importance of the research is that this network covers a wide range of people in the Middle East and therefore has a strong influence on the decision-making of countries. On the other hand, Al Jazeera is influential as a tool of public diplomacy and soft power in Qatar's foreign policy, and by studying it, the results of its effectiveness in past years can be examined. Using a qualitative method, this research analyzes the impact of the media, in the implementation of Qatar's foreign policy, on the public opinion of the people of the Middle East. Data collection was done by the secondary method, that is, reading related books, magazine articles, newspaper reports and articles, and analytical reports of think tanks. The most important findings of the research are that Al Jazeera plays an important role in Qatar's public diplomacy and foreign policy: in 2011, 2017 and 2023 it played an important role in Qatar's foreign policy during various crises, and the people of Arab countries use Al Jazeera as their first point of reference.
Keywords: Al Jazeera, Qatar, media, diplomacy
Procedia PDF Downloads 78
177 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 89
176 Nanocomposite Effect Based on Silver Nanoparticles and Anemopsis Californica Extract as Skin Restorer
Authors: Maria Zulema Morquecho Vega, Fabiola Carolina Miranda Castro, Rafael Verdugo Miranda, Ignacio Yocupicio Villegas, Ana Lidia Barron Raygoza, Martin Enrique Marquez Cordova, Jose Alberto Duarte Moller
Abstract:
Background: Anemopsis californica, also called 'tame grass', belongs to the Saururaceae family; it is a small, green plant. The leaf blade is long and wide, and the plant gives a white flower. The plant population is only found in humid, swampy habitats; it grows where there is water, along the banks of streams and water holes, and in the winter it dries up. The leaves, rhizomes, or roots of this plant have been used to treat a range of diseases. Its healing properties are used to treat wounds, cold and flu symptoms, spasmodic cough, infection, pain and inflammation, burns, swollen feet, as well as lung ailments, asthma, circulatory problems (varicose veins) and rheumatoid arthritis; it purifies the blood, helps in urinary and digestive tract diseases, sores and healing, and is used for headache, sore throat, diarrhea, and kidney pain. The tea made from the leaves and roots is used to treat uterine and womb cancer, relieves menstrual pain and stops excessive bleeding after childbirth. It is also used as a gynecological treatment for infections, hemorrhoids, candidiasis and vaginitis. Objective: To study the cytotoxicity of gels prepared with silver nanoparticles in AC extract combined with chitosan, collagen and hyaluronic acid as an alternative therapy for skin conditions. Methods: The Ag NPs were synthesized according to the following method. A 0.3 mg/mL solution is prepared in 10 mL of deionized water and adjusted to pH 12 with NaOH, under constant magnetic stirring at a temperature of 80 °C. Subsequently, 100 µL of a 0.1 M AgNO3 solution is added, and stirring is kept constant for 15 min. Once the reaction is complete, measurements are performed by UV-Vis. A gel was prepared in a 5% acetic acid solution with the respective silver nanoparticles in AC extract. Chitosan is added until gelation begins to occur. At that time, collagen is added in a ratio of 3 to 5 drops and, later, hyaluronic acid at 2% of the total compound formed. Finally, after resting for 24 hours, the cytotoxic effect of the gels was studied in the presence of the gram-positive bacterium Staphylococcus aureus and the gram-negative Escherichia coli. Cultures were incubated for 24 hours in the presence of the compound and compared with the reference. Results: The silver nanoparticles obtained had a spherical shape and sizes between 20 and 30 nm. UV-Vis spectra confirm the presence of silver nanoparticles, showing a surface plasmon around 420 nm. Finally, the tests in the presence of bacteria showed a good antibacterial property of this nanocompound, and tests in people were successful. Conclusion: The gel prepared by biogenic synthesis showed beneficial effects in severe acne, acne vulgaris and wound healing in diabetic patients.
Keywords: Anemopsis californica, nanomedicine, biotechnology, biomedicine
Procedia PDF Downloads 114
175 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study
Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier
Abstract:
An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor submitted to day-night-cycle heat exchanges in the southwestern part of France and to identify the heat transfer phenomena that take place in both processes, charge and discharge. The main features of this PEH significant to this study are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a 68 mm thick concrete layer as the upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automatized openings of different dimensions: four located on the south facade, four on the north facade and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten 'measurement poles' (MP) were distributed all over the concrete-floor surface. Each MP represented a measurement zone where air and surface temperatures and convection and radiation heat fluxes were measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge processes, relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. The experimental data, after processing, showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, radiation heat exchanges at the floor surface were significantly higher than convection. On the other hand, convection heat exchanges were significantly higher than radiation in the discharge period. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, relating the Nusselt number to the relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from measurements. The results showed that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number. This may suggest that forced convection is stronger than natural convection. Yet, the airspeed levels encountered suggest that natural convection, rather than forced convection, should take place. Despite this, the Richardson number values encountered indicate otherwise. During the charge period, the air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house
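As a hedged illustration of the correlation analysis the abstract above describes (relating the Nusselt number to the Peclet, Rayleigh and Richardson numbers via linear regression), the following Python sketch computes the dimensionless groups and fits a power-law correlation in log space. All measurement arrays and air property values are invented placeholders, and the fitted form Nu = C·Pe^m is only one of the candidate correlations mentioned.

```python
import numpy as np

# Hypothetical measurement arrays (one value per time step):
# h  : convective heat transfer coefficient, W/(m^2 K)
# U  : airspeed over the slab, m/s
# dT : surface-to-air temperature difference, K
h = np.array([2.1, 2.8, 3.5, 4.2, 4.9])
U = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
dT = np.array([1.5, 2.0, 2.5, 3.0, 3.5])

L = 1.0          # characteristic length of the slab, m (assumed)
k = 0.026        # thermal conductivity of air, W/(m K)
alpha = 2.2e-5   # thermal diffusivity of air, m^2/s
nu = 1.6e-5      # kinematic viscosity of air, m^2/s
beta = 3.4e-3    # thermal expansion coefficient of air, 1/K
g = 9.81

Nu = h * L / k                             # Nusselt number
Pe = U * L / alpha                         # Peclet number (forced convection)
Ra = g * beta * dT * L**3 / (nu * alpha)   # Rayleigh number (natural convection)
Ri = g * beta * dT * L / U**2              # Richardson number (mixed regime)

# Fit Nu = C * Pe^m as a linear regression in log space.
m, logC = np.polyfit(np.log(Pe), np.log(Nu), 1)
print(f"Nu = {np.exp(logC):.3f} * Pe^{m:.3f}")
```

The same regression applied with Ra in place of Pe, and the sign of (Ri - 1), is what would distinguish natural-dominated from forced-dominated periods in the manner the abstract discusses.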
Procedia PDF Downloads 413
174 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement
Authors: Tracey Woods
Abstract:
In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians now have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper shall explore the potential of the academic librarian's role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books to a community-based information hub, a potential opportunity arises. The academic librarian's role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of the academic. This suggests that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, there is not a great deal of literature available from recent studies. Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred around methods for developing a fragile archive into a series of tactile objects, it furthers knowledge and understanding of both the role of the library as a facilitator of projects, chairing and supporting, and its contribution to the research process, innovating ideas through the bank of knowledge found amongst the staff and their liaising capabilities. This paper shall present the method of project development from the initiation of ideas to the development of prototypes and the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced with the adaptation and evolutionary speculations initiated by the design team when adapted as a teaching studio method. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses.
As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable.
Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders
Procedia PDF Downloads 68
173 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects residing in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures to the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
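The abstract above contrasts normalized relational storage with the NoSQL approach SPARK adopts. The following is a hypothetical sketch of the document-oriented idea (one document per subject, so a GWAS import becomes a bulk insert rather than millions of (subject, SNP, genotype) rows), using MongoDB via pymongo; SPARK's actual schema and database are not given in the abstract, so everything below is assumed.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["spark_demo"]

# One document per subject: the whole SNP vector travels together,
# instead of being normalized into one row per (subject, SNP, genotype).
subjects = [
    {
        "subject_id": "S001",
        "chip": "650k-array",
        "genotypes": {"rs123": "AA", "rs456": "AG", "rs789": "GG"},
    },
    {
        "subject_id": "S002",
        "chip": "650k-array",
        "genotypes": {"rs123": "AG", "rs456": "GG", "rs789": "GG"},
    },
]
db.subjects.insert_many(subjects)  # bulk import of a (tiny) dataset

# A typical genomics-domain query: all subjects carrying a genotype.
carriers = db.subjects.find({"genotypes.rs123": "AG"})
print([s["subject_id"] for s in carriers])
```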
Procedia PDF Downloads 269
172 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, of the decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, the effective analysis of which requires machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than to proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding movement with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
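The abstract above does not detail the network. Purely as an illustration of the described setup (a convolutional network mapping 1-second multichannel ECoG windows to an accelerometer value, scored by the correlation coefficient r), a hedged PyTorch sketch follows; the channel count, sampling rate and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ECoGDecoder(nn.Module):
    """Maps a 1-second multichannel ECoG window (channels x samples)
    to one accelerometer value. All sizes are illustrative."""
    def __init__(self, n_channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):               # x: (batch, n_channels, samples)
        z = self.conv(x).squeeze(-1)    # (batch, 32)
        return self.head(z).squeeze(-1)

def pearson_r(pred, target):
    """Decoding accuracy as the correlation coefficient r."""
    p = pred - pred.mean()
    t = target - target.mean()
    return (p * t).sum() / (p.norm() * t.norm())

model = ECoGDecoder()
x = torch.randn(8, 32, 1000)   # 8 one-second windows at an assumed 1 kHz
y = torch.randn(8)             # corresponding accelerometer readings
print(pearson_r(model(x), y))
```

Causal versus non-causal decoding, as described in the abstract, would correspond to placing the ECoG window strictly before the accelerometer sample or centering it around that sample.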
Procedia PDF Downloads 176
171 Analysis of Complex Business Negotiations: Contributions from Agency-Theory
Authors: Jan Van Uden
Abstract:
The paper reviews classical agency-theory and its contributions to the analysis of complex business negotiations and gives an approach for modifying the basic agency model in order to examine the negotiation-specific dimensions of agency problems. By illustrating fundamental potentials for the modification of agency-theory in the context of business negotiations, the paper highlights recent empirical research that investigates agent-based negotiations and inter-team constellations. A general theoretical analysis of complex negotiations would be based on a two-level approach: first, the modification of the basic agency model in order to illustrate the organizational context of business negotiations (i.e., multi-agent issues, common agencies, multi-period models and the concept of bounded rationality); second, the application of the modified agency model to complex business negotiations to identify agency problems and the related areas of risk in the negotiation process. The paper is placed on the first level of analysis, the modification. The method builds on the one hand on insights from behavioral decision research (BDR) and on the other hand on findings from agency-theory as normative directives for the modification of the basic model. Through neoclassical assumptions concerning the fundamental aspects of agency relationships in business negotiations (i.e., asymmetric information, self-interest, risk preferences and conflicts of interest), agency-theory helps to draw solutions for stated worst-case scenarios taken from the daily negotiation routine. As agency-theory is the only universal approach able to identify trade-offs between certain aspects of economic cooperation, the insights obtained provide a deeper understanding of the forces that shape business negotiation complexity. The need for a modification of the basic model is illustrated by highlighting selected issues of business negotiations from an agency-theory perspective. Negotiation teams require a multi-agent approach, given that decision-makers, as superior agents, are often part of the team. The diversity of competences and decision-making authority is a phenomenon that overrides the assumptions of classical agency-theory and varies greatly across certain forms of business negotiations. Further, the basic model is bound to dyadic relationships preceded by the delegation of decision-making authority and builds on a contractually created (vertical) hierarchy. As a result, horizontal dynamics within the negotiation team, which play an important role in negotiation success, are not considered in the investigation of agency problems. Also, the trade-off between short-term relationships within the negotiation sphere and the long-term relationships of the corporate sphere calls for a multi-period perspective taking into account the sphere-specific governance mechanisms already established (i.e., reward and monitoring systems). Within the analysis, the implementation of bounded rationality is closely related to findings from BDR, assessing the impact of negotiation behavior on the underlying principal-agent relationships. As empirical findings show, the disclosure and reservation of information to the agent affect his negotiation behavior as well as the final negotiation outcomes.
Last, in the context of business negotiations, asymmetric information is often intended by decision-makers acting as superior agents or principals, which calls for a bilateral risk approach to agency relations.
Keywords: business negotiations, agency-theory, negotiation analysis, inter-team negotiations
Procedia PDF Downloads 139
170 Phage Therapy of Staphylococcal Pyoderma in Dogs
Authors: Jiri Nepereny, Vladimir Vrzal
Abstract:
Staphylococcus intermedius/pseudintermedius bacteria are commonly found on the skin of healthy dogs and can cause pruritic skin diseases under certain circumstances (trauma, allergy, immunodeficiency, ectoparasitosis, endocrinological diseases, glucocorticoid therapy, etc.). These can develop into complicated superficial or deep pyoderma, which represents a large group of problematic skin diseases in dogs. These are predominantly inflammations of a secondary nature, associated with the occurrence of coagulase-positive Staphylococcus spp. A major problem is increased itching, which greatly complicates the healing process. The aim of this work is to verify the efficacy of the developed preparation Bacteriophage SI (Staphylococcus intermedius). The tested preparation contains a lysate of bacterial cells of the S. intermedius host culture, including the culture medium, and live virions of the specific phage. Sodium merthiolate is added as a preservative in a safe concentration. The efficacy of the product was validated by monitoring the therapeutic effect after application in indicated cases from clinical practice. The indication for inclusion of a patient in the trial was an adequate history and clinical examination, accompanied by sample collection for bacteriological examination and isolation of the specific causative agent. Isolate identification was performed with the bioMérieux API identification system (API ID 32 STAPH) and rep-PCR typing. The suitability of therapy for a specific case was confirmed by in vitro testing of the ability of the bacteriophage to lyse the specific isolate, i.e., the formation of specific plaques on a culture of the isolate on the surface of a solid culture medium. So far, a total of 32 dogs of different sexes, ages and breeds with different symptoms of staphylococcal dermatitis have been included in the testing. Their previous therapy consisted of more or less successful systemic or local application of broad-spectrum antibiotics. The presence of S. intermedius/pseudintermedius was demonstrated in 26 cases. The isolates were identified as S. pseudintermedius in all cases. Contaminant bacterial microflora was always present in the examined samples. The test product was applied subcutaneously in gradually increasing doses over a period of 1 month. After improvement of the health status, maintenance therapy followed, with application of the product once a week for 3 months. Adverse effects associated with the administration of the product (swelling at the site of application) occurred in only 2 cases. In all cases, there was a significant reduction in clinical signs (healing of skin lesions and reduction of inflammation) after therapy and an improvement in the well-being of the treated animals. A major problem in the treatment of pyoderma is the frequent resistance of the causative agents to antibiotics, especially the increasing frequency of multidrug-resistant and methicillin-resistant S. pseudintermedius (MRSP) strains. A specific phagolysate used for the therapy of these diseases could solve this problem and, to some extent, replace or reduce the use of antibiotics, whose frequent and widespread application often leads to the emergence of resistance. The advantages of the therapeutic use of bacteriophages are their bactericidal effect, high specificity and safety. This work was supported by Project FV40213 from the Ministry of Industry and Trade, Czech Republic.
Keywords: bacteriophage, pyoderma, Staphylococcus spp., therapy
Procedia PDF Downloads 170
169 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sports and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be of great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear frame from the bottom (micro level) to the top (macro level) of the hierarchies of the soft tissue may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and patellar tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model was then implemented into a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 for the patellofemoral ligaments and 0.92 for the patellar tendon. The 'best' prediction of the yield strength and strain compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril equaled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage to the patellar tendon happened in the middle of the structure. These predicted findings showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Also, damage initiation and propagation were documented with this model and were in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt compared with certain published studies.
Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
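The abstract above invokes the rule of mixtures to pass from fibril to tissue level, without giving its exact form. As a hedged illustration, the standard rule-of-mixtures statement for a fibril-reinforced composite is:

$$\sigma_{\text{tissue}} = \phi_f\,\sigma_{\text{fibril}} + \left(1 - \phi_f\right)\sigma_{\text{matrix}},$$

where $\phi_f$ is the fibril volume fraction. In the framework described, the fibril stress $\sigma_{\text{fibril}}$ would additionally be capped by a yield strength that depends on the tropocollagen cross-link density; the exact coupling used by the authors is not stated in the abstract.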
Procedia PDF Downloads 127
168 Female Subjectivity in William Faulkner's Light in August
Authors: Azza Zagouani
Abstract:
Introduction: In the work of William Faulkner, characters often evade the boundaries and categories of patriarchal standards of order. Female characters like Lena Grove and Joanna Burden cross thresholds in attempts to gain liberation, while others fail to do so. They stand as non-conformists and refuse established patterns of feminine behavior, such as marriage and motherhood. They refute submissiveness, domesticity and abstinence to reshape their own identities. The presence of independent and creative women represents new, unconventional images of female subjectivity. This paper will examine the structures of submission and oppression faced by Lena and Joanna, and will show how, in the end, they reshape themselves and their identities, and disrupt or even destroy patriarchal structures. Objectives: Participants will understand through the examples of Lena Grove and Joanna Burden that female subjectivities are constructions, and are constantly subject to change. Approaches: Two approaches will be used in the analysis of the subjectivity formation of Lena Grove and Joanna Burden. First, following the arguments propounded by Judith Butler, we explore the ways in which Lena Grove maneuvers around the restrictions and limitations imposed on her without any physical or psychological violence. She does this by properly performing the roles prescribed to her gendered body. Her repetitious performances of these roles are both the ones constructed to confine women and the vehicle for her travel. Her performance parodies the prescriptive roles and thereby reveals that they are cultural constructions. Second, we explore the argument propounded by Kristeva that subjectivity is always in a state of development because we are always changing in context with changing circumstances. For example, in Light in August, Lena Grove changes the way she defines herself in light of the events of the novel. Kristeva also describes stages of development: the semiotic stage and the symbolic stage. In Light in August, Joanna shows different levels of subjectivity as time passes. Early in the novel, Joanna is very connected to her upbringing. This suggests Kristeva's concept of the semiotic, in which the daughter identifies closely with her parents. Kristeva relates the semiotic to a strong daughter/mother connection, but in the novel it is a strong daughter/father/grandfather identification instead. Then, as Joanna becomes sexually involved with Joe, she breaks off and seems to go into an identity crisis. In our reading, this represents Kristeva's move from the semiotic to the symbolic. When Joanna returns to a religious fanaticism, she is returning to a semiotic state. Detailed outline: At the outset of this paper, we investigate the subjugation of women: social constraints, and the formation of feminine identity in Light in August. Then, through the examples of Lena Grove's attempt to cross the boundaries of community moralities and Joanna Burden's refusal to submit to the standards of submissiveness, domesticity, and abstinence, we reveal the tension between progressive conceptions of individual freedom and the social constraints that limit this freedom. In the second part of the paper, we underscore the rhetoric of femininity in Light in August: subjugation through naming. The implications of both females' names offer a powerful contrast between two different forms of subjectivity. Conclusion: Through Faulkner's novel, we demonstrate that female subjectivity is an open-ended issue. The spiral shaping of its form maintains its characteristics as a process changing according to different circumstances.
Keywords: female subjectivity, Faulkner's Light in August, gender, sexuality, diversity
Procedia PDF Downloads 395
167 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries
Authors: Kelly S. Parkinson, Katherine Y. Teh-White
Abstract:
Futureye has a bespoke social licence to operate (SLO) methodology which has successfully secured community approval and commercial return for fisheries that have faced regulatory and financial risk. This unique approach to fisheries management focuses on delivering improved social and environmental outcomes to help the fishing industry take steps towards achieving the United Nations SDGs. An SLO is the community's implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps operators anticipate and measure community concerns around their operations, leading to more predictable and sensible policy outcomes that will not jeopardise commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage of its supply chain. Futureye applied its tested SLO methodology to help Austral Fisheries, which was being attacked by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirates were making the overall catch unsustainable. Austral also wanted to be carbon neutral. SLO provides a lens on risk that helps industries and companies act before regulatory and political risk escalates. To do this assessment, the methodology assesses risk and then translates it into a process to create a strategy: 1) Audience: we understand the drivers of change and the transmission of those drivers across all audience segments. 2) Expectation: we understand the level of social norming of changing expectations. 3) Outrage: we understand the technical and perceptual aspects of risk and the opportunities to mitigate these. 4) Inter-relationships: we understand the political, regulatory and reputation system so that we can understand the levers of change. 5) Strategy: we understand whether the strategy will achieve a social licence by bringing internal and external stakeholders on the journey. Futureye's SLO methodology helped Austral understand risks and opportunities to enhance its resilience. Futureye reviewed the issues, assessed outrage and materiality, and mapped SLO threats to the company. Austral was introduced to a new way it could manage activism, climate action, and responsible consumption. As a result of Futureye's work, Austral worked closely with Sea Shepherd, which was campaigning against pirates illegally fishing Patagonian Toothfish, as well as with international governments. In 2016 Austral launched the world's first carbon neutral fish, which won Austral a thirteen percent premium at tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy and carbon neutral. Austral's position as a leader in sustainable development has opened doors for retailers all over the world. Futureye's SLO methodology can identify the societal, political and regulatory risks facing fisheries and position them to proactively address the issues and become industry leaders in sustainability.
Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development
Procedia PDF Downloads 119
166 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for analyzing them in terms of different tasks, essential. We are developing the «MRDiffusionImaging» software in standard C++. The subject part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package processes diffusion magnetic resonance imaging (dMRI) data and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is interpreted geometrically using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: deterministic (fiber assignment by continuous tracking) and probabilistic, using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analyzing diffusion data and for including them in radiotherapy planning and in evaluating its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
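As an illustration of the quantitative maps mentioned above, mean diffusivity and fractional anisotropy can be computed per voxel from the eigenvalues of the diffusion tensor. These are standard formulas, sketched here in Python with a synthetic tensor rather than the package's C++ implementation.

```python
import numpy as np

def md_and_fa(tensor):
    # Mean diffusivity and fractional anisotropy from a 3x3 diffusion tensor.
    eigvals = np.linalg.eigvalsh(tensor)        # lambda_1, lambda_2, lambda_3
    md = eigvals.mean()
    num = np.sqrt(((eigvals - md) ** 2).sum())
    den = np.sqrt((eigvals ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# Example voxel tensor in mm^2/s (synthetic, strongly anisotropic)
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(md_and_fa(D))   # MD ~0.77e-3 mm^2/s, FA ~0.80
```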
Procedia PDF Downloads 84
165 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction or decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne "Kato" sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
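The abstract reports that a wide variety of machine learning classifiers reached >95% accuracy; the snippet below sketches one plausible setup, a random forest on per-pixel reflectance spectra. The random arrays are stand-ins for where the real 472-band spectra and stem-map labels would be loaded, so the printed accuracy is meaningless here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical arrays: one 472-band reflectance spectrum per labeled pixel,
# with species labels taken from a geo-referenced stem map.
rng = np.random.default_rng(0)
spectra = rng.random((500, 472))          # stand-in for real reflectance data
species = rng.integers(0, 4, size=500)    # stand-in for stem-map species labels

X_train, X_test, y_train, y_test = train_test_split(
    spectra, species, test_size=0.25, stratify=species, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```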
Procedia PDF Downloads 124
164 Illness-Related PTSD Among Type 1 Diabetes Patients
Authors: Omer Zvi Shaked, Amir Tirosh
Abstract:
Type 1 diabetes (T1DM) is an incurable chronic illness with no known preventive measures. Excess insulin therapy can lead to hypoglycemia, with neuroglycopenic symptoms such as shakiness, nausea, sweating, irritability, fatigue, excessive thirst or hunger, weakness, seizure, and coma. Severe hypoglycemia (SH) is also considered a most aversive event, since it may put patients at risk of injury and death, which matches the criteria of a traumatic event. SH has a prevalence of around 20%, which makes it a primary medical issue. One of the results of SH is an intense emotional fear reaction resembling post-traumatic stress symptoms (PTS), causing many patients to avoid insulin therapy and social activities in order to avoid the possibility of hypoglycemia. As a result, they are at risk of irreversible health deterioration and medical complications. Fear of hypoglycemia (FOH) is, therefore, a major disturbance for T1DM patients. FOH differs from prevalent post-traumatic stress reactions to other forms of traumatic events, since the threat to life continuously exists in the patient's body. That is, it is highly probable that orthodox interventions are not sufficient to help patients after SH regain healthy social function and proper medical treatment. Accordingly, the current presentation will demonstrate the results of a study conducted among T1DM patients after SH. The study was designed in two stages. First, a preliminary qualitative phenomenological study among ten patients after SH was conducted. Analysis revealed that after SH, patients confuse stress symptoms with hypoglycemia symptoms, divide life into before and after the event, report a constant sense of fear, a loss of freedom, a significant decrease in social functioning, a catastrophic thinking pattern, a dichotomous split between the self and the body, an internalization of illness identity, a loss of internal locus of control, a damaged self-representation, and severe loneliness for never being understood by others. The second stage was a two-step intervention study among five patients after SH. The first part of the intervention included three months of third-wave CBT. The contents of the therapeutic process were: acceptance of fear and tolerance of stress; cognitive defusion combined with emotional self-regulation; the adoption of an active position relying on personal values; and self-compassion. The intervention then included one week of practical, real-time 24/7 support by trained medical personnel, alongside gradual exposure to increased insulin therapy in a protected environment. The results of the intervention are a decrease in stress symptoms, increased social functioning, increased well-being, and decreased avoidance of medical treatment. The presentation will discuss the unique emotional state of T1DM patients after SH, and then the effectiveness of the intervention for patients with chronic conditions after a traumatic event. The presentation will make evident the unique situation of illness-related PTSD, and will also demonstrate the need for multi-professional collaboration between social work and medical care for populations with chronic medical conditions. Limitations of the study and recommendations for further research will be discussed.
Keywords: type 1 diabetes, chronic illness, post-traumatic stress, illness-related PTSD
Procedia PDF Downloads 176
163 Analysis of Potential Associations of Single Nucleotide Polymorphisms in Patients with Schizophrenia Spectrum Disorders
Authors: Tatiana Butkova, Nikolai Kibrik, Kristina Malsagova, Alexander Izotov, Alexander Stepanov, Anna Kaysheva
Abstract:
Relevance. The genetic risk of developing schizophrenia is determined by two factors: single nucleotide polymorphisms and gene copy number variations. The search for serological markers for early diagnosis of schizophrenia is driven by the fact that the first five years of the disease are accompanied by significant biological, psychological, and social changes. It is during this period that pathological processes are most amenable to correction. The aim of this study was to analyze single nucleotide polymorphisms (SNPs) that are hypothesized to potentially influence the onset and development of the endogenous process. Materials and Methods. A total of 73 single nucleotide polymorphism variants were analyzed. The study included 48 patients undergoing inpatient treatment at Psychiatric Clinical Hospital No. 1 in Moscow, comprising 23 females and 25 males. Inclusion criteria: patients aged 18 and above; diagnosis according to ICD-10 (F20.0, F20.2, F20.8, F21.8, F25.1, F25.2); voluntary informed consent. Exclusion criteria: concurrent somatic or neurological pathology, neuroinfections, epilepsy, organic central nervous system damage of any etiology, or regular use of medication; substance abuse and alcohol dependence; pregnancy or breastfeeding. Clinical and psychopathological assessment was complemented by psychometric evaluation using the PANSS scale at the beginning and end of treatment. The duration of observation during therapy was 4-6 weeks. Total DNA extraction was performed using the QIAamp DNA kit. Blood samples were processed on the Illumina HiScan and genotyped for 652,297 markers on the Infinium Global Screening Array-24 v2.0, with imputation performed using the IMPUTE2 program with parameters Ne=20,000 and k=90. Additional filtration was performed based on INFO>0.5 and genotype probability>0.5. Quality control of the obtained DNA was conducted using agarose gel electrophoresis, with each tested sample having a volume of 100 µL. Results. Several SNPs exhibited sex dependence: we identified groups of single nucleotide polymorphisms with a membership of 80% or more in either female or male patients, namely rs2661319, rs2842030, rs4606, rs11868035, rs518147, rs5993883, and rs6269. Another noteworthy finding was the limited combination of SNPs sufficient to manifest clinical symptoms leading to hospitalization: among all 48 patients, each analyzed for deviations in 73 SNPs, the combination of SNPs involved in the manifestation of pronounced clinical symptoms of schizophrenia was 19±3 out of 73 possible. The frequency of occurrence of single nucleotide polymorphisms also varied; the most frequently observed SNPs were rs4849127 (in 90% of cases), rs1150226 (86%), rs1414334 (75%), rs10170310 (73%), rs2857657, and rs4436578 (71%). Conclusion. The results of this study provide additional evidence that these genes may be associated with the development of schizophrenia spectrum disorders. However, we cannot rule out the hypothesis that these polymorphisms are in linkage disequilibrium with other functionally significant polymorphisms that may actually be involved in schizophrenia spectrum disorders. It has been shown that missense SNPs by themselves are likely not causative of the disease but are in strong linkage disequilibrium with non-functional SNPs that may indeed contribute to disease predisposition.
Keywords: gene polymorphisms, genotyping, single nucleotide polymorphisms, schizophrenia
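A minimal sketch of the kind of frequency tabulation reported in the Results: given a patient-by-SNP deviation matrix, compute each SNP's carrier frequency and flag SNPs whose carriers are 80% or more of one sex. The tiny data frame is hypothetical; the study's actual pipeline is not described at this level of detail.

```python
import pandas as pd

# Hypothetical 0/1 matrix: rows are patients, columns mark whether a given
# SNP deviation is present; 'sex' distinguishes female/male patients.
df = pd.DataFrame({
    "sex":       ["F", "M", "F", "M", "F"],
    "rs4849127": [1, 1, 1, 0, 1],
    "rs1414334": [1, 0, 1, 1, 0],
})

snp_cols = [c for c in df.columns if c.startswith("rs")]
carrier_freq = df[snp_cols].mean()                 # fraction of patients carrying each SNP
female_share = (df[snp_cols].multiply(df["sex"].eq("F"), axis=0).sum()
                / df[snp_cols].sum())              # fraction of carriers who are female
sex_skewed = female_share[(female_share >= 0.8) | (female_share <= 0.2)]
print(carrier_freq, sex_skewed, sep="\n")
```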
Procedia PDF Downloads 78
162 Dietary Diversification and Nutritional Education: A Strategy to Improve Child Food Security Status in the Rural Mozambique
Authors: Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis
Abstract:
Nutrient deficiencies due to diets that are poor in both quantity and quality are prevalent throughout the developing world, especially in sub-Saharan Africa. Children and women of childbearing age are especially vulnerable. Limited availability, access, and intake of animal foods at home, and a lack of knowledge about their value in the diet and the role they play in health, contribute to poor diet quality. The poor bioavailability of micronutrients in diets based on foods high in fiber and phytates, and the low content of some micronutrients in these foods, are further factors to consider. Goats are deeply embedded in almost every sub-Saharan African rural culture, generally kept for their milk, meat, hair or leather. Goats have played an important role in African social life, especially in food security. Goat meat has good properties for human wellbeing, with a special role in lower-income households. It has high-quality protein (20 g protein/100 g meat) including all essential amino acids, a good unsaturated/saturated fatty acid ratio, and it is an important source of B vitamins with high micronutrient bioavailability. Mozambique has major food security problems, with poor food access and utilization, undiversified diets, chronic poverty and child malnutrition. Our objective was to design a nutritional intervention based on dietary diversification, nutritional education, cultural beliefs and local resources, aimed at strengthening the food security of children at Barrio Broma village (15°43'58.78"S; 32°46'7.27"E) in Chitima, Mozambique. Two surveys were conducted: first of socio-productive local databases, and then of 100 rural households, covering livelihoods, food diversity and anthropometric measurements in children under 5 years. Our results indicate that the main economic activity is goat production, based on a native breed with two deliveries per year in the absence of any management. Adult goats weighed 27.2±10.5 kg and reached a height of 63.5±3.8 cm. The data showed high levels of poverty, with a food diversity score of 2.3 (on a 0-12 point scale), where only 30% of households consume protein and 13% consume iron, zinc, and vitamin B12. The main constraints to food security were poor access to water and low income to buy food. Our dietary intervention was based on improving diet quality by increasing access to dried goat meat, fresh vegetables, and legumes, and improving their utilization through a nutritional education program. This proposal was based on local culture and living conditions, characterized by the absence of electric power and drinkable water. The proposed drying process would preserve the food under local conditions, guaranteeing food safety for a longer period. Additionally, an ancient local drying technique was rescued and used. Moreover, this kind of dietary intervention would be an efficient way to improve infant nutrition by delivering macro- and micronutrients on time to these vulnerable populations.
Keywords: child malnutrition, dietary diversification, food security, goat meat
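Assuming the reported food diversity score follows the common 12-group household dietary diversity score (HDDS) convention, it can be computed as a simple count of food groups consumed, as in this sketch; the group names and example household are illustrative, not the survey instrument.

```python
# Hypothetical food-group checklist for one household (1 = consumed in the
# recall period); the 12 groups follow the common HDDS convention.
FOOD_GROUPS = ["cereals", "roots_tubers", "vegetables", "fruits", "meat",
               "eggs", "fish", "legumes", "dairy", "fats_oils",
               "sugar_honey", "miscellaneous"]

def hdds(consumed: dict) -> int:
    """Household dietary diversity score: number of groups consumed (0-12)."""
    return sum(1 for g in FOOD_GROUPS if consumed.get(g, 0))

household = {"cereals": 1, "roots_tubers": 1, "vegetables": 1}
print(hdds(household))  # -> 3, near the 2.3 average reported in the survey
```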
Procedia PDF Downloads 301
161 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems
Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele
Abstract:
The construction industry accounts for one-third of all waste generated in European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal and social aspects and provides business models for circular designing and building with bio-based materials. Within the scope of the project, the research outputs are displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL), located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, based on three basic construction methods: masonry, lightweight steel construction, and wood framing construction supplemented with bio-based construction methods like cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in a first iteration, and the PDs as assemblies of components were then further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted. It was seen that masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab
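A minimal sketch of the component-level bookkeeping behind such an LCA: the impact per square meter of component is the mass-weighted sum of database impact factors, and the weight-impact relationship across design options can be checked with a simple correlation. All masses and factors below are illustrative placeholders, not the study's inventory data.

```python
import numpy as np

# Hypothetical inventory for one facade option: material masses (kg) per m^2
# of component and GWP factors (kg CO2-eq per kg) from a generic database.
masses = {"brick": 180.0, "mortar": 25.0, "insulation": 3.0}      # kg/m^2
gwp_factors = {"brick": 0.24, "mortar": 0.19, "insulation": 2.5}  # kg CO2-eq/kg

gwp_per_m2 = sum(masses[m] * gwp_factors[m] for m in masses)
total_weight = sum(masses.values())
print(f"{total_weight:.0f} kg/m^2, {gwp_per_m2:.1f} kg CO2-eq/m^2")

# Correlation between component weight and impact across several options
# (masonry, steel, wood framing, CLT); values are purely illustrative.
weights = np.array([208.0, 45.0, 50.0, 70.0])   # kg/m^2
impacts = np.array([55.4, 48.0, 20.0, 25.0])    # kg CO2-eq/m^2
print("Pearson r:", np.corrcoef(weights, impacts)[0, 1])
```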
Procedia PDF Downloads 182
160 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective
Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh
Abstract:
The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, where there exists competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward the farmers set their production quantities. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions regarding their health and environmental benefits. Then, profit functions are modeled that account for characteristics of organic farming, including the crop yield gap and the organic cost factor. Our research also considers both economies and diseconomies of scale in farming production, as well as the effects of an organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled under Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure. In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, the organic cost factor, the organic subsidy, and the percentage price premium of the organic product, on the farmers' and retailer's optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model increases the production and decreases the price of organic produce, which in turn motivates the consumption of organic products in the market. The proposed coordination model also helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains. Our paper shows that although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the method of allocating the subsidy between the organic farmer and the retailer is of lesser importance.
Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain
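To illustrate the retailer-led Stackelberg logic, the sketch below solves a stylized single-farmer version by backward induction with sympy: the farmer's quantity reaction to the wholesale price is derived first, then the retailer optimizes against it. This is a simplified stand-in with an assumed quadratic production cost, not the paper's two-farmer model with asymmetric brand equity.

```python
import sympy as sp

q, w, a, b, c, k = sp.symbols("q w a b c k", positive=True)

# Follower (farmer): chooses quantity q given the wholesale price w;
# k q**2 captures diseconomies of scale in production.
profit_farmer = (w - c) * q - k * q**2
q_star = sp.solve(sp.diff(profit_farmer, q), q)[0]      # reaction: (w - c)/(2k)

# Leader (retailer): sets w anticipating the farmer's reaction q*(w),
# selling to consumers at the inverse demand price a - b q.
price = a - b * q_star
profit_retailer = (price - w) * q_star
w_star = sp.solve(sp.diff(profit_retailer, w), w)[0]

print("w* =", sp.simplify(w_star))
print("q* =", sp.simplify(q_star.subs(w, w_star)))
```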
Procedia PDF Downloads 109
159 Challenges and Proposals for Public Policies Aimed At Increasing Energy Efficiency in Low-Income Communities in Brazil: A Multi-Criteria Approach
Authors: Anna Carolina De Paula Sermarini, Rodrigo Flora Calili
Abstract:
Energy efficiency (EE) needs investment, new technologies, greater awareness and management on the part of citizens and organizations, and more planning. However, this issue is usually remembered and discussed only in moments of energy crisis, and opportunities are missed to take better advantage of the potential of EE in the various sectors of the economy. In addition, there is little concern about the subject among the less favored classes, especially in low-income communities. Accordingly, this article presents suggestions for public policies that aim to increase EE in low-income housing and communities, based on international and national experiences. After a literature review, eight policies were listed, and to evaluate them, a multicriteria decision model was developed using the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) methods, combined with fuzzy logic. Nine experts analyzed the policies according to nine criteria: economic impact, social impact, environmental impact, previous experience, difficulty of implementation, possibility/ease of monitoring and evaluating the policies, expected impact, political risks, and public governance and sustainability of the sector. The results, in order of preference, are: (i) an incentive program for equipment replacement; (ii) a community awareness program; (iii) an EE program with a greater focus on low income; (iv) staggered and compulsory certification of social-interest buildings; (v) programs for the expansion of smart metering, energy monitoring and digitalization; (vi) a financing program for construction and retrofitting of houses with an emphasis on EE; (vii) an income tax deduction for companies investing in EE projects in low-income households; (viii) white certificates of energy for low income. First, the policy of equipment substitution has been employed in Brazil and around the world and has proven effective in promoting EE. Its implementation requires efforts from the federal and state governments, which can encourage companies to reduce prices and provide some type of aid for the purchase of such equipment. In second place is the community awareness program, promoting socio-educational actions on EE concepts together with energy conservation tips. This policy is simple to implement and has already been used by many distribution utilities in Brazil. It can be carried out through bids defined by the government in specific areas and executed by third-sector companies with public and private resources. Third on the list is the proposal to continue the Energy Efficiency Program (which obliges electric energy companies to allocate resources for research in the area), suggesting the return of the mandatory investment of 60% of the resources in projects for low-income consumers. It is also relatively simple to implement, requiring efforts by the federal government to make it mandatory and compliance on the part of the distributors. The success of these suggestions depends on changes to the established rules and on efforts from the interested parties. For future work, we suggest the development of pilot projects in low-income communities in Brazil and the application of other multicriteria decision support methods to compare the results obtained in this study.
Keywords: energy efficiency, low-income community, public policy, multicriteria decision making
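A crisp (non-fuzzy) sketch of the TOPSIS step used to rank the policies: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The matrix, weights, and benefit-only orientation are illustrative assumptions; the study additionally combined AHP-derived weights with fuzzy logic.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate policies, columns = criteria
# scores (already oriented so that higher is better), with expert weights.
X = np.array([[7.0, 8.0, 6.0],
              [6.0, 7.0, 8.0],
              [8.0, 5.0, 7.0]])
weights = np.array([0.5, 0.3, 0.2])

norm = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalization
V = norm * weights                            # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal and anti-ideal solutions
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)           # relative closeness to the ideal
print(np.argsort(closeness)[::-1])            # policy indices, best to worst
```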
Procedia PDF Downloads 116
158 Superhydrophobic Materials: A Promising Way to Enhance Resilience of Electric System
Authors: M. Balordi, G. Santucci de Magistris, F. Pini, P. Marcacci
Abstract:
The increase in extreme meteorological events represents one of the most important causes of damage and blackouts across the electric system. In particular, icing on ground wires and overhead lines, due to snowstorms or harsh winter conditions, very often gives rise to the collapse of cables and towers, both in cold and warm climates. On the other hand, high concentrations of contaminants in the air, due to natural and/or anthropic causes, are reflected in high levels of pollutants layered on glass and ceramic insulators, causing frequent and unpredictable flashover events. Overhead line and insulator failures lead to blackouts, dangerous and expensive maintenance, and serious inefficiencies in the distribution service. Imparting superhydrophobic (SHP) properties to conductors, ground wires and insulators is one of the ways to face all these problems. Indeed, in some cases, an SHP surface can delay the ice nucleation time and decrease the ice nucleation temperature, preventing ice formation. Besides, thanks to the low surface energy, the adhesion force between ice and a superhydrophobic material is low, and the ice can be easily detached from the surface. Moreover, it is well known that superhydrophobic surfaces can have self-cleaning properties: these hinder the deposition of pollution and decrease the probability of flashover phenomena. Here we present three studies to impart superhydrophobicity to aluminum, zinc and glass specimens, which represent the main constituent materials of conductors, ground wires and insulators, respectively. The route to impart superhydrophobicity to the metallic surfaces can be summarized as a three-step process: 1) sandblasting treatment, 2) chemical-hydrothermal treatment and 3) coating deposition. The first step is required to create a micro-roughness. In the chemical-hydrothermal treatment, a nano-scale metallic oxide (Al or Zn) is grown which, together with the sandblasted micro-roughness, brings about a hierarchical micro-nano structure. Depositing an alkylated or fluorinated siloxane coating then lowers the surface energy and gives rise to superhydrophobic surfaces. To functionalize the glass, different superhydrophobic powders, obtained by sol-gel synthesis, were prepared. The specimens were then covered with a commercial primer and the powders were deposited on them. All the resulting metallic and glass surfaces showed noticeable superhydrophobic behavior, with very high water contact angles (>150°) and very low roll-off angles (<5°). The three optimized processes are fast, cheap and safe, and can be easily replicated on industrial scales. The anti-icing and self-cleaning properties of the surfaces were assessed with several indoor lab tests, which evidenced remarkable anti-icing properties and self-cleaning behavior with respect to the bare materials. Finally, to evaluate the anti-snow properties of the samples, some SHP specimens were exposed to real snowfall events in the RSE outdoor test facility located in Vinadio, in the western Alps: the coated samples delayed the formation of snow sleeves and facilitated the detachment of the snow. The good results in both indoor and outdoor tests make these materials promising for further development in large-scale applications.
Keywords: superhydrophobic coatings, anti-icing, self-cleaning, anti-snow, overhead lines
Procedia PDF Downloads 182
157 Impact of Water Interventions under WASH Program in the South-west Coastal Region of Bangladesh
Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman
Abstract:
This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the south-west coastal region of Bangladesh, namely Koyra in Khulna district and Shyamnagar in Satkhira district. Being an explanatory study, a total of 200 households, selected by applying a random sampling technique, were interviewed using a structured interview schedule. The predicted probabilities suggest that around 62 percent of households lack year-round access to safe drinking water, and only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Moreover, the majority (78 percent) of households do not meet both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters (coefficient 0.01 at p < 0.1) of safe drinking water consumption per person per year. As expected, lining-up time has a significant negative relationship with the dependent variables; that is, for higher lining-up times, the probability of access becomes lower for both the SPHERE standard and year-round access variables. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity; correspondingly, the OLS results show that for a one-minute increase in lining-up time, water consumption decreases by around 8 liters. Considering access to a deep tube well (DTW) as the reference dummy in the OLR, households under pond sand filter (PSF), shallow tube well (STW), reverse osmosis (RO) and rainwater harvesting system (RWHS) interventions are respectively 37 percent, 29 percent, 61 percent and 27 percent less likely to ensure year-round access to water. In terms of health impact, waterborne diseases such as diarrhea, cholera, and typhoid are common among the coastal community, caused by microbial impurities (i.e., bacteria and protozoa). High turbidity and TDS in pond water, caused by reduced water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, creating health hazards. Meanwhile, excessive growth of algae in pond water, caused by excessive nitrate in drinking water, adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions needs to be increased at a reasonable distance, preferably within half a kilometer of the dwelling place, with community people involved in the installation process; collectively owned water interventions were found to be more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide future investment in domestic water supply.
Keywords: access, impact, safe drinking water, Sphere standard, water interventions
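A sketch of the OLS specification implied by the reported coefficients, using statsmodels on synthetic data whose effect sizes are deliberately seeded to echo the abstract (about 11 liters per BDT 1,000 of income, minus 8 liters per minute of lining-up time, minus 93 liters per added member); the real survey data would replace the synthetic frame.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the 200-household survey data.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "income_bdt": rng.normal(10000, 3000, 200),   # monthly income
    "lineup_min": rng.uniform(0, 60, 200),        # lining-up time
    "hh_size":    rng.integers(2, 9, 200),        # household members
})
df["litres_pp_year"] = (400 + 0.011 * df["income_bdt"]
                        - 8 * df["lineup_min"] - 93 * df["hh_size"]
                        + rng.normal(0, 50, 200))

X = sm.add_constant(df[["income_bdt", "lineup_min", "hh_size"]])
model = sm.OLS(df["litres_pp_year"], X).fit()
print(model.summary())   # coefficients recover the seeded effects
```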
Procedia PDF Downloads 218
156 Optimized Processing of Neural Sensory Information with Unwanted Artifacts
Authors: John Lachapelle
Abstract:
Introduction: Neural stimulation is increasingly targeted toward the treatment of back pain, PTSD, and Parkinson's disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on the noise efficiency factor (NEF). Conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and the neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dBV, resulting in the recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. Together, the techniques are like concentric moats protecting a castle; only the wanted neural signal can penetrate. There are two conditions in which the headstage operates: unwanted artifact < 50 mV, linear operation; and artifact > 50 mV, fast-settle gain-reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10 mV. (b) 60 Hz power line signals may be > 50 mV with poor electrode cable conditions; with careful routing, much of this signal is common to both reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) A stimulation signal (unwanted by the neural recorder) is attenuated on its way from the stimulation electrode to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o/(4πσr). For a 1 mA stimulation signal with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50 mV input, 50 times higher than typical headstage ASICs, with no increase in noise floor. This requires a careful balance of amplifier stages in the headstage ASIC, as well as consideration of the electrodes' effect on noise. The ASIC is designed to allow extremely small-signal extraction on low-impedance (< 10 kohm) electrodes, with the headstage ASIC noise floor configurable to < 700 nV/rt-Hz. Smaller high-impedance electrodes (> 100 kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 uV); the ASIC low-power mode conserves power with 2 uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated commercial neural amplifier saturation as a result of large environmental artifacts. The enhanced artifact-suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact-suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimizing headstage ASICs allows observation of neural signals in the presence of large artifacts that will be present in real-life implanted applications, and is targeted toward human implantation in the DARPA HAPTIX program.
Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors
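The attenuation estimate follows directly from the quoted point-source model Φ_m = I_o/(4πσr); a worked check is below, where the tissue conductivity value is an assumption (the abstract does not state one).

```python
import math

def electrode_potential(i_amps: float, sigma: float, r_m: float) -> float:
    """Point-source potential in a homogeneous volume conductor:
    phi = I / (4 * pi * sigma * r)."""
    return i_amps / (4.0 * math.pi * sigma * r_m)

# 1 mA stimulus, 1 cm electrode spacing; sigma ~0.5 S/m is an assumed
# tissue conductivity, not a value given in the abstract.
phi = electrode_potential(1e-3, 0.5, 0.01)
print(f"{phi * 1e3:.1f} mV")   # ~15.9 mV, consistent with the < 20 mV figure
```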
Procedia PDF Downloads 327
155 Azolla Pinnata as Promising Source for Animal Feed in India: An Experimental Study to Evaluate the Nutrient Enhancement Result of Feed
Authors: Roshni Raha, Karthikeyan S.
Abstract:
The world's largest livestock population resides in India. Existing strategies must be modified to increase the production of livestock and their by-products in order to meet the demands of the growing human population. Even though India leads the world in both milk production and the number of cows, average production per animal is low. This may be due to the animals' poor nutrition, caused by a chronic under-availability of high-quality fodder and feed. This article explores Azolla pinnata as a promising source of high-quality unconventional feed and fodder for effective livestock production and good-quality breeding in India. The article is an exploratory study using a literature survey and experimental analysis. In the realm of agri-biotechnology, Azolla sp. has gained attention for helping farmers achieve sustainability, having minimal land requirements, and serving as a feed element that does not compete with human food sources. It has a high methionine content and is a good source of protein. It can be easily digested, as its lignin content is low, and it is high in antioxidants and vitamins such as beta-carotene, vitamin A, and vitamin B12. Using this concept, the paper aims to investigate and develop a model for using azolla as a novel, high-potential feed source to combat the problems of low production and poor quality of animals in India. A representative sample of animal feed to which azolla has been added is collected. The sample is ground into a fine powder using a mortar. PITC (phenyl isothiocyanate) is added to derivatize the amino acids. The sample is analyzed using HPLC (high-performance liquid chromatography) to measure the amino acids and monitor the protein content of the sample feed. The amino acid measurements from HPLC are converted to milligrams per gram of protein using an amino acid profiling method via a set of calculations. The amino acid profile data are then used to validate the proximate results of nutrient enhancement from the inclusion of azolla in the sample. Based on the proximate composition of azolla meal, the enhancement results were higher than the standard values of normal fodder supplements, indicating that the feed is much richer and denser in nutrient supply. The azolla-supplemented feed thus proved to be a promising source of animal fodder. This would in turn lead to higher production and better-quality animals, helping to meet the economic demands of the growing Indian population. Azolla has no side effects and can be considered safe and effective for inclusion in animal feed. One area of future research could begin with an upstream scaling strategy for azolla in India, introducing several bioreactor types for its commercial production. Since Azolla sp. has been shown in this paper to be a promising source of high-quality animal feed and fodder, large-scale production of azolla would make the process quicker, more efficient and more accessible. Labor expenses would also be reduced by employing bioreactors for large-scale manufacturing.
Keywords: azolla, fodder, nutrient, protein
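A hypothetical worked example of the conversion step described above, expressing an HPLC amino acid measurement as milligrams per gram of protein; all numbers are illustrative, not the study's measurements.

```python
# Hypothetical worked example of expressing an HPLC amino acid measurement
# as mg of amino acid per g of protein, as done in amino acid profiling.
aa_mg_per_ml = 0.12   # amino acid in the hydrolysate (mg/mL, from HPLC)
volume_ml    = 10.0   # total hydrolysate volume
sample_g     = 0.50   # feed sample weighed into the hydrolysis tube
protein_frac = 0.22   # crude protein fraction of the feed (proximate analysis)

aa_mg     = aa_mg_per_ml * volume_ml      # total amino acid in the sample, mg
protein_g = sample_g * protein_frac       # protein in the sample, g
print(f"{aa_mg / protein_g:.1f} mg amino acid per g protein")  # -> 10.9
```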
Procedia PDF Downloads 53