Search results for: higher accounting education
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17016

96 Horizontal Cooperative Game Theory in Hotel Revenue Management

Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin

Abstract:

This research studies pricing strategy in a cooperative setting of a hotel duopoly selling a perishable product under a fixed capacity constraint, from the perspective of managers. In hotel revenue management, a competitor’s average room rate and occupancy rate should inform a manager’s pricing strategy for generating optimum revenue. This information is not provided by business intelligence tools and is not available on a competitor’s website; thus, Information Sharing (IS) among players might improve the performance of pricing strategies. IS is widely adopted in the logistics industry, but IS within the hospitality industry has not been well studied. This research treats IS as one of two cooperative game schemes, alongside the Mutual Price Setting (MPS) scheme. In the off-peak season, hotel managers arrange pricing strategies that offer promotion packages and various discounts of up to 60% of the full price to attract customers. A competitor selling a homogeneous product will react in kind, triggering a price war. A price war, which generates lower revenue, may be avoided by collaborating on pricing strategy to optimize the payoff for both players. In the MPS cooperative game, players collaborate to set a single room rate applied by both. Cooperative games may thus avoid the unfavorable payoffs caused by a price war. Research on horizontal cooperative games in logistics shows better performance and payoffs for the players; however, horizontal cooperative games in hotel revenue management have not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS and MPS), compared against models under a non-cooperative scheme. Each scheme has five models: a Capacity Allocation Model, a Demand Model, a Revenue Model, an Optimal Price Model, and an Equilibrium Price Model. The Capacity Allocation and Demand Models employ the hotel’s own and the competitor’s full and discounted prices as predictors under a non-linear relation.
The optimal price is obtained by assuming a revenue-maximization motive. The equilibrium price is found by interacting the hotel’s own and the competitor’s optimal prices through reaction equations, and the equilibrium is analyzed using a game-theoretic approach. This sequence applies to all three schemes, except that the MPS scheme instead aims to optimize the players’ total payoff. The case study to which the theoretical models are applied observes two hotels offering a homogeneous product in Indonesia over one year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. The case study data confirm that price behaves non-linearly within the demand model. The IS models represent the actual demand and revenue data better than the non-IS models; furthermore, IS enables hotels to earn significantly higher revenue. Thus, duopoly hotel players in general might have reasonable incentives to share information horizontally. During the off-peak season, the MPS models are able to predict the optimal equal price for both hotels. However, a Nash equilibrium may not always exist, depending on the actual payoffs of adhering to or betraying the mutual agreement. To optimize performance, a horizontal cooperative game may be chosen over a non-cooperative game. The mathematical models can also be used to detect collusion among business players, and empirical testing can serve as policy input for market regulators in preventing unethical business practices that potentially harm societal welfare.
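
The price interaction described above can be sketched with a toy linear-demand duopoly. The paper's actual models are non-linear and regression-fitted; the linear demand form and the coefficients a, b, c below are purely illustrative assumptions.

```python
# Toy Bertrand duopoly sketch (hypothetical coefficients, not the paper's
# fitted models). Demand for hotel i at its price p_i and rival price p_j:
#   d_i = a - b*p_i + c*p_j
a, b, c = 200.0, 1.0, 0.5          # illustrative; require c < b for stability

def best_response(p_rival):
    # Maximize revenue R_i = p_i * (a - b*p_i + c*p_rival):
    # dR/dp_i = a - 2*b*p_i + c*p_rival = 0
    return (a + c * p_rival) / (2 * b)

# Non-cooperative scheme: iterate the reaction equations to a fixed point.
p1 = p2 = 100.0
for _ in range(200):
    p1, p2 = best_response(p2), best_response(p1)

# Closed-form symmetric Nash price for comparison.
p_nash = a / (2 * b - c)

# MPS scheme: a single shared price maximizing total revenue
# 2 * p * (a - (b - c) * p), giving p = a / (2 * (b - c)).
p_mps = a / (2 * (b - c))
```

Under these assumptions the reaction functions converge to the Nash price, and the jointly set MPS price is higher, mirroring the abstract's point that mutual price setting can avoid the price-war outcome.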

Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting

Procedia PDF Downloads 288
95 Identifying Effective Strategies to Promote Vietnamese Fashion Brands in an Internationally Dominated Market

Authors: Lam Hong Lan, Gabor Sarlos

Abstract:

It is hard to find best practices in promotion for local fashion brands in Vietnam, as the industry is still very young. Local fashion start-ups have grown quickly in the last five years, thanks in part to the internet and social media. However, local designer-owners can face a huge challenge when competing with international brands in the Vietnamese market, and few local case studies are available for guidance. In response, this paper studied how local small- and medium-sized enterprises (SMEs) promote to their target customers in order to compete with international brands. Knowledge of both successful and unsuccessful approaches generated by this study is intended to contribute to the academic literature on local fashion in Vietnam and to help local designers learn from and improve their brand-building strategies. The primary study featured qualitative data collection via semi-structured in-depth interviews. Transcription and data analysis were conducted manually in order to identify success factors that local brands should consider as part of their promotion strategy. Purposive sampling of SMEs identified five designers in Ho Chi Minh City (the biggest city in Vietnam) and three designers in Hanoi (the second biggest) as interviewees. Participant attributes included: born in the 1980s or 1990s; familiar with the internet and social media; and designer-owner of a successful local fashion brand in the key middle-market and/or mass-market segments (which are crucial to the growth of local brands). A secondary study was conducted using social listening software to gather further qualitative data on what were considered successful or unsuccessful approaches to local fashion brand promotion on social media. Both the primary and secondary studies indicated that local designers had maximized their promotion budgets by using owned media and earned media instead of paid media.
Findings from the qualitative interviews indicate that the internet and social media have been used as effective promotion platforms by local fashion start-ups. Facebook and Instagram were the most popular social networks used by the SMEs interviewed, and these platforms were believed to offer a more affordable promotional strategy than traditional media such as TV and/or print advertising. Online stores were considered an important factor in helping the SMEs reach customers beyond the physical store. Furthermore, a successful online store allowed some SMEs to reduce their rental costs by maintaining their physical store in a cheaper, less central city area as opposed to a traditional city-center location. In addition, the comparatively small size of the SMEs allowed them to be more attentive to their customers, leading to higher customer satisfaction and return rates. In conclusion, this study found that these kinds of cost savings helped the SMEs interviewed to focus their scarce resources on producing unique, high-quality collections in order to differentiate themselves from international brands. Facebook and Instagram were the main platforms used for promotion and brand-building. The main challenge to this promotion strategy identified by the SMEs interviewed was to continue finding innovative ways to maximize the impact of a limited marketing budget.

Keywords: Vietnam, SMEs, fashion brands, promotion, marketing, social listening

Procedia PDF Downloads 124
94 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy

Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro

Abstract:

Asphaltic bitumen has been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations, and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This arises from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical and barrier properties. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute for the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD) and Atomic Force Microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when the bitumen was loaded with 10 wt.% Cloisite® 20A alone and high-shear blended for 20 minutes.
However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, inverting the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved powerful at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modifications that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.

Keywords: atomic force microscopy, bitumen, composite, isocyanate, montmorillonite

Procedia PDF Downloads 260
93 Benchmarking of Petroleum Tanker Discharge Operations at a Nigerian Coastal Terminal and Jetty Facilitates Optimization of the Ship–Shore Interface

Authors: Bassey O. Bassey

Abstract:

Benchmarking has progressively become entrenched as a requisite activity for process improvement and enhanced service delivery at petroleum jetties and terminals, especially during tanker discharge operations at the ship-shore interface, as avoidable delays result in extra operating costs, non-productive time, high demurrage payments and, ultimately, product scarcity. The jetty and terminal in focus had been operational for 3 and 8 years, respectively, with proper operational and logistic records maintained to evaluate their progress over time in order to plan and implement modifications and procedure reviews for greater technical and economic efficiency. Regular and emergency staff meetings were held at the team, departmental and company-wide levels to progressively address major challenges encountered during each operation. The process and outcome of the resulting collectively planned changes carried out over the past two years form the basis of this paper, which mirrors the initiatives effected to enhance operational and maintenance excellence at the affected facilities. Operational modifications included a second cargo receipt line designated for gasoline, product loss control at the jetty and shore ends, enhanced product recovery and quality control, and revival of terminal-jetty backloading operations. Logistic improvements were the incorporation of an internal logistics firm and shipping agency, fast-tracking of discharge procedures for tankers, optimization of the tank vessel selection process, and third-party product receipt and throughput.
Maintenance excellence was achieved through the construction of two new lay barges and refurbishment of the existing one; revamping of the existing booster pump and purchase of a modern one as reserve capacity; extension of Phase 1 of the jetty to accommodate two vessels and construction of Phase 2 for two more; regular inspection, draining, drying and replacement of cargo hoses; a corrosion management program for all process facilities; and an improved, properly planned and documented maintenance culture. Safety, environmental and security compliance were enhanced by installing state-of-the-art firefighting facilities and equipment, constructing a seawater intake line as backup for the borehole at the terminal, remediating the shoreline and marine structures, deploying modern spill containment equipment, improving housekeeping and accident prevention practices, and installing high-technology security enhancements, among others. Over the past two years, the end results have included improved tanker turnaround time, higher turnover on product sales, consistent product availability, greater indigenous human capacity utilisation by way of direct hires and contracts, as well as customer loyalty. The lessons learnt from this exercise would, therefore, serve as a model to be adapted by other operators of similar facilities, contractors, academics and consultants in a bid to deliver greater sustainability and profitability of operations at the ship-shore interface in this strategic industry.

Keywords: benchmarking, optimisation, petroleum jetty, petroleum terminal

Procedia PDF Downloads 363
92 A Basic Concept for Installing Cooling and Heating System Using Seawater Thermal Energy from the West Coast of Korea

Authors: Jun Byung Joon, Seo Seok Hyun, Lee Seo Young

Abstract:

As carbon dioxide emissions increase due to rapid industrialization and reckless development, abnormal climate events such as floods and droughts are occurring. In order to respond to such climate change, the use of fossil fuels is being reduced, and the proportion of eco-friendly renewable energy is gradually increasing. Korea is an energy-resource-poor country that depends on imports for 93% of its total energy. As the instability of the global energy supply chain experienced during the Russia-Ukraine crisis increases, countries around the world are resetting energy policies to minimize energy dependence and strengthen security. Seawater thermal energy is a renewable energy source that replaces conventional air-source heat energy. Because seawater has a higher specific heat than air, it can cool and heat the main spaces of buildings with greater heat-transfer efficiency, minimizing the consumption of electricity generated from fossil fuels and thereby minimizing carbon dioxide emissions. In addition, because only the temperature characteristics of the seawater are used, in a limited way, the effect on the marine environment is very small. K-water carried out a demonstration project supplying cooling and heating energy to spaces such as the central control room and presentation room of a tidal power plant's management building, drawing the heat source from seawater circulated through the plant's waterway. Compared to the East Sea and the South Sea, the west coast has a large tidal range and low seawater temperatures with a small temperature difference; the main system was designed in consideration of these characteristics, and its performance was verified through operation during the demonstration period. In addition, facility improvements were made for major deficiencies to strengthen monitoring functions, provide user convenience, and improve facility soundness.
To spread these achievements, a basic concept was developed to expand the seawater heating and cooling system to a scale of 200 USRT at the Tidal Culture Center. With the operational experience of the demonstration system, it will be possible to establish an optimal seawater-heat cooling and heating system suited to the characteristics of the west coast. Through this, operating costs can be reduced by KRW 33.31 million per year compared to air-source systems, and through industry-university-research joint research it is possible to localize major equipment and materials and develop key element technologies to revitalize the seawater-heat business and advance into overseas markets. Government efforts are needed to expand the seawater heating and cooling system. Seawater thermal energy utilizes only the thermal energy of a virtually unlimited seawater resource, and it has less impact on the environment than river-water thermal energy, which involves environmental disturbances such as bottom dredging, excavation, and sand or stone extraction. Therefore, it is necessary to accelerate project implementation by innovatively simplifying unnecessary licensing and permitting procedures. In addition, support should be provided to secure business feasibility by dramatically exempting fees for the use of public waters, to actively encourage development by the private sector.
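
The specific-heat advantage mentioned above can be illustrated with a back-of-envelope comparison. The figures below are typical textbook values, not K-water's design data, and the usable temperature change is an assumption.

```python
# Heat available per cubic metre of source fluid: Q = rho * c_p * dT.
# Property values are typical textbook figures (assumptions, not design data).
rho_sea, cp_sea = 1025.0, 3990.0   # seawater density (kg/m^3), specific heat (J/(kg*K))
rho_air, cp_air = 1.2, 1005.0      # air near 20 degrees C
dT = 5.0                           # assumed usable temperature change (K)

q_sea = rho_sea * cp_sea * dT      # J per m^3 of seawater
q_air = rho_air * cp_air * dT      # J per m^3 of air
ratio = q_sea / q_air              # seawater carries on the order of 1000x more heat per m^3
```

The volumetric comparison shows why a seawater source needs far less fluid throughput than an air source for the same heating or cooling duty.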

Keywords: seawater thermal energy, marine energy, tidal power plant, energy consumption

Procedia PDF Downloads 102
91 Structured-Ness and Contextual Retrieval Underlie Language Comprehension

Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo

Abstract:

While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses this issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement into an event, assuming that these verbs require eventive complements. However, on closer examination, examples like “Chapter 1 began the book” undermine this assumption. An alternative, the Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g. begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g. spatial, eventive, temporal, informational). The composition of an animate subject and an AspV, as in (1), engenders an ambiguity between an agentive reading along the eventive dimension, as in (2), and a constitutive reading along the informational/spatial dimension, as in (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…); (C) Constitutive-biasing, in which a constitutive meaning was hinted at (Larry owns collections of Renaissance literature.); or (N) Neutral context, which allowed both interpretations.
Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair on a 1~5 scale (5 = fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder in the Neutral condition than in the biasing conditions, because no contextual information is provided to resolve the ambiguity; comprehenders should also obtain the specific interpretation corresponding to the context type. Results: The (A) Agentive-biasing and (C) Constitutive-biasing conditions were rated higher than the (N) Neutral condition (p < .001), while all conditions were within the acceptable range (> 3.5 on the 1~5 scale). This suggests that when relevant contextual information is lacking, semantic ambiguity decreases comprehensibility. The interpretation task shows that participants selected the biased agentive/constitutive reading for conditions (A) and (C), respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: These findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that the semantic representation makes reference to spatial structured-ness (an abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question. This study connects semantic structure to humans' conceptual structure, and provides a processing model that incorporates contextual retrieval.
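
The predicted rating pattern can be sketched with invented toy data. The 1~5 ratings below are NOT the study's data; they only illustrate the comparison logic against the 3.5 acceptability threshold.

```python
# Toy sketch of the condition comparison (invented ratings, not study data).
from statistics import mean

ratings = {
    "A_agentive":     [5, 4, 5, 4, 5, 4],
    "C_constitutive": [4, 5, 4, 5, 4, 4],
    "N_neutral":      [4, 3, 4, 4, 3, 4],
}

means = {cond: mean(vals) for cond, vals in ratings.items()}

# SI-hypothesis pattern: both biasing conditions exceed Neutral, and all
# conditions stay above the 3.5 acceptability threshold.
biased_higher = (means["A_agentive"] > means["N_neutral"]
                 and means["C_constitutive"] > means["N_neutral"])
all_acceptable = all(m > 3.5 for m in means.values())
```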

Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual

Procedia PDF Downloads 331
90 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

Key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing machine learning algorithms for defect characterization in the liquid detergent manufacturing process. Performance testing was carried out on various machine learning models, namely Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks, for detecting and classifying defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs achieved 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which also reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects, namely viscosity and color variation.
These performance metrics represent a substantial improvement in defect detection accuracy over the roughly 80% level achieved to date by rule-based systems. Moreover, the optimized models hasten defect characterization: with real-time data processing, detection time falls below 15 seconds, from an average of 3 minutes with manual inspection. This time saving is combined with a 25% reduction in production downtime through proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization therefore gives liquid detergent companies scalability, efficiency, and improved operational performance with higher levels of product quality. More generally, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
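
A minimal sketch of how the quoted metrics (accuracy and false-positive rate) are computed. Synthetic viscosity readings and a hypothetical threshold rule stand in for the study's trained models and production data.

```python
# Synthetic evaluation sketch (simulated data, hypothetical threshold).
import random

random.seed(0)

# In-spec batches around 1000 cP, defective batches around 1300 cP (made up).
samples = ([(random.gauss(1000, 40), 0) for _ in range(500)]
           + [(random.gauss(1300, 40), 1) for _ in range(50)])

THRESHOLD = 1150.0  # hypothetical "wrong viscosity" decision boundary

def predict(viscosity):
    return 1 if viscosity > THRESHOLD else 0

correct = sum(1 for v, label in samples if predict(v) == label)
false_pos = sum(1 for v, label in samples if label == 0 and predict(v) == 1)
negatives = sum(1 for _, label in samples if label == 0)

accuracy = correct / len(samples)
false_positive_rate = false_pos / negatives
```

A trained classifier replaces the fixed threshold in practice, but the accuracy and false-positive bookkeeping is the same.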

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 16
89 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the mixing of the coagulant. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model at different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section configurations and one multiple concentric annular cross-section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the value assumed for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
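
The kind of theoretical baseline the laboratory measurements were compared against can be sketched as follows. It combines the Darcy-Weisbach equation with the hydraulic-diameter approximation for an annulus; the friction-factor correlations used here are the common circular-pipe forms, one plausible choice among the several equations the paper evaluates, with illustrative geometry and flow values.

```python
# Head-loss sketch for a concentric annulus (illustrative, not calibrated).
import math

def annulus_head_loss(q, d_outer, d_inner, length, nu=1.0e-6, g=9.81):
    """Head loss (m) for flow rate q (m^3/s) through a concentric annulus."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)
    d_h = d_outer - d_inner            # hydraulic diameter of an annulus
    v = q / area                       # mean axial velocity
    re = v * d_h / nu                  # Reynolds number
    if re < 2300:
        f = 64.0 / re                  # laminar, circular-pipe form
    else:
        f = 0.316 * re ** -0.25        # Blasius smooth-pipe correlation
    # Darcy-Weisbach: h = f * (L / D_h) * v^2 / (2 g)
    return f * (length / d_h) * v**2 / (2.0 * g)

h_turb = annulus_head_loss(q=0.002, d_outer=0.10, d_inner=0.05, length=2.0)
h_lam = annulus_head_loss(q=0.0001, d_outer=0.10, d_inner=0.05, length=2.0)
```

More refined annulus-specific friction factors replace the circular-pipe forms in the equations the study compares, but the overall structure of the estimate is the same.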

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 217
88 Human Wildlife Conflict Outside Protected Areas of Nepal: Causes, Consequences and Mitigation Strategies

Authors: Kedar Baral

Abstract:

This study was carried out in the Mustang, Kaski, Tanahun, Baitadi, and Jhapa districts of Nepal. The study explored the spatial and temporal patterns of human-wildlife conflict (HWC), the socio-economic factors associated with it, the impacts of conflict on people's lives and livelihoods and on the survival of wildlife species, and the impact of climate change and forest fire on HWC. The study also evaluated people's attitudes towards wildlife conservation and assessed relevant policies and programs. A questionnaire survey was carried out with 250 respondents, and both socio-demographic and HWC-related information was collected. Secondary information was collected from Divisional Forest Offices and the Annapurna Conservation Area Project. HWC events were grouped by season, month, and site (forest type, distance from forest, and settlement), and the coordinates of the events were exported to ArcGIS. Collected data were analyzed using descriptive statistics in Excel and the R program. A total of 1,465 events were recorded in the 5 districts between 2015 and 2019. Of these, livestock killing, crop damage, human attack, and cattle-shed damage accounted for 70%, 12%, 11%, and 7% of events, respectively. Among 151 human attack cases, 23 people were killed and 128 were injured. The elephant in the Terai, the common leopard and monkey in the Middle Mountains, and the snow leopard in the high mountains were found to be the major problem animals. Common leopard attacks were more frequent in autumn, in the evening, and in human settlement areas, whereas elephant attacks were more frequent in winter, during the daytime, and on farmland. Poor farmers were the most affected, losing 26% of their income to crop raiding and livestock depredation. On the other hand, people are killing many wild animals in revenge, and this number is increasing every year. Based on people's perceptions, climate change is causing increased temperatures and forest fire events and has decreased water sources within the forest.
Due to the scarcity of food and water within forests, wildlife are compelled to move into human settlement areas; hence, HWC events are increasing. Nevertheless, more than half of the respondents were positive about conserving all wildlife species. Forests outside protected areas are under the community forestry (CF) system, which has restored the forests, improved habitat, and increased wildlife. However, CF policies and programs were found to focus on forest management, with the least priority given to wildlife conservation and HWC mitigation. The government's compensation/relief scheme for wildlife damage was found somewhat effective in managing HWC, but its lengthy process, its applicability to damage from only a few wildlife species, and the sharply increasing number of events make it necessary to revisit the scheme. Based on these findings, the study suggests carrying out awareness-raising activities for poor farmers, linking people's property to insurance schemes, conducting habitat management activities within CF, promoting unpalatable crops, improving livestock sheds, simplifying the compensation scheme and establishing a fund at the district level, and incorporating wildlife conservation and HWC mitigation programs into CF. Finally, the study suggests rigorous research to understand the impacts of current forest management practices on forests, biodiversity, wildlife, and HWC.
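
The event tabulation behind the reported percentages can be sketched as follows. The per-type counts are reconstructed from the abstract's total of 1,465 events and the quoted shares, not taken from the raw dataset.

```python
# Descriptive-statistics sketch of HWC events by type (reconstructed counts).
from collections import Counter

events = Counter({
    "livestock_killing": 1026,   # ~70% of recorded events
    "crop_damage": 176,          # ~12%
    "human_attack": 161,         # ~11%
    "cattle_shed_damage": 102,   # ~7%
})

total = sum(events.values())
shares = {etype: round(100 * n / total) for etype, n in events.items()}
```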

Keywords: community forest, conflict mitigation, wildlife conservation, climate change

Procedia PDF Downloads 115
87 Palynological Investigation and Quality Determination of Honeys from Some Apiaries in Northern Nigeria

Authors: Alebiosu Olugbenga Shadrak, Victor Victoria

Abstract:

Honey bees exhibit preferences in their foraging behaviour on pollen and nectar, for food and honey production, respectively. Melissopalynology is the study of pollen in honey and other honey products. Several studies have been conducted on the palynology of honeys from the southern parts of Nigeria, but there are relatively scant records from the northern region of the country. This study aimed to identify the plants favourably visited by honey bees, Apis mellifera var. adansonii, at some apiaries in Northern Nigeria, as well as to determine the quality of the honeys produced. Honeys were harvested and collected from four apiaries of the region, namely: Sarkin Dawa missionary bee farm, Taraba State; Eleeshuwa Bee Farm, Keffi, Nassarawa State; Bulus Beekeeper Apiaries, Kagarko, Kaduna State; and Mai Gwava Bee Farm, Kano State. These honeys were acetolysed for palynological microscopic analysis and subjected to standard treatment methods for the determination of their proximate composition and sugar profile. Fresh anthers of two dominantly represented plants in the honeys were then collected for the quantification of their pollen protein contents, using the micro-Kjeldahl procedure. A total of 30 pollen types were identified in the four honeys, some of them common to several honeys. A classification method for expressing pollen frequency class was employed: Senna cf. siamea, Terminalia cf. catappa, Mangifera indica, Parinari curatellifolia, Vitellaria paradoxa, Elaeis guineensis, Parkia biglobosa, Phyllanthus muellerianus and Berlinia grandiflora were classed as “Frequent” (16-45%), while the others were either “Rare” (3-15%) or “Sporadic” (less than 3%). Pollen protein levels of the two abundantly represented plants, Senna siamea (15.90 mg/ml) and Terminalia catappa (17.33 mg/ml), were found to be comparatively low. The biochemical analyses revealed varying amounts of proximate composition, non-reducing sugar and total sugar levels in the honeys.
The results of this study indicate that pollen and nectar of the “Frequent” plants were preferentially foraged by honeybees in the apiaries. The estimated pollen protein contents of Senna siamea and Terminalia catappa were comparatively low and thus unlikely to have driven their favourable visitation by honeybees. However, the relatively high representation of Senna cf. siamea in the pollen spectrum might result from its characteristically bright-coloured and well-scented flowers, aiding greater entomophily. Terminalia catappa, Mangifera indica, Elaeis guineensis, Vitellaria paradoxa, and Parkia biglobosa are typical food crops; hence they probably attracted the honeybees owing to the rich nutritional value of their fruits and seeds. Another possible driver of greater entomophily of the favourably visited plants is certain nutritional constituents of their pollen and nectar, which were not investigated in this study. The nutritional composition of the honeys was observed to fall within the safe limits of the international norms prescribed by the Codex Alimentarius Commission; they are thus good honeys for human consumption. It is therefore imperative to adopt strategic conservation steps to ensure that these favourably visited plants are protected from indiscriminate anthropogenic activities, and to encourage apiarists in the country to establish their bee farms closer to these plants for optimal honey yield.
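The pollen frequency bands quoted above can be expressed as a small helper function; the class names and percentage bands come from the abstract, while the exact boundary handling (e.g. how a share of 15.5% is binned) and the treatment of shares above 45% are assumptions of this sketch:

```python
def pollen_frequency_class(percent: float) -> str:
    """Map a pollen type's share of the spectrum to the classes used above:
    Frequent (16-45%), Rare (3-15%), Sporadic (<3%). Shares above 45% fall
    outside the bands named in the abstract and are flagged explicitly."""
    if percent > 45:
        return "above the 'Frequent' band"
    if percent >= 16:
        return "Frequent"
    if percent >= 3:
        return "Rare"
    return "Sporadic"

print(pollen_frequency_class(20.0))  # a share like Senna cf. siamea's
```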

Keywords: honeybees, melissopalynology, preferentially foraged, nutritional, bee farms, proximally

Procedia PDF Downloads 276
86 Clinico-pathological Study of Xeroderma Pigmentosum: A Case Series of Eight Cases

Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar

Abstract:

Introduction: Xeroderma pigmentosum (XP) is a rare inherited (autosomal recessive) disease resulting from impairment of DNA repair, involving the recognition and repair of ultraviolet radiation (UVR)-induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancers, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas with frequent consanguinity. Being extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India over a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age at presentation was 1.2 years (range 7 months to 3 years), and three children presented during infancy. The male to female ratio was 5:3, and all were born of consanguineous marriage. They presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time there was progressive xerosis, atrophy, wrinkling, and poikiloderma.
Six patients had varying degrees of ocular involvement, and three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, and clouding and scarring of the cornea with complete or partial loss of vision, as well as ophthalmic malignancies. 50% (n=4) of cases had premalignant (actinic keratosis) and malignant skin and ocular lesions, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in early childhood. One patient had multiple simultaneous malignancies (SCC, BCC, and melanoma). Subnormal intelligence was the only neurological feature noticed; none had sensorineural hearing loss, microcephaly, neuroregression, or neurodeficit. All the patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists and psychiatrists. Conclusion: Although to date there is no complete cure for XP and the disease is ultimately fatal, increased awareness, early diagnosis followed by persistent vigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients’ quality of life and life expectancy. Further research is required on the optimal management of XP, specifically the role and possibilities of gene therapy.

Keywords: childhood malignancies, dermato-pathological findings, eastern India, xeroderma pigmentosum

Procedia PDF Downloads 75
85 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. 
AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 32
84 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials

Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco

Abstract:

Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat conduction properties of the nanocomposites are studied. Polymers are thermal insulators; therefore, the inclusion of conductive particles is the typical route to a sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed for use in polymer nanocomposites, including carbon nanotubes and graphene. Indeed, thermal conductivities for both carbon nanotubes and graphene have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically as a function of size, number of layers, density of topological defects, and re-hybridization defects, as well as the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid-phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp2-bonded carbons, remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphenes (MLG), reduced graphene oxide (rGO), and graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials.
In this paper, different types of graphene-related materials were characterized for their chemical/physical properties as well as for the thermal properties of individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual flakes with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by reactive melt extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectiveness nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than that of the pristine polymer, was obtained with 10 wt% of annealed GNPs, in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.

Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites

Procedia PDF Downloads 263
83 Silk Fibroin-PVP-Nanoparticles-Based Barrier Membranes for Tissue Regeneration

Authors: Ivone R. Oliveira, Isabela S. Gonçalves, Tiago M. B. Campos, Leandro J. Raniero, Luana M. R. Vasconcellos, João H. Lopes

Abstract:

Originally, the principles of guided tissue/bone regeneration (GTR/GBR) were followed to restore the architecture and functionality of the periodontal system. In essence, a biocompatible polymer-based occlusive membrane is used as a barrier to prevent migration of epithelial and connective tissue to the regenerating site. In this way, progenitor cells located in the remaining periodontal ligament can recolonize the root area and differentiate into new periodontal tissue, alveolar bone, and a new connective attachment. The use of synthetic or collagen-derived membranes, with or without calcium phosphate-based bone graft materials, has been the usual treatment. Ideally, these membranes need to exhibit sufficient initial mechanical strength to allow handling and implantation, withstand the various mechanical stresses suffered during surgery while maintaining their integrity, and support bone tissue regeneration and repair by resisting cellular traction forces and wound contraction forces during tissue healing in vivo. Although different GTR/GBR products are available on the market, they have serious deficiencies in terms of mechanical strength. Aiming to improve the mechanical strength and osteogenic properties of the membrane, this work evaluated the production of membranes that integrate the biocompatibility of a natural polymer (silk fibroin, FS) and the synthetic polymer poly(vinyl pyrrolidone) (PVP) with graphene nanoplates (NPG) and gold nanoparticles (AuNPs), using electrospinning equipment (AeroSpinner L1.0, Areka) that allows high-voltage spinning and/or solution blowing at a high production rate, enabling development on an industrial scale. Silk fibroin solves many of the problems presented by collagen and was used in this work for its combined merits, such as programmable biodegradability, biocompatibility, and sustainable large-scale production.
Graphene has attracted considerable attention in recent years as a potential biomaterial for mechanical reinforcement because of its unique physicochemical properties; it was added here to improve the mechanical properties of the membranes, with or without AuNPs, which have shown great potential in regulating osteoblast activity. The preparation of FS from silkworm cocoons involved cleaning, degumming, dissolution in lithium bromide, dialysis, lyophilization, and dissolution in hexafluoroisopropanol (HFIP) to prepare the solution for electrospinning; crosslinking tests were performed in methanol. The NPGs were characterized and treated in nitric acid for functionalization, to improve the adhesion of the nanoplates to the PVP fibers. PVP-NPG membranes were produced with 0.5, 1.0 and 1.5 wt% NPG, functionalized or not, and evaluated by SEM/FEG, FTIR, mechanical strength tests, and cell culture assays. Functionalized NPG particles showed stronger binding, remaining adhered to the fibers. Increasing the graphene content resulted in higher mechanical strength of the membrane and greater biocompatibility. The FS-PVP-NPG-AuNPs hybrid membranes were produced by simultaneously electrospinning, from separate syringes, the FS solution and the solution containing PVP with 1.5 wt% NPG, in the presence or absence of AuNPs. After crosslinking, they were characterized by SEM/FEG, FTIR, and behavior in cell culture. The presence of NPG-AuNPs increased cell viability and the presence of mineralization nodules.

Keywords: barrier membranes, silk fibroin, nanoparticles, tissue regeneration

Procedia PDF Downloads 5
82 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome

Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo

Abstract:

The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failures and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile, and rapid postoperative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study, we report the results of this technique as a favorable option in the treatment of OAG, for its benefits in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation, alone or in combination with phacoemulsification, from October 2018 to June 2019. The primary endpoint of the study was the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with efficacy in IOP lowering and the eventual need for further procedures. Data were recorded in Microsoft Excel, and the analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points.
The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points, so the paired Student’s t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. Compared to baseline, mean IOP and the mean number of antiglaucoma medications decreased significantly at 12 months after surgery, from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (a 38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (an 84% reduction), respectively (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%), and a multiple internal layer group (n=14, 38.9%). There was no significant difference in IOP reduction from baseline between the 4 groups at the month 12 follow-up visit. Adverse events included decreased bleb function (n=14, 38.9%), hypotony (n=8, 22.2%), and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided a significant reduction in both IOP and antiglaucoma medication use, with an elevated safety profile.
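The statistical pipeline described above (normality check, paired t-test, Pearson correlation) can be sketched with SciPy; the IOP values below are simulated from the reported means and standard deviations, not the actual patient data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical paired IOP readings (mmHg); the study reported means of
# 27.33 +/- 7.67 at baseline and 16.3 +/- 2.89 at month 12 (n=36 eyes).
baseline = rng.normal(27.33, 7.67, 36)
month12 = rng.normal(16.3, 2.89, 36)

# 1. Kolmogorov-Smirnov test against a normal distribution fitted to the data
ks = stats.kstest(baseline, "norm", args=(baseline.mean(), baseline.std(ddof=1)))

# 2. Paired Student's t-test, baseline vs. postoperative IOP
tt = stats.ttest_rel(baseline, month12)

# 3. Pearson correlation (the study correlated Day 1 IOP with Month 12 IOP)
r, p = stats.pearsonr(baseline, month12)
print(ks.pvalue, tt.pvalue, r)
```

With a mean difference this large relative to the spread, the paired t-test p-value lands far below the 0.001 threshold quoted in the abstract.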

Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant

Procedia PDF Downloads 140
81 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change

Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo

Abstract:

With the changing climate, precipitation events are intensifying in tropical river basins. Since these basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian summer monsoon rainfall. Based on their performance in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR and MIROC-ESM-CHEM) are selected, a higher Taylor Skill Score being the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5.
A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical and projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected periods compared to the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs. However, the flood risk then declines in the 2070s, towards the end of the century (2070-2099). The methodology adopted herein for flood risk assessment is one of a kind and may be implemented in any river basin worldwide. The results obtained from this study can help in future flood preparedness by informing suitable flood adaptation strategies.
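The L-moments-based flood frequency step can be illustrated for a Gumbel (EV1) distribution, a common choice for annual maximum floods; the discharge series below is hypothetical, and the abstract does not state which distribution the authors actually fitted:

```python
import math

def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 = (1/n) * sum over ranks of ((rank-1)/(n-1)) * x_(rank), ascending order
    b1 = sum((i / (n - 1)) * x[i] for i in range(1, n)) / n
    return b0, 2 * b1 - b0

def gumbel_quantile(data, T):
    """Design flood for return period T (years), Gumbel fitted by L-moments."""
    l1, l2 = sample_l_moments(data)
    alpha = l2 / math.log(2)
    xi = l1 - 0.5772156649 * alpha          # Euler-Mascheroni constant
    return xi - alpha * math.log(-math.log(1 - 1 / T))

# Hypothetical annual maximum discharges (m^3/s) at the delta head
ams = [8200, 9500, 7100, 12400, 10800, 9900, 8700, 11500, 13200, 7600]
print(gumbel_quantile(ams, 10))  # 10-year return period design flood
```

The same quantile function evaluated on historical versus GCM-projected series gives the design floods that are then scaled through the non-dimensional hydrographs.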

Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation

Procedia PDF Downloads 72
80 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School

Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom

Abstract:

The requirement for an organization implementing a quality management system to sustain compliance and its commitment to continuous improvement is ever higher. Offices and units are expected to show high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under project management, for which it holds a quality management certification. To further realize its mandate as a think-tank and capacity builder of the government, DAP expanded its operations and began to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs in public management and productivity & quality, aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which has directly increased the number of students. There is an apparent necessity to align the project management system into a more educational system; otherwise, it will no longer be responsive to the developments taking place. The GSPDM strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country. If innovation does not take root in the grounds of GSPDM, then how will it serve the purpose of “walking the talk”? This research was conducted to assess the diverse flows of the existing internal operations and processes of the DAP’s project management and GSPDM’s school management, to serve as the basis for developing a system that harmonizes the two into one: the Learning Management System.
The study documented the existing processes of GSPDM, following the project management phases of conceptualization & development, negotiation & contracting, mobilization, implementation, and closure, in flow charts of the key activities. The primary sources of information were the different groups involved in the delivery of the graduate programs: the executive, the learning management team, and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all programs within the GSPDM. The unique processes cover the three important areas of school management: students, curriculum, and faculty. The required processes of these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research shall identify critical reports and generate manageable documents and records to ensure accurate, consistent and reliable information. The researchers carried out an in-depth review of the DAP-GSPDM’s mandate, analyzed the various documents, and conducted a series of focused group discussions. A comprehensive review of the prior flow chart system and of various models of school management systems was made. The final output of the research is a work instructions manual that will be presented to the Academy’s Quality Management Council and will eventually form an additional scope for ISO certification. The manual shall include documented forms, iterative flow charts, and a program Gantt chart, with a parallel development of automated systems.

Keywords: productivity, quality, learning management system, agency-based graduate school

Procedia PDF Downloads 318
79 Reactive X Proactive Searches on Internet After Leprosy Institutional Campaigns in Brazil: A Google Trends Analysis

Authors: Paulo Roberto Vasconcellos-Silva

Abstract:

The "Janeiro Roxo" (Purple January) campaign in Brazil aims to promote awareness of leprosy and its early symptoms. The COVID-19 pandemic has adversely affected institutional campaigns, mostly considering leprosy a neglected disease by the media. Google Trends (GT) is a tool that tracks user searches on Google, providing insights into the popularity of specific search terms. Our prior research has categorized online searches into two types: "Reactive searches," driven by transient campaign-related stimuli, and "Proactive searches," driven by personal interest in early symptoms and self-diagnosis. Using GT we studied: (i) the impact of "Janeiro Roxo" on public interest in leprosy (assessed through reactive searches) and its early symptoms (evaluated through proactive searches) over the past five years; (ii) changes in public interest during and after the COVID-19 pandemic; (iii) patterns in the dynamics of reactive and proactive searches Methods: We used GT's "Relative Search Volume" (RSV) to gauge public interest on a scale from 0 to 100. "HANSENÍASE" (HAN) was a proxy for reactive searches, and "HANSENÍASE SINTOMAS" (leprosy symptoms) (H.SIN) for proactive searches (interest in leprosy or in self-diagnosis). We analyzed 261 weeks of data from 2018 to 2023, using polynomial trend lines to model trends over this period. Analysis of Variance (ANOVA) was used to compare weekly RSV, monthly (MM) and annual means (AM). Results: Over a span of 261 weeks, there was consistently higher Relative Search Volume (RSV) for HAN compared to H.SIN. Both search terms exhibited their highest (MM) in January months during all periods. COVID-19 pandemic: a decline was observed during the pandemic years (2020-2021). There was a 24% decrease in RSV for HAN and a 32.5% decrease for H.SIN. Both HAN and H.SIN regained their pre-pandemic search levels in January 2022-2023. 
Breakpoints indicated abrupt changes in the 26th week (February 2019) and in the 55th and 213th weeks (September 2019 and September 2022), related to the September regional campaigns (interrupted in 2020-2021). Trend lines for HAN exhibited an upward curve between the 33rd and 45th weeks (April to June 2019), a pandemic-related downward trend between the 120th and 136th weeks (December 2020 to March 2021), and an upward trend between the 220th and 240th weeks (November 2022 to March 2023). Conclusion: The "Janeiro Roxo" campaign, along with other media-driven activities, exerts a notable influence on both reactive and proactive searches related to leprosy. Reactive searches, driven by campaign stimuli, significantly outnumber proactive searches. Despite the interruption of the campaign during the pandemic, both types of searches subsequently resurged. This recovery after the campaign interruption underscores the effectiveness of such initiatives, particularly at the national level, and suggests that campaigns aimed at leprosy awareness can be highly successful in stimulating proactive public engagement. Evaluating internet-based campaign programs proves valuable not only for assessing their impact but also for identifying the needs of vulnerable regions. These programs can play a crucial role in integrating regions and highlighting their needs for assistance services in the context of leprosy awareness.
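As a minimal sketch of the monthly-means comparison described above, the snippet below groups weekly RSV values by month and locates the peak month. The weekly values are hypothetical placeholders, not actual Google Trends data, and the real analysis also involved polynomial trend fitting and ANOVA, which are not reproduced here.

```python
# Illustrative sketch (not the authors' code): computing monthly means (MM)
# of weekly Relative Search Volume (RSV) and locating the peak month.
# The RSV values below are hypothetical, not actual Google Trends data.
from collections import defaultdict
from statistics import mean

# (month, weekly RSV) pairs for one year; January weeks spike after a campaign
weekly_rsv = [
    (1, 90), (1, 100), (1, 85), (1, 80),   # January ("Janeiro Roxo")
    (2, 40), (2, 35), (2, 30), (2, 32),
    (6, 20), (6, 18), (6, 22), (6, 21),
    (9, 55), (9, 60), (9, 50), (9, 52),    # September regional campaigns
]

def monthly_means(pairs):
    """Group weekly RSV by month and average (the MM used in the abstract)."""
    buckets = defaultdict(list)
    for month, rsv in pairs:
        buckets[month].append(rsv)
    return {m: mean(v) for m, v in buckets.items()}

mm = monthly_means(weekly_rsv)
peak_month = max(mm, key=mm.get)
print(peak_month)  # month 1 (January) has the highest monthly mean
```

With these placeholder values, January's monthly mean (88.75) dominates, mirroring the January peaks the abstract reports.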

Keywords: health communication, leprosy, health campaigns, information seeking behavior, Google Trends, reactive searches, proactive searches, leprosy early identification

Procedia PDF Downloads 60
78 Heterotopic Ossification: DISH and Myositis Ossificans in Human Remains Identification

Authors: Patricia Shirley Almeida Prado, Liz Brito, Selma Paixão Argollo, Gracie Moreira, Leticia Matos Sobrinho

Abstract:

Diffuse idiopathic skeletal hyperostosis (DISH), also known as Forestier's disease and ankylosing hyperostosis of the spine, is a degenerative bone disease characterized by a tendency toward ossification of the anterior longitudinal spinal ligament without intervertebral disc disease. DISH is not considered to be osteoarthritis, although the two conditions commonly occur together. Diagnostic criteria include fusion of at least four vertebrae by bony bridges arising from the anterolateral aspect of the vertebral bodies. These vertebral bodies have a 'dripping candle wax' appearance; periosteal new bone formation can also be seen on the anterior surface of the vertebral bodies, and there is no ankylosis at the zygapophyseal facet joints. Clinically, patients with DISH tend to be asymptomatic, although some report moderate pain and stiffness in the upper back. The disease is more common in men, uncommon in patients younger than 50 years, and rare in patients under 40 years old. In modern populations, DISH is found in association with obesity, type II diabetes, abnormal vitamin A metabolism, and higher levels of serum uric acid; there is also some association with an increased risk of stroke and other cerebrovascular diseases. DISH can be confused with heterotopic ossification, which is bone formation in the soft tissues resulting from trauma, wounding, surgery, burns, prolonged immobility, and some central nervous system disorders. These conditions have been described extensively as myositis ossificans, which in turn can be confused with fibrodysplasia (myositis) ossificans progressiva. As with DISH, the symptomatology can be absent or extensive enough to impair joint function. A third source of confusion is the enthesopathies, which occur throughout the skeleton and are common on the ischial tuberosities, iliac crests, patellae, and calcaneus.
Ankylosis of the sacroiliac joint by bony bridges may also be found. CASE 1: skeletal remains comprising the skull, some vertebrae, and the scapulae. The case remains unidentified; owing to the scarcity of bone remains, the sex, age, and ancestry profile was compromised, but the pathognomonic DISH findings helped to estimate sex and age characteristics. In addition to DISH, these skeletal remains showed other bone alterations and non-metric traits, such as fusion of the first vertebra with the occipital bone, maxillary and palatine tori, and a scapular foramen on the right scapula. CASE 2: this skeleton shows extensive heterotopic ossification in the greater trochanter area of the left femur; the right fibula has a healed fracture in its body, with extensive bone growth along its interosseous crest; pronounced bone growth can also be observed on the ilium in the region of the inferior gluteal line; and the skull presents pronounced mandibular, maxillary, and palatine tori. Despite all this pronounced heterotopic ossification, the skeleton overall presents moderate bone overgrowth that is not linked with aging, since it belongs to a young unidentified individual. Appropriate osteopathological diagnosis supports the human identification process through medical reports and also contributes epidemiological data that can strengthen vulnerable anthropological estimates.

Keywords: bone disease, DISH, human identification, human remains

Procedia PDF Downloads 332
77 Self-Medication with Antibiotics, Evidence of Factors Influencing the Practice in Low and Middle-Income Countries: A Systematic Scoping Review

Authors: Neusa Fernanda Torres, Buyisile Chibi, Lyn E. Middleton, Vernon P. Solomon, Tivani P. Mashamba-Thompson

Abstract:

Background: Self-medication with antibiotics (SMA) is a global concern, with a higher incidence in low and middle-income countries (LMICs). Despite intense worldwide efforts to control and promote the rational use of antibiotics, the continuing practice of SMA systematically exposes individuals and communities to the risk of antibiotic resistance and other undesirable antibiotic side effects. Moreover, it increases health system costs, as more powerful antibiotics must be acquired to treat resistant infections. This review thus maps evidence on the factors influencing self-medication with antibiotics in these settings. Methods: The search strategy for this review involved electronic databases including PubMed, Web of Knowledge, Science Direct, EBSCOhost (PubMed, CINAHL with Full Text, Health Source - Consumer Edition, MEDLINE), Google Scholar, BioMed Central, and the World Health Organization library, using the search terms 'self-medication', 'antibiotics', 'factors', and 'reasons'. Our search included studies published from 2007 to 2017. Thematic analysis was performed to identify the patterns of evidence on SMA in LMICs. The mixed methods appraisal tool (MMAT), version 2011, was employed to assess the quality of the included primary studies. Results: Fifteen studies met the inclusion criteria. Studies included populations from rural (46.4%), urban (33.6%), and combined (20%) settings of the following LMICs: Guatemala (2 studies), India (2), Indonesia (2), Kenya (1), Laos (1), Nepal (1), Nigeria (2), Pakistan (2), Sri Lanka (1), and Yemen (1). The total sample size of the 15 included studies was 7676 participants. The findings of the review show a high prevalence of SMA, ranging from 8.1% to 93%. Accessibility, affordability, and the conditions of health facilities (long waiting times, quality of services and workers), as well as poor health-seeking behavior and lack of information, are factors that influence SMA in LMICs.
Antibiotics such as amoxicillin, metronidazole, amoxicillin/clavulanic acid, ampicillin, ciprofloxacin, azithromycin, penicillin, and tetracycline were those most frequently used for SMA. The major sources of antibiotics included pharmacies, drug stores, leftover drugs, family and friends, and old prescriptions. Sore throat, common cold, cough with mucus, headache, toothache, flu-like symptoms, pain, fever, runny nose, upper respiratory tract infections, urinary symptoms, and urinary tract infections were the common symptoms managed with SMA. Conclusion: Although the information on factors influencing SMA in LMICs is unevenly distributed, the available information revealed the existence of research evidence on antibiotic self-medication in some LMICs. SMA practices are influenced by social-cultural determinants of health and are frequently associated with poor dispensing and prescribing practices, deficient health-seeking behavior, and, consequently, inappropriate drug use. Therefore, there is still a need for further studies (qualitative, quantitative, and randomized controlled trials) on the factors and reasons for SMA in order to properly address this public health problem in LMICs.

Keywords: antibiotics, factors, reasons, self-medication, low and middle-income countries (LMICs)

Procedia PDF Downloads 215
76 Application of Electrical Resistivity Surveys on Constraining Causes of Highway Pavement Failure along Ajaokuta-Anyigba Road, North Central Nigeria

Authors: Moroof O. Oloruntola, Sunday Oladele, Daniel O. Obasaju, Victor O. Ojekunle, Olateju O. Bayewu, Ganiyu O. Mosuro

Abstract:

Integrated geophysical methods involving Vertical Electrical Sounding (VES) and 2D resistivity surveys were deployed to gain insight into the influence of the two different rock types (mica-schist and granite gneiss) underlying the road alignment on the incessant highway failure along Ajaokuta-Anyigba, North-central Nigeria. The highway serves as a link road to the capital (Abuja) via Lokoja for the single largest cement factory in Africa (Dangote Cement Factory) and two major ceramic industries. A 2D electrical resistivity survey (dipole-dipole array) and VES (Schlumberger array) were employed. Twenty-two (22) 2D profiles were occupied: twenty (20) were conducted about 1 m away from the unstable section underlain by mica-schist, each with a profile length of approximately 100 m, and two (2) were conducted about 1 m away from the stable section, each with a profile length of 100 m, due to barriers caused by the drainage system and the granite gneiss outcropping at the flanks of the road. A spacing of 2 m was used for good image resolution of the near-surface. On each 2D profile, between one and three VES were conducted; thus, forty-eight (48) soundings were acquired. Partial curve matching and the WinResist software were used to obtain the apparent and true resistivity values of the 1D survey, while the DiprofWin software was used for processing the 2D survey. Two lithologic sections exposed by abandoned river channels adjacent to two profiles, as well as knowledge of the geology of the area, helped constrain the VES and 2D processing and interpretation. Generally, the resistivity values obtained reflect the parent rock type, degree of weathering, moisture content, and competency of the tested area. Resistivity values of < 100, 100 - 950, 1000 - 2000, and > 2500 ohm-m were interpreted as clay, weathered layer, partly weathered layer, and fresh basement, respectively.
The VES results and 2D resistivity structures along the unstable segment showed similar lithologic characteristics and sequences, dominated by a clayey substratum within the depth range of 0 - 42.2 m. The clayey substratum is a product of intensive weathering of the parent rock (mica-schist) and constitutes weak foundation soil, causing the highway failure. The failure is further exacerbated by the heavy-duty trucks that ply this section round the clock, owing to its proximity to two major ceramic industries in the state, and by the lack of a drainage system. The two profiles on the stable section show 2D structures that are remarkably different from those of the unstable section, with very thin topsoils, a higher-resistivity weathered substratum (indicating the presence of coarse fragments from the parent rock), and a shallow depth to the basement (1.0 - 7.1 m). The presence of drainage and the lower volume of heavy-duty trucks also contribute to the pavement stability of this section of the highway. The resistivity surveys effectively delineated two contrasting soil profiles of the subbase/subgrade that reflect the variation in mineralogy of the underlying parent rocks.
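The interpretation thresholds quoted in the abstract can be sketched as a simple lookup. The band limits are taken from the text; the gaps between the stated bands (950-1000 and 2000-2500 ohm-m) are labelled "transitional" here, which is an assumption of this sketch, not the authors' convention.

```python
# Illustrative sketch: mapping apparent resistivity (ohm-m) to the lithologic
# classes quoted in the abstract (< 100 clay; 100-950 weathered; 1000-2000
# partly weathered; > 2500 fresh basement). Values falling in the gaps
# between the stated bands are labelled "transitional" (our assumption).
def classify_resistivity(rho_ohm_m: float) -> str:
    if rho_ohm_m < 100:
        return "clay"
    if rho_ohm_m <= 950:
        return "weathered layer"
    if 1000 <= rho_ohm_m <= 2000:
        return "partly weathered layer"
    if rho_ohm_m > 2500:
        return "fresh basement"
    return "transitional"

# Example: a low reading typical of the clayey substratum on the unstable
# (mica-schist) segment vs. a high reading from the stable granite-gneiss side
print(classify_resistivity(60))    # clay
print(classify_resistivity(3000))  # fresh basement
```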

Keywords: clay, geophysical methods, pavement, resistivity

Procedia PDF Downloads 166
75 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in contexts relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas and (ii) immunisation in rural contexts with relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention.
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from the WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, and mass media, or were multi-component. The main gaps in existing tools were the assessment of macro/policy-level factors, the exploration of effective immunisation communication channels, and the measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools for five common challenges (identifying populations, understanding communities, issues with service access and use, improving services, and improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic in that data volumes, instructions, and activities could overwhelm managers, and tools are not always applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further.
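The problem-tree idea, keying candidate tools to each of the five common challenges, can be sketched as a plain mapping. The five challenge names are from the abstract; the specific tool assignments below are illustrative, drawn only from tool categories the abstract mentions, and are not the authors' published framework.

```python
# Minimal sketch of the problem-tree approach: each of the five common
# challenges named in the abstract keys a list of candidate tool categories.
# The assignments are illustrative (drawn from tool types the abstract
# mentions), not the authors' actual framework.
PROBLEM_TREE = {
    "identifying populations": ["geographical mapping", "secondary analysis of routine data"],
    "understanding communities": ["key informant interviews", "focus group discussions"],
    "issues with service access and use": ["cross-sectional surveys"],
    "improving services": ["programme guidance documents (WHO/UNICEF/USAID)"],
    "improving coverage": ["multi-component interventions", "reminder/recall", "outreach"],
}

def suggest_tools(challenge: str) -> list:
    """Return candidate tool categories for a named challenge (empty if unknown)."""
    return PROBLEM_TREE.get(challenge, [])

print(suggest_tools("understanding communities"))
```

A programme manager would pick a challenge based on context and available data, then work through the suggested tool categories, which is the selection logic the abstract describes.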

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 139
74 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competition between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak, an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient.
We performed whole-genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries of the GBM and used HATCHet to quantify its clonal composition and how it changed between the two surgeries. HATCHet identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1-post and T2-FLAIR scans acquired after the first surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM's tumor cell density and ploidy composition before the second surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
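The Fokker-Planck equations that distribute resources across S3MB voxels can be illustrated, in their simplest form, by the diffusion term discretized on a 1-D line of voxels. This is a toy stand-in only: the grid size, diffusivity, and time step below are hypothetical, and the real model is 3-D with many coupled state variables per voxel.

```python
# Toy sketch of the diffusion term of a Fokker-Planck equation on a 1-D line
# of voxels, as a stand-in for how a resource (e.g. glucose) could spread
# across an S3MB-style grid. All parameters are hypothetical.
def diffuse(conc, d=0.2):
    """One explicit Euler step of du/dt = d * d2u/dx2 with no-flux boundaries.

    d is the dimensionless diffusion number; the scheme is stable for d <= 0.5.
    """
    n = len(conc)
    new = conc[:]
    for i in range(n):
        left = conc[i - 1] if i > 0 else conc[i]       # no-flux boundary
        right = conc[i + 1] if i < n - 1 else conc[i]  # no-flux boundary
        new[i] = conc[i] + d * (left - 2 * conc[i] + right)
    return new

# A glucose pulse in the central voxel spreads to its neighbours over time,
# while the total amount is conserved by the no-flux boundaries.
voxels = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    voxels = diffuse(voxels)
print(round(sum(voxels), 6))  # 1.0 (mass conserved)
```

The full model couples such transport terms with voxel-local proliferation and death rates (e.g. the glucose-dependent rates 0.70 vs. 0.49 and 0.47 vs. 1.42 reported above) to produce the spatial differences between subpopulations.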

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 70
73 Implications of Agricultural Subsidies Since Green Revolution: A Case Study of Indian Punjab

Authors: Kriti Jain, Sucha Singh Gill

Abstract:

Subsidies have been a major part of agricultural policies around the world, and more extensively since the green revolution in developing countries, for the sake of attaining higher agricultural productivity and achieving food security. But entrenched subsidies lead to distorted incentives and promote inefficiencies in the agricultural sector, threatening the viability of these very subsidies and sustainability of the agricultural production systems, posing a threat to the livelihood of farmers and laborers dependent on it. This paper analyzes the economic and ecological sustainability implications of prolonged input and output subsidies in agriculture by studying the case of Indian Punjab, an agriculturally developed state responsible for ensuring food security in the country when it was facing a major food crisis. The paper focuses specifically on the environmentally unsustainable cropping pattern changes as a result of Minimum Support Price (MSP) and assured procurement and on the resource use efficiency and cost implications of power subsidy for irrigation in Punjab. The study is based on an analysis of both secondary and primary data sources. Using secondary data, a time series analysis was done to capture the changes in Punjab’s cropping pattern, water table depth, fertilizer consumption, and electrification of agriculture. This has been done to examine the role of price and output support adopted to encourage the adoption of green revolution technology in changing the cropping structure of the state, resulting in increased input use intensities (especially groundwater and fertilizers), which harms the ecological balance and decreases factor productivity. Evaluation of electrification of Punjab agriculture helped evaluate the trend in electricity productivity of agriculture and how free power imposed further pressure on the extant agricultural ecosystem. 
Using data collected from a primary survey of 320 farmers in Punjab, the extent of wasteful application of groundwater irrigation, the water productivity of output, electricity usage, and the cost to the exchequer of the irrigation-driven electricity subsidy were estimated for the dominant cropping pattern among farmers. The main findings reveal how, under a subsidy-driven agricultural framework, Punjab has lost area under agro-climatically suitable and staple crops and moved towards a paddy-wheat cropping system that is gnawing away at the state's natural resources: the water table has been declining at a significant rate of 25 cm per year since 1975-76, and excessive and imbalanced fertilizer usage has led to declining soil fertility in the state. With electricity-driven tubewells as the major source of irrigation, within a regime of free electricity and water-intensive crop cultivation, there is wasteful application of both irrigation water and electricity in paddy cultivation, burning an unproductive hole in the exchequer's pocket. Limited access to agricultural extension services and water-conserving technology, along with policy imbalance, keeps farmers locked in an intensive and unsustainable production system. Punjab agriculture is witnessing diminishing returns to factors, which, under a business-as-usual scenario, will soon enter the phase of negative returns.
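The reported decline rate implies a substantial cumulative fall of the water table, which a two-line calculation makes concrete. This is back-of-the-envelope arithmetic only: the constant-rate assumption is the abstract's, the end year below is a hypothetical choice, and no baseline depth is assumed.

```python
# Back-of-the-envelope sketch of the cumulative water-table decline implied
# by the abstract's figure of 25 cm per year since 1975-76. Purely
# illustrative arithmetic; the end year is a hypothetical choice.
DECLINE_CM_PER_YEAR = 25

def cumulative_decline_m(start_year=1976, end_year=2020):
    """Total fall of the water table in metres at a constant 25 cm/year."""
    return (end_year - start_year) * DECLINE_CM_PER_YEAR / 100

print(cumulative_decline_m())  # 44 years * 0.25 m/year = 11.0 m
```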

Keywords: cropping pattern, electrification, subsidy, sustainability

Procedia PDF Downloads 184
72 Synthesis of Carbonyl Iron Particles Modified with Poly (Trimethylsilyloxyethyl Methacrylate) Nano-Grafts

Authors: Martin Cvek, Miroslav Mrlik, Michal Sedlacik, Tomas Plachy

Abstract:

Magnetorheological elastomers (MREs) are multi-phase composite materials containing micron-sized ferromagnetic particles dispersed in an elastomeric matrix. Their properties, such as modulus, damping, magnetostriction, and electrical conductivity, can be controlled by an external magnetic field and/or pressure. These features of MREs are used in the development of damping devices, shock attenuators, artificial muscles, sensors, and active elements of electric circuits. However, imperfections at the particle/matrix interfaces result in lower performance of MREs compared with theoretical values. Moreover, magnetic particles are susceptible to corrosion agents such as acid rain or sea humidity. Modification of the particles is therefore an effective tool for improving MRE performance, owing to enhanced compatibility between particles and matrix as well as improved thermo-oxidative and chemical stability. In this study, carbonyl iron (CI) particles were controllably modified with poly(trimethylsilyloxyethyl methacrylate) (PHEMATMS) nano-grafts to develop magnetic core-shell structures exhibiting proper wetting with various elastomeric matrices, resulting in improved performance in terms of rheological, magneto-piezoresistance, pressure-piezoresistance, or radio-absorbing properties. The desired molecular weight of the PHEMATMS nano-grafts was precisely tailored using surface-initiated atom transfer radical polymerization (ATRP). The CI particles were first functionalized using a 3-aminopropyltriethoxysilane agent, followed by reaction with α-bromoisobutyryl bromide. The ATRP was performed in anisole using ethyl α-bromoisobutyrate as a sacrificial initiator, N,N,N′,N′′,N′′-pentamethyldiethylenetriamine as a ligand, and copper bromide as a catalyst.
To explore the effect of PHEMATMS molecular weight on the final properties, two variants of core-shell structures with different nano-graft lengths were synthesized, with the reaction kinetics tuned through appropriate reactant feed ratios and polymerization times. The PHEMATMS nano-grafts were characterized by nuclear magnetic resonance and gel permeation chromatography, providing information on their monomer conversions, molecular chain lengths, and low polydispersity indices (1.28 and 1.35), as expected from a well-controlled ATRP. The successful modifications were confirmed via Fourier-transform infrared and energy-dispersive spectroscopies, with the expected absorption bands and elemental signatures of the PHEMATMS nano-grafts appearing in the respective spectra. The surface morphology of bare CI particles and their PHEMATMS-grafted analogues was further studied by scanning electron microscopy, and the thicknesses of the grafted polymeric layers were directly observed by transmission electron microscopy. Contact angles, as a measure of particle/matrix compatibility, were investigated using the static sessile drop method. The PHEMATMS nano-grafts enhanced the compatibility of the hydrophilic CI with the low-surface-energy hydrophobic polymer matrix in terms of wettability and dispersibility in the elastomeric matrix. Thus, the occurrence of defects at the particle/matrix interface is reduced, and higher performance of the modified MREs is expected.

Keywords: atom transfer radical polymerization, core-shell, particle modification, wettability

Procedia PDF Downloads 199
71 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation

Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne

Abstract:

In most Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the summers of 2003 and 2018. Important natural stocks are, for the moment, available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. Moreover, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with changes in monthly water availability and in extreme precipitation events. The current research, focusing on the municipality of Val de Bagnes in the canton of Valais, Switzerland, is part of a project led by the Altis company and carried out in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Multiple actors therefore want to anticipate the future needs and availability of water on both the 2050 and 2100 horizons in order to plan modifications to the water supply and distribution networks. For those changes to be salient and efficient, good knowledge of the current water distribution networks is of utmost importance. In the present case, the drinking water network is well documented, but this is not the case for the irrigation network. Since water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is essential for determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, which are coupled with a simulated digital clone to reduce the number of measuring locations. The developed ad-hoc IoT devices are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters.
Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modelled using EPANET, a software package for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As preliminary work, only a part of the irrigation network was modelled and validated by comparison with the measurements. The simulations are carried out by imposing the water consumption at several locations. Validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed on most of the nodes, which is acceptable to the operator of the network and demonstrates the validity of the approach. Future steps will focus on deploying the measurement devices on the whole network and on modelling the network completely. Scenarios of future consumption will then be investigated. Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for technical support; this project is part of the Swiss pilot program 'Adaptation aux changements climatiques'.
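The validation criterion described above, accepting a simulated node pressure when it falls within +/- 15% of the measured value, can be sketched in a few lines. The node names and pressure values below are hypothetical placeholders, not the actual Val de Bagnes measurements or EPANET outputs.

```python
# Minimal sketch of the +/- 15% validation criterion described in the
# abstract: a simulated node pressure is accepted when its relative error
# against the measurement is at most 15%. Node names and pressures (in bar)
# are hypothetical, not the actual field data.
def within_tolerance(simulated: float, measured: float, tol=0.15) -> bool:
    """True when |simulated - measured| / |measured| is at most tol."""
    return abs(simulated - measured) <= tol * abs(measured)

# node: (simulated pressure, measured pressure); node_B deliberately fails
measurements = {
    "node_A": (5.2, 5.0),
    "node_B": (4.1, 5.0),
    "node_C": (6.9, 6.5),
}
validated = {n: within_tolerance(sim, meas) for n, (sim, meas) in measurements.items()}
print(validated)  # node_A and node_C pass; node_B exceeds the 15% band
```

In practice such a check would be run over every instrumented node, and the share of passing nodes ("most of the nodes" in the abstract) reported to the network operator.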

Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network

Procedia PDF Downloads 145
70 Phytochemicals and Photosynthesis of Grape Berry Exocarp and Seed (Vitis vinifera, cv. Alvarinho): Effects of Foliar Kaolin and Irrigation

Authors: Andreia Garrido, Artur Conde, Ana Cunha, Ric De Vos

Abstract:

Climate change predictions point to increases in abiotic stress for crop plants in Portugal, such as pronounced temperature variation and decreased precipitation, which will have a negative impact on grapevine physiology and, consequently, on grape berry and wine quality. Short-term mitigation strategies have therefore been implemented to alleviate the impacts caused by adverse climatic periods. These strategies include the foliar application of kaolin, an inert mineral with radiation-reflecting properties that decreases the stress caused by excessive heat/radiation absorbed by the leaves, as well as smart irrigation strategies to avoid water stress. However, little is known about the influence of these mitigation measures on grape berries, either on the photosynthetic activity or on the photosynthesis-related metabolic profiles of their various tissues. Moreover, the role of fruit photosynthesis in berry quality is poorly understood. The main objective of our work was to assess the effects of kaolin and irrigation treatments on the photosynthetic activity of grape berry tissues (exocarp and seeds) and on their global metabolic profiles, also investigating their possible relationship. We therefore collected berries from field-grown plants of the white grape variety Alvarinho from two distinct microclimates, i.e., from clusters exposed to high light (HL, 150 µmol photons m⁻² s⁻¹) and low light (LL, 50 µmol photons m⁻² s⁻¹), from both kaolin-treated and non-kaolin (control) plants at three fruit developmental stages (green, véraison, and mature). Plant irrigation was applied after harvesting the green berries, which also enabled comparison of véraison and mature berries from irrigated and non-irrigated growth conditions. Photosynthesis was assessed by pulse-amplitude-modulated chlorophyll fluorescence imaging analysis, and the metabolite profile of both tissues was assessed by complementary metabolomics approaches.
Foliar kaolin application resulted in, for instance, increased photosynthetic activity in the exocarp of LL-grown berries at the green developmental stage compared to the non-kaolin control treatment, with a concomitant increase in the levels of several lipid-soluble isoprenoids (chlorophylls, carotenoids, and tocopherols). The exocarp of mature berries grown in the HL microclimate on kaolin-sprayed, non-irrigated plants had higher total sugar content than in all other treatments, suggesting that foliar application of this mineral increases the accumulation of photoassimilates in mature berries. Unbiased liquid chromatography-mass spectrometry-based profiling of semi-polar compounds, followed by ASCA (ANOVA simultaneous component analysis) and ANOVA statistical analysis, indicated that kaolin had no or inconsistent effects on the flavonoid and phenylpropanoid composition of both seed and exocarp at any developmental stage; in contrast, both microclimate and irrigation influenced the levels of several of these compounds depending on berry ripening stage. Overall, our study provides more insight into the effects of mitigation strategies on berry tissue photosynthesis and phytochemistry under contrasting conditions of cluster light microclimate. We hope that this may contribute to developing sustainable vineyard management and to maintaining grape berries and wines of high quality even under increasing abiotic stress.

Keywords: climate change, grape berry tissues, metabolomics, mitigation strategies

Procedia PDF Downloads 122
69 Production, Characterisation, and in vitro Degradation and Biocompatibility of a Solvent-Free Polylactic-Acid/Hydroxyapatite Composite for 3D-Printed Maxillofacial Bone-Regeneration Implants

Authors: Carlos Amnael Orozco-Diaz, Robert David Moorehead, Gwendolen Reilly, Fiona Gilchrist, Cheryl Ann Miller

Abstract:

The current gold standard for maxillofacial reconstruction surgery (MRS) uses auto-grafted cancellous bone as a filler. This study aimed to develop a polylactic-acid/hydroxyapatite (PLA-HA) composite suitable for fused-deposition 3D printing. Functionalization of the polymer through the addition of HA was intended to confer bone-regeneration properties, so that the material could rival the performance of cancellous bone grafts in repairing bone lesions. Such a composite enables the production of MRS implants based on 3D reconstructions from imaging studies – namely computed tomography – for anatomically correct fitting. The present study encompassed in vitro degradation and in vitro biocompatibility profiling of 3D-printed PLA and PLA-HA composites. PLA filament (Verbatim Co.) and Captal S micro-scale hydroxyapatite powder (Plasma Biotal Ltd) were used to produce PLA-HA composites at 5, 10, and 20% HA by weight. These were extruded into 3D-printing filament and processed in a BFB-3000 3D printer (3D Systems Co.) into tensile specimens, which were mechanically tested as per ASTM D638-03. Furthermore, tensile specimens were subjected to accelerated degradation in phosphate-buffered saline at 70°C for 23 days, as per ISO 10993-13:2010. This included monitoring of mass loss (through dry-weighing), crystallinity (through thermogravimetric analysis/differential thermal analysis), molecular weight (through gel-permeation chromatography), and tensile strength. In vitro biocompatibility analysis included cell viability and extracellular matrix deposition, assessed both on flat surfaces and on 3D constructs, both produced through 3D printing. Discs 1 cm in diameter and cubic 3D meshes of 1 cm³ were 3D printed in PLA and PLA-HA composites (n = 6). The samples were seeded with 5000 MG-63 osteosarcoma-like cells, with cell viability tracked over 21 days via resazurin reduction assays.
As evidence of osteogenicity, collagen and calcium deposition were estimated indirectly through Sirius Red and Alizarin Red staining, respectively. Results showed that 3D-printed PLA loses structural integrity as early as the first day of accelerated degradation, significantly faster than the literature suggests. This was reflected in the loss of tensile strength down to untestable brittleness. During degradation, mass loss, molecular weight, and crystallinity behaved similarly to results reported in comparable studies of PLA. All composite versions and pure PLA performed equivalently to tissue-culture plastic (TCP) in supporting the seeded cell population. Significant differences (p = 0.05) were found in collagen deposition at higher HA concentrations, with composite samples performing better than pure PLA and TCP. Additionally, per-cell calcium deposition was significantly lower on the 3D meshes than on discs of the same material (p = 0.05). These results support the idea that 3D-printable PLA-HA composites are a viable resorbable material for artificial bone-regeneration grafts. The degradation data suggest that 3D printing of these materials – as opposed to other manufacturing methods – may result in faster resorption than currently used PLA implants.

Keywords: bone regeneration implants, 3D-printing, in vitro testing, biocompatibility, polymer degradation, polymer-ceramic composites

Procedia PDF Downloads 154
68 Classical Improvisation Facilitating Enhanced Performer-Audience Engagement and a Mutually Developing Impulse Exchange with Concert Audiences

Authors: Pauliina Haustein

Abstract:

Improvisation was part of Western classical concert culture and performers’ skill sets until the early 20th century. Historical accounts, as well as recent studies, indicate that improvisatory elements in the programme may contribute specifically to the audience’s experience of enhanced emotional engagement during the concert. This paper presents findings from the author’s artistic practice research, which explored re-introducing improvisation into Western classical performance practice as a musician (cellist and ensemble partner/leader). In an investigation spanning four concert cycles, the performer-researcher sought to acquire solo and chamber music improvisation techniques (both related to and independent of repertoire), conduct ensemble improvisation rehearsals, design concerts with an improvisatory approach, and reflect on interactions with audiences after each concert. Data were collected through a reflective diary, video recordings, measurement of sound parameters, questionnaires, a focus group, and interviews. The performer’s empirical experiences and the findings from the audience research components were juxtaposed and interrogated to better understand (1) the rehearsal and planning processes that enable improvisatory elements to return to the Western classical concert experience and (2) the emotional experience and type of engagement that occur throughout the concert for both performer and audience members. This informed the development of a concert model in which a programme of solo and chamber music repertoire and improvisations was combined according to historically evidenced performance practice (including free formal solo and ensemble improvisations based on audience suggestions).
Inspired by historical concert culture, where elements of risk-taking, spontaneity, and audience involvement (such as proposing themes for fantasies) were customary, this concert model invited musicians to contribute personally and creatively at all stages, from programme planning through the live concert. The democratic, personal, creative, and empathetic collaboration that emerged as a result appears unique in Western classical contexts, finding resonance instead in jazz ensemble, drama, or interdisciplinary settings. The research identified features of ensemble improvisation, such as empathy, emergence, mutual engagement, and collaborative creativity, that became mirrored in the audience’s responses, generating higher levels of emotional engagement, empathy, inclusivity, and a participatory, co-creative experience. It appears that during improvisatory moments in the concert programme, audience members started feeling more like active participants in a creative, collaborative exchange and became stakeholders in a deeper phenomenon of meaning-making and narrativization. Examining the interactions between all involved during the concert revealed that performer-audience impulse exchange occurred on multiple levels of awareness and seemed to build upon itself, resulting in particularly strong experiences of engagement for both performer and audience. This impact appeared especially meaningful for audience members who were infrequent concertgoers and reported little familiarity with classical music. The study found that re-introducing improvisatory elements into Western classical concert programmes has strong potential to increase audiences’ emotional engagement with the musical performance, to enable audience members to connect more personally with individual performers, and to reach new-to-classical-music audiences.

Keywords: artistic research, audience engagement, audience experience, classical improvisation, ensemble improvisation, emotional engagement, improvisation, improvisatory approach, musical performance, practice research

Procedia PDF Downloads 126
67 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App

Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira

Abstract:

The increasing complexity of and demand for transport services strains transportation systems, especially in urban areas with limited possibilities for building new infrastructure. Meeting this challenge requires changes in travel behavior. One proposed means of inducing such change is multimodal travel apps. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app that includes both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The prospects for mobility management travel apps to stimulate sustainable mobility rest not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding mobility management travel apps grounded in behavioral theories, which should be explored further. This study addresses that gap through a social cognitive theory-based examination. In contrast to conventional methods in technology adoption research, however, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data.
The survey elicited several groups of variables: (1) three groups of users’ motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment), and normative motives (e.g., less travel-related CO2 production); (2) technology-related self-concept (i.e., technophile attitude); and (3) use intention for the travel app. The questionnaire items formed the input for causal discovery to learn the causal structure of the data. Causal discovery from observational data is a critical challenge with applications in many research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app, as well as technology self-concept, as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on the development of gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of users’ motives. These can be explained by the “frustration-regression” principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs: when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret the established associations.
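MMHC combines a constraint-based restriction of candidate parents with a score-based hill climb over directed acyclic graphs. The hill-climbing half can be illustrated with a minimal, standard-library-only Python sketch on synthetic binary data; the variables T, G, H, and I are hypothetical stand-ins for technophilia, gain motives, hedonic motives, and use intention, and the data-generating process is invented for illustration, not taken from the study.

```python
import itertools
import math
import random

random.seed(0)

# Hypothetical stand-ins for the study's constructs (invented data, not the survey):
# T = technophile attitude, G = gain motives, H = hedonic motives, I = use intention.
def sample():
    t = random.random() < 0.5
    g = random.random() < (0.8 if t else 0.2)
    h = random.random() < (0.7 if t else 0.3)
    i = random.random() < (0.9 if (t and g) else 0.15)
    return {"T": int(t), "G": int(g), "H": int(h), "I": int(i)}

data = [sample() for _ in range(2000)]
VARS = ["T", "G", "H", "I"]

def bic_local(child, parents):
    """BIC contribution of one binary variable given its parent set."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        n = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / n)
    # One free parameter per observed parent configuration.
    return ll - 0.5 * len(counts) * math.log(len(data))

def score(dag):
    # dag maps each child to its set of parents; the score decomposes per variable.
    return sum(bic_local(v, sorted(dag[v])) for v in VARS)

def has_cycle(dag):
    state = {}
    def visit(v):
        if state.get(v) == 1:
            return True
        if state.get(v) == 2:
            return False
        state[v] = 1
        if any(visit(p) for p in dag[v]):
            return True
        state[v] = 2
        return False
    return any(visit(v) for v in VARS)

# Greedy hill climbing: toggle single edges while the BIC score improves.
dag = {v: set() for v in VARS}
best = score(dag)
improved = True
while improved:
    improved = False
    for a, b in itertools.permutations(VARS, 2):
        added = a not in dag[b]
        if added:
            dag[b].add(a)
            if has_cycle(dag):
                dag[b].discard(a)
                continue
        else:
            dag[b].discard(a)
        s = score(dag)
        if s > best + 1e-9:
            best, improved = s, True
        elif added:  # undo a non-improving addition
            dag[b].discard(a)
        else:        # undo a non-improving removal
            dag[b].add(a)

edges = sorted((p, c) for c in VARS for p in dag[c])
print("learned edges:", edges)
```

Full MMHC additionally prunes the candidate parent sets with conditional-independence tests before this climb; for real analyses, established implementations (e.g., in the R package bnlearn) should be preferred over a sketch like this.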

Keywords: travel app, behavior change, persuasive technology, travel information, causality

Procedia PDF Downloads 141