Search results for: ideal and actual coping
235 Analysis of the Savings Behaviour of Rice Farmers in Tiaong, Quezon, Philippines
Authors: Angelika Kris D. Dalangin, Cesar B. Quicoy
Abstract:
Rice farming is a major source of livelihood and employment in the Philippines, but it requires a substantial amount of capital. Capital may come from income (farm, non-farm, and off-farm), savings, and credit. However, rice farmers suffer from a lack of capital due to high input costs and low productivity. Capital insufficiency, coupled with low productivity, hinders them from meeting their basic household and production needs. Hence, they resort to borrowing money, mostly from informal lenders who charge very high interest rates. As another source of capital, savings can help rice farmers meet the basic needs of both the household and the farm. However, information is inadequate on whether the farmers save or not, as well as on why they do not depend on savings to augment their lack of capital. Thus, it is worth analyzing how rice farmers save. Using actual savings, defined as the difference between household income and expenditure, the study revealed that about three-fourths (72%) of the farmers interviewed are savers. However, when asked whether they are savers or not, more than half considered themselves non-savers. This gap shows that many farmers think they have no savings at all; hence they continue to borrow money and do not depend on savings to augment their lack of capital. The study also identified the forms of savings, saving motives, and savings utilization among rice farmers. Results revealed that, over the past 12 months, most of the farmers saved cash at home for liquidity purposes, while others deposited cash in banks and/or saved their money in the form of livestock. Among the farmers' most important reasons for saving are daily household expenses, building a house, emergencies, retirement, and the next production cycle. Furthermore, the study assessed the factors affecting the rice farmers' savings behaviour using logistic regression.
Results showed that the significant factors were the presence of non-farm income, per capita net farm income, and per capita household expense. The presence of non-farm income and per capita net farm income positively affect the farmers' savings behaviour; per capita household expenses, on the other hand, have a negative effect. The effect of per capita net farm income and of household expenses is, however, very small, changing the chance that a farmer is a saver only negligibly. Generally, income and expenditure proved to be significant factors affecting the savings behaviour of the rice farmers. However, most farmers could not save regularly due to low farm income and high household and farm expenditures. Thus, it is highly recommended that the government develop programs or implement policies that create more jobs for the farmers and their family members. In addition, programs and policies should be implemented to increase farm productivity and income.
Keywords: agricultural economics, agricultural finance, binary logistic regression, logit, Philippines, Quezon, rice farmers, savings, savings behaviour
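The two quantitative steps in this abstract, classifying households as actual savers (income minus expenditure) and fitting a binary logit on income and expense variables, can be sketched as follows. The data are synthetic and the variable names illustrative; this is not the study's dataset or exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic household data (illustrative only, not the study's dataset).
non_farm = rng.binomial(1, 0.4, n).astype(float)   # presence of non-farm income
farm_income = rng.gamma(2.0, 500.0, n)             # per capita net farm income
expense = rng.gamma(2.0, 450.0, n)                 # per capita household expense

# "Actual saver" = household income exceeds expenditure.
income = farm_income + non_farm * rng.gamma(2.0, 400.0, n) + rng.normal(0, 300, n)
saver = (income - expense > 0).astype(float)

# Binary logit fitted by Newton-Raphson (IRLS) iterations.
X = np.column_stack([np.ones(n), non_farm, farm_income / 1000, expense / 1000])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (saver - p)                      # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X     # observed information
    beta += np.linalg.solve(hess, grad)

print("share of actual savers:", round(saver.mean(), 2))
print("logit coefficients [const, non-farm, farm income, expense]:", beta.round(2))
```

Because the synthetic design mimics the reported findings, the fitted coefficients come out positive for non-farm income and farm income and negative for household expense, matching the sign pattern the abstract describes.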
Procedia PDF Downloads 228
234 Circle of Learning Using High-Fidelity Simulators Promoting a Better Understanding of Resident Physicians on Point-of-Care Ultrasound in Emergency Medicine
Authors: Takamitsu Kodama, Eiji Kawamoto
Abstract:
Introduction: Ultrasound in the emergency room has the advantages of being safe, fast, repeatable, and noninvasive. In particular, focused Point-Of-Care Ultrasound (POCUS) is used daily for prompt and accurate diagnosis and for quickly identifying critical, life-threatening conditions; this is why ultrasound has demonstrated its usefulness in emergency medicine. Its true value has been recognized once again in recent years. All resident physicians working in the emergency room should be able to perform an ultrasound scan to interpret the signs and symptoms of deteriorating patients. However, practical education on ultrasound is still in development. To resolve this issue, we established a new educational program using high-fidelity simulators and evaluated its efficacy. Methods: The educational program includes didactic lectures and skill stations in a half-day course. An instructor gives a lecture on POCUS, such as the Rapid Ultrasound in Shock (RUSH) and/or Focused Assessment Transthoracic Echo (FATE) protocol, at the beginning of the course. Then, attendees receive scanning training with the cooperation of normal simulated patients. Finally, attendees learn how to apply focused POCUS skills in clinical situations using high-fidelity simulators such as SonoSim® (SonoSim, Inc) and SimMan® 3G (Laerdal Medical). Evaluation was conducted through questionnaires given to 19 attendees after two pilot courses. The questionnaires focused on understanding of the course concept and on satisfaction. Results: All attendees answered the questionnaires. With respect to the degree of understanding, 12 attendees (number of valid responses: 13) scored four or more points out of five. The high-fidelity simulators, especially SonoSim®, were highly appreciated by 11 attendees (number of valid responses: 12) for enhancing learning of how to handle ultrasound at an actual practice site.
All attendees would encourage colleagues to take this course, reflecting the high level of satisfaction achieved. Discussion: The newly introduced educational course using high-fidelity simulators realizes a circle of learning that deepens understanding of focused POCUS in gradual stages. SonoSim® can faithfully reproduce scan images with pathologic ultrasound findings and provide experiential learning for a growing number of beginners such as resident physicians. In addition, valuable education can be provided when it is combined with SimMan® 3G. Conclusions: The newly introduced educational course using high-fidelity simulators appears to be effective and helps provide better education than conventional courses for emergency physicians.
Keywords: point-of-care ultrasound, high-fidelity simulators, education, circle of learning
Procedia PDF Downloads 283
233 Promoting 21st Century Skills through Telecollaborative Learning
Authors: Saliha Ozcan
Abstract:
Technology has become an integral part of our lives, aiding individuals in accessing higher-order competencies such as global awareness, creativity, collaborative problem solving, and self-directed learning. Students need to acquire these competencies, often referred to as 21st century skills, in order to adapt to a fast-changing world. Today, an ever-increasing number of schools are exploring how engagement through telecollaboration can support language learning and promote 21st century skill development in classrooms. However, little is known regarding how telecollaboration may influence the way students acquire 21st century skills. In this paper, we aim to shed light on the potential implications of telecollaborative practices for the acquisition of 21st century skills. In our context, telecollaboration, which might be carried out in a variety of settings either synchronously or asynchronously, is understood as the process of communicating and working together with other people or groups from different locations, through online digital tools or offline activities, to co-produce a desired work output. The study presented here describes and analyses the implementation of a telecollaborative project between two high school classes, one in Spain and the other in Sweden. The students in these classes were asked to carry out joint activities, including creating an online platform, aimed at raising awareness of the situation of Syrian refugees. We conducted a qualitative study in order to explore how language, culture, communication, and technology merge into the co-construction of knowledge, as well as how they support the attainment of the 21st century skills needed for network-mediated communication. To this end, we collected a significant amount of audio-visual data, including video recordings of classroom interaction and external Skype meetings.
By analysing this data, we verify whether the initial pedagogical design and intended objectives of the telecollaborative project coincide with what emerges from the actual implementation of the tasks. Our findings indicate that, in addition to planned activities, unplanned classroom interactions may lead to the acquisition of certain 21st century skills, such as collaborative problem solving and self-directed learning. This work is part of a wider project (KONECT, EDU2013-43932-P; Spanish Ministry of Economy and Finance), which aims to explore innovative, cross-competency-based teaching that can address the current gaps between today's educational practices and the needs of informed citizens in tomorrow's interconnected, globalised world.
Keywords: 21st century skills, telecollaboration, language learning, network mediated communication
Procedia PDF Downloads 125
232 Stakeholder Perception in the Role of Short-term Accommodations on the Place Brand and Real Estate Development of Urban Areas: A Case Study of Malate, Manila
Authors: Virgilio Angelo Gelera Gener
Abstract:
This study investigates the role of short-term accommodations in the place brand and real estate development of urban areas. It examines the perceptions of the general public, real estate developers, and city- and barangay-level local government units (LGUs) on how these lodgings affect the place brand and land value of a community. It likewise attempts to identify the personal and institutional variables with the greatest influence on those perceptions, in order to provide a better understanding of these establishments and their relevance within urban localities. Based on several sources, Malate, Manila was identified as the ideal study area of the thesis, prompting the use of mixed methods research as the study's fundamental data-gathering and analytical tool. A survey of 350 locals was conducted, asking questions designed to answer the aforementioned queries. Thereafter, a Pearson chi-square test and multinomial logistic regression (MLR) were utilized to determine the variables affecting their perceptions. There were also Focus Group Discussions (FGDs) with the three most populated Malate barangays, as well as Key Informant Interviews (KIIs) with selected city officials and fifteen real estate company representatives. Survey results showed that although a 1992 Department of Tourism (DOT) Circular regards short-term accommodations as lodgings mainly for travelers, most people actually use them for private/intimate moments. The survey further revealed that short-term accommodations carry a negative place brand among the respondents, though respondents also believe they remain one of society's most important economic players. Statistics from the Pearson chi-square test, in turn, indicate that fourteen of the seventeen variables exhibit a great influence on respondents' perceptions.
MLR findings, meanwhile, show that being born in Malate and being part of a family household were the most significant variables, regardless of socio-economic level and monthly household income. From the city officials, it was revealed that these lodgings are actually the second-highest earners in the City's lodging industry, and that the zoning ordinance treats short-term accommodations like any other lodging enterprise; it is therefore perfectly legal for these establishments to situate themselves near residential areas and/or institutional structures. The barangays, for their part, recognized the economic benefits of short-term accommodations but likewise admitted that they contribute a negative place brand to the community. Lastly, real estate developers are amenable to having their projects built near short-term accommodations, as they hold no negative views of them; they explained that their project sites have always been selected on suitability, liability, and marketability factors only. Overall, these findings merit a recalibration of the zoning ordinance and the DOT Circular, as well as the imposition of regulations on sexually suggestive roadside advertisements. Once relevant measures are refined for proper implementation, they can also pave the way for spatial interventions (like visual buffer corridors) to better address the needs of locals, private groups, and government.
Keywords: estate planning, place brand, real estate development, short-term accommodations
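A Pearson chi-square test of association, as applied to the survey variables above, can be computed directly from a contingency table. The counts below are made-up placeholders, not the Malate survey results; the row/column labels are illustrative.

```python
import numpy as np

# Illustrative contingency table (synthetic counts, not the survey's data):
# rows: born in Malate (yes / no)
# cols: perceived place brand (negative / neutral / positive)
observed = np.array([[80, 40, 30],
                     [60, 70, 70]])

row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
expected = row_tot @ col_tot / observed.sum()   # expected counts under independence

chi2 = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# Critical value of the chi-square distribution with 2 df at alpha = 0.05 is 5.991.
print(f"chi2 = {chi2:.2f}, df = {df}, significant at 5%: {chi2 > 5.991}")
```

The same table-based computation, repeated per variable, is how one checks which of the seventeen candidate variables are associated with perception; the multinomial logistic regression step would then model the three-category perception outcome jointly.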
Procedia PDF Downloads 165
231 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project lies in the steady growth of deviant-behavior statistics among Moscow citizens. In recent years the socioeconomic health, wealth, and life expectancy of Moscow residents have been rising steadily, but crime and drug addiction have grown seriously as well. Another serious problem for Moscow has been the economic stratification of its population: the cost of otherwise identical residential areas differs by a factor of 2.5. The project is aimed at comprehensive research into, and the development of a methodology for evaluating, the main factors and reasons behind the growth of deviant behavior in Moscow. The main project objective is to find the links between urban environment quality and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The conducted research allowed us: 1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; 2) to describe the reasons for increasing crime, drug addiction, alcoholism, and suicide tendencies among the city population; 3) to develop a classification of city districts based on crime rate; 4) to create a statistical database containing the main indicators of Moscow population deviant behavior in 2010-2015, including information on crime levels, alcoholism, drug addiction, and suicides; 5) to present statistical indicators characterizing the dynamics of Moscow population deviant behavior under the conditions of the city's territorial expansion; 6) to analyze the main sociological theories and factors of deviant behavior in order to concretize the types of deviation; 7) to consider the main theoretical statements of urban sociology devoted to the reasons for deviant behavior in megalopolis conditions. To explore the differentiation of deviant-behavior factors, a questionnaire was developed, and a sociological survey involving more than 1,000 people from different districts of the city was conducted.
The sociological survey made it possible to study the socio-economic and psychological factors of deviant behavior. It also included Moscow residents' open-ended answers regarding the most pressing problems in their districts and their reasons for wishing to leave. The results of the survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, large numbers of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective police work, the availability of alcohol and accessibility of drugs, a low level of psychological comfort for Moscow citizens, and a large number of construction projects.
Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 192
230 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; opponents, on the other hand, argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of disclosing the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 as the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias their results. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to estimate the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
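The core of GWR, a separate weighted least-squares fit at each location with a distance-decay kernel, can be sketched on synthetic transactions in which the flood discount varies over space. Coordinates, bandwidth, and coefficient values are all illustrative, and the spatial fixed-effect correction the study combines with GWR is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600

# Synthetic transactions (illustrative; not the Taipei registration data).
coords = rng.uniform(0, 10, (n, 2))
flood = rng.binomial(1, 0.3, n).astype(float)   # inside a flood-potential zone
area = rng.uniform(20, 80, n)                   # floor area

# The "true" flood discount varies smoothly from west to east -- exactly the
# spatial heterogeneity that GWR is meant to recover.
local_effect = -0.10 - 0.05 * coords[:, 0] / 10
log_price = 10 + 0.02 * area + local_effect * flood + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), flood, area])

def gwr_coef(point, bw=2.0):
    """Weighted least squares at one location with a Gaussian distance kernel."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bw) ** 2)
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ log_price)

west = gwr_coef(np.array([1.0, 5.0]))
east = gwr_coef(np.array([9.0, 5.0]))
print("local flood coefficient  west:", round(west[1], 3), " east:", round(east[1], 3))
```

Fitting this at every sample point yields a map of local flood coefficients, which is what makes the "multiply one hedonic price by the number of houses" shortcut improper.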
Procedia PDF Downloads 290
229 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated from highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. At present, however, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and design accuracy. Recent advances in direct 3D printing of soft materials can significantly reduce these repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research is to design and analyse an SPA using both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic wire. Hyperelastic soft materials such as Ecoflex 00-30/00-50 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different air-compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of different hyperelastic materials after pressurization. FEM analysis is carried out in Ansys Workbench with a Yeoh 2nd-order hyperelastic material model. The FEM accounts for large deformation, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in turn connected to the air compressor. A fixed boundary is applied to the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated FEM results are compared with experimental results obtained in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A comparative performance analysis is done between FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
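For reference, the Yeoh 2nd-order strain-energy function used in the FEM analysis has a closed-form Cauchy stress in incompressible uniaxial tension, which is handy for sanity-checking a simulation. The material constants below are illustrative placeholders, not fitted Ecoflex values.

```python
# Yeoh 2nd-order strain energy: W = C10*(I1 - 3) + C20*(I1 - 3)**2
# Illustrative constants in Pa (placeholders, not calibrated to Ecoflex 00-30/00-50).
C10, C20 = 0.02e6, 0.002e6

def uniaxial_cauchy_stress(stretch):
    """Incompressible uniaxial Cauchy stress for the Yeoh 2nd-order model.

    With principal stretches (l, 1/sqrt(l), 1/sqrt(l)):
    I1 = l^2 + 2/l  and  sigma = 2*(l^2 - 1/l) * dW/dI1.
    """
    I1 = stretch**2 + 2.0 / stretch
    dW_dI1 = C10 + 2.0 * C20 * (I1 - 3.0)
    return 2.0 * (stretch**2 - 1.0 / stretch) * dW_dI1

for lam in (1.0, 1.5, 2.0):
    print(f"stretch {lam:.1f}: stress {uniaxial_cauchy_stress(lam) / 1e3:.1f} kPa")
```

Evaluating the FEM's material card against this curve at a few stretches is a quick way to confirm the hyperelastic model was entered correctly before running the full bending simulation.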
Procedia PDF Downloads 90
228 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)
Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham
Abstract:
The archaeological monument of Utica, located in north-eastern Tunisia, was founded in the 8th century BC by the Phoenicians as a port on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by sudden abandonment, disuse, and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD that affected this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). To highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to identify several types of structural damage, including: (1) folded mortar pavements; (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus"; (3) displaced columns; (4) block extrusion in masonry walls; (5) undulations in mosaic pavements; and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all of the evidence of deformation in the Utica monument. The maximum horizontal strain of the ground (e.g., SHmax) inferred from building-oriented damage in Utica shows a NNW-SSE direction under a compressive tectonic regime. As the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, which passes through the Utica monument and extends towards the Ghar El Melh Lake. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles), and geomorphological analyses.
In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects
Procedia PDF Downloads 45
227 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies
Authors: Hussain Bohra, Pradyuman Sharma
Abstract:
This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators show the actual financial health of a company; they help external investors choose the right company for their investment and help external financing agencies extend finance to manufacturing companies more readily. The period of study is 2000 to 2014, covering 813 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods, namely fixed effect and random effect models, are used for the analysis. The likelihood ratio test, Lagrange multiplier test, and Hausman test results prove the suitability of the fixed effect model for the estimation. Return on assets (ROA) is used as the proxy to measure corporate performance; it is the proxy most widely used in the corporate performance literature and reflects the return on firms' long-term investment projects. Ratios such as the current ratio, debt-equity ratio, receivable turnover ratio, and solvency ratio are used as proxies for the balance sheet indicators, with other firm-specific variables, such as firm size and sales, as control variables in the model. The empirical analysis found that all the selected financial ratios have a significant and positive impact on corporate performance. Firm sales and firm size were also found to have a significant and positive impact on corporate performance.
To check the robustness of the results, the sample was divided by ratio levels: firms with high versus low debt-equity ratios, high versus low current ratios, high versus low receivable turnover, and high versus low solvency ratios. We find that the results are robust across all of these subsamples, and the results for the other variables are in line with those for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings have implications for corporate managers, who should monitor these ratios to maintain the minimum expected level of performance; apart from that, they should also maintain adequate sales and total assets to improve corporate performance.
Keywords: balance sheet, corporate performance, current ratio, panel data method
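The fixed-effect (within) estimator chosen via the Hausman test can be sketched in a few lines. The panel below is synthetic, with a deliberately induced correlation between the regressor and the firm effect so that the contrast with pooled OLS is visible; it is not the PROWESS data, and a single regressor stands in for the full set of ratios.

```python
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_years = 100, 15

# Synthetic firm panel (illustrative only, not the PROWESS data).
firm_effect = rng.normal(0, 2, n_firms)[:, None]            # unobserved heterogeneity
current_ratio = rng.uniform(0.5, 3.0, (n_firms, n_years)) + 0.3 * firm_effect
roa = 1.0 + 0.8 * current_ratio + firm_effect + rng.normal(0, 0.5, (n_firms, n_years))

# Within (fixed-effect) transformation: demean each firm's series, which
# sweeps out the time-invariant firm effect before OLS.
x = current_ratio - current_ratio.mean(axis=1, keepdims=True)
y = roa - roa.mean(axis=1, keepdims=True)
beta_fe = (x * y).sum() / (x * x).sum()

# Pooled OLS for contrast: biased upward here because current_ratio is
# correlated with the firm effect in this synthetic design.
xp = current_ratio.ravel() - current_ratio.mean()
yp = roa.ravel() - roa.mean()
beta_pooled = (xp * yp).sum() / (xp * xp).sum()

print(f"within (FE) estimate: {beta_fe:.3f}, pooled OLS: {beta_pooled:.3f}")
```

The gap between the two estimates is exactly the kind of discrepancy the Hausman test formalizes when choosing the fixed-effect model over alternatives.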
Procedia PDF Downloads 264
226 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor
Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi
Abstract:
This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, combined with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating environmental impact. The vapour injection modulating scroll compressor is an advanced technology that allows the compressor capacity to be adjusted to the actual cooling needs of the system. Vapour injection allows the refrigeration cycle to be optimized, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with its low Global Warming Potential (GWP), fits an environmental sustainability perspective, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and the freezing time of the seafood products. The results of the experimental analysis highlighted the benefits of using the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems: the modulating capacity of the compressor made it possible to adapt cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste.
Furthermore, the use of an electronic expansion valve provided greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of the seafood was significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for food industries.
Keywords: scroll compressor, vapor injection, refrigeration system, EER
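The headline metrics, cooling capacity, compressor power, and the resulting COP/EER, follow from a simple energy balance across the injection port. The enthalpy and mass-flow values below are illustrative placeholders, not measured R448A data from the pilot plant.

```python
# Energy-balance sketch of an economizer vapour-injection cycle.
# All enthalpies (kJ/kg) and mass flows (kg/s) are illustrative placeholders,
# not R448A property data or measurements from the plant described above.

m_evap = 0.20            # refrigerant flow through the evaporator
m_inj = 0.04             # vapour injected into the scroll at intermediate pressure

h_evap_in, h_evap_out = 250.0, 400.0   # evaporator inlet / outlet enthalpy
w_low, w_high = 30.0, 25.0             # specific compression work (kJ/kg)
                                       # below / above the injection port

cooling_kw = m_evap * (h_evap_out - h_evap_in)
# Below the injection port only the evaporator flow is compressed;
# above it the injected vapour must be compressed as well.
power_kw = m_evap * w_low + (m_evap + m_inj) * w_high

cop = cooling_kw / power_kw
eer = cop * 3.412        # EER in Btu/h per W is the same ratio times 3.412
print(f"cooling {cooling_kw:.1f} kW, power {power_kw:.1f} kW, "
      f"COP {cop:.2f}, EER {eer:.2f}")
```

Repeating this balance at different thermal loads is how the modulation benefit shows up: the injected flow and compressor speed change, while the ratio of cooling delivered to power absorbed is what the reported EER tracks.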
Procedia PDF Downloads 45
225 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br, or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly written as CH₃NH₃PbI₃₋ₓClₓ. Some research groups are using lead-free (Sn replacing Pb) and mixed-halide perovskites for device fabrication. Both mesoporous and planar structures have been developed; compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier to fabricate. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer that travel through the hole and electron transport layers and the interface in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin-coating method, also called the sequential deposition method. A small amount of a cheap additive, H₂O, was added to PbI₂/DMF to make a homogeneous solution. We prepared four different solutions (without H₂O, and with 1%, 2%, and 3% H₂O). After preparation, overnight stirring at 60 °C is essential for homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated immediately at room temperature. Four PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film.
After obtaining the desired films, the substrates were rinsed with isopropanol to remove excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. There are two key criteria for the selection of an additive: (a) a higher boiling point than the host solvent and (b) good interaction with the precursor materials. We observed that the morphology of the films was improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only with the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We conclude that by adding additives to the precursor solution one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the perovskite film thickness is on the micrometre scale, whereas the charge diffusion length in PbI₂ is on the nanometre scale. Therefore, by controlling the thickness with other deposition methods, better efficiency can be achieved.
Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
224 Public Procurement Development Stages in Georgia
Authors: Giorgi Gaprindashvili
Abstract:
One of the best examples of the evolution of public procurement among post-Soviet countries is the set of reforms carried out in Georgia, which brought the country close to international procurement standards. In Georgia, public procurement legislation came into force in 1998, and the reform has passed through several stages to reach its present form. It should also be noted that countries with economies in transition, including Georgia, implemented their public procurement reforms based on recommendations and support from the World Bank, the United Nations and other international organizations. The first law on public procurement in Georgia was adopted on December 9, 1998. It aimed to regulate the procurement process of budget organizations and to create a transparent and competitive environment in which private companies could access state funds legally. The priorities were identified quite clearly in the wording of the law, but the law could not function at the intended level, for both objective and subjective reasons. High levels of corruption throughout governance can be considered the main obstacle, and naturally this had a direct impact on the procurement process, as well as on transparency and the rational use of state funds. These circumstances drove continued reform of the sector: the first wave of reforms began in 2001. The Public Procurement Agency carried out the reform with the World Bank, with the main purpose of streamlining the procurement legislation and harmonizing it with international treaties and agreements. Also, with the support of the World Bank, various activities were carried out to raise awareness among participants in the procurement system. Further major changes to the legislation followed in May 2005, likewise directed at improving and streamlining the procurement process.
The third wave of the reform began in 2010 and more or less guaranteed the transparency of the procurement process, which later became the basis for the rational spending of state funds. The reform completely changed the procurement procedures. It introduced a new electronic tendering system that improved the transparency of the process; this in turn became the basis for the further development of a competitive environment, a prerequisite for rational state spending. The increased number of supplier organizations participating in the procurement process reduced actual costs relative to estimated costs by 20% up to 40%, quite a large saving for the procuring organizations, which allows them to use the freed-up funds for their other needs. From an assessment of Georgia's public procurement reforms it can be concluded that proper regulation of the sector and relevant policy can lead to rational and transparent spending of the state budget by the country's institutions. The business sector also gains the opportunity to work under competitive market conditions and to carry out preliminary analysis, a prerequisite for future strategy and development.
Keywords: public administration, public procurement, reforms, transparency
Procedia PDF Downloads 366
223 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software
Authors: Kamila Widziewicz-Rzonca
Abstract:
The presence of PM-bound water as an integral chemical component of suspended aerosol particles (PM) has become one of the hottest issues in recent years. UN climate summits on climate change (COP24) indicate that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle-liquid (water) interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation and precipitation intensity. Since water-soluble particles such as nitrates, sulfates or sea salt easily become cloud condensation nuclei, they affect the climate, for example, by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and the medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work does not aim to review the existing literature on the topic; rather, it is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the study of PM-bound water. To assess the major scientific areas related to PM-bound water, and to define clearly which of these are the most active, we queried the Web of Science databases from 1996 to 2018. We answer the questions: which authors, countries, institutions and aerosol journals have most influenced PM-bound water research? The obtained results indicate that the paper with the greatest citation burst was Tang I. N. and Munkelwitz H. R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this specific field was published in Atmospheric Chemistry and Physics.
The absolute leader in the number of publications among all research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. A very small number of studies on PM-bound water conduct a quantitative measurement of its presence in ambient particles or of its origin. Most articles instead point to PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the current literature on particulate-bound water.
Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain
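The kind of frequency analysis this abstract describes — counting publications per institution and per journal over a bibliographic corpus — can be sketched in a few lines. The records below are invented stand-ins for a Web of Science export, not data from the study:

```python
from collections import Counter

# Hypothetical minimal records in the spirit of a Web of Science export;
# field names and values are illustrative only.
records = [
    {"year": 2005, "institution": "NASA", "journal": "Atmos. Chem. Phys."},
    {"year": 2011, "institution": "NASA", "journal": "Atmos. Environ."},
    {"year": 2016, "institution": "MIT", "journal": "Atmos. Chem. Phys."},
]

by_institution = Counter(r["institution"] for r in records)
by_journal = Counter(r["journal"] for r in records)

print(by_institution.most_common(1))  # most productive institution
print(by_journal.most_common(1))      # most active journal
```

CiteSpace layers burst detection and co-citation mapping on top of such counts, but the underlying tallies have this shape.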
Procedia PDF Downloads 123
222 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, given that it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as end-of-the-line detection because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization and detection, on a single device.
We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
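The coupling between eluent flow rate and droplet generation mentioned above follows a simple relation: in segmented flow, the average droplet volume is the dispersed-phase flow rate divided by the droplet generation frequency. A minimal sketch, with illustrative values that are not measurements from the study:

```python
# Hedged sketch: average droplet volume V = Q_eluent / f, where Q_eluent is
# the dispersed-phase (eluent) flow rate and f the droplet generation
# frequency. All numbers below are assumed for illustration.
def droplet_volume_nl(q_eluent_ul_per_min: float, freq_hz: float) -> float:
    q_nl_per_s = q_eluent_ul_per_min * 1000.0 / 60.0  # convert µL/min -> nL/s
    return q_nl_per_s / freq_hz

# E.g., 1 µL/min of eluent segmented at 10 droplets/s:
print(droplet_volume_nl(1.0, 10.0))  # ~1.67 nL per droplet
```

This is why the flow-rate ratio of eluent to oil must be tuned jointly: it sets both the droplet size and the sampling rate of the chromatographic peak.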
Procedia PDF Downloads 245
221 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot
Authors: Felix Liedel
Abstract:
The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, creative development processes or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study, therefore, aims to reformulate the concept of symbolic interaction in the tradition of George Herbert Mead as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialog with the specific conditions of digital media, the special dispositive situation of chatbots and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: In face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function equally as projection surfaces for their communication partners but also as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content we have learned in advance. 
Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with the virtual chatbot, we enter into a dialog with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we, therefore, interpret socially influenced signs and behave towards them in an individual way according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: In conversation with digital chatbots, we bring our own impulses, which are brought into permanent negotiation with a generalized social attitude by the chatbot. This also leads to numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films. In this dissertation, I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.
Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity
Procedia PDF Downloads 52
220 Living by the Maramataka: Mahi Maramataka, Indigenous Environmental Knowledge Systems and Wellbeing
Authors: Ayla Hoeta
Abstract:
The focus of this research is mahi Maramataka, ‘the practices of Maramataka’, as a traditional and evolving knowledge system and its connection to whaanau oranga (wellbeing) and healing. Centering kaupapa Maaori methods and knowledge, this research will explore how Maramataka can be used as a tool for oranga and healing, enabling whaanau to engage with different environments in alignment with the Maramataka flow and with optimal times based on the environment. Maramataka is an ancestral lunar environmental knowledge system rooted within korero tuku iho, Maaori creation stories, dating back to the beginning of time. The significance of Maramataka lies in this ancient environmental knowledge and the connecting energy flow of mauri (life force) between whenua (land), moana (ocean) and rangi (sky). The lunar component of the Maramataka is widely understood and highlights the different phases of the moon. Each moon phase is named with references to puurakau stories and environmental and ecological information. Marama, meaning moon, and taka, meaning cycle, together describe a lunar and environmental calendar. There are lunar phases that are optimal for specific activities, such as the Tangaroa phase, a time of abundance and productivity and of ocean-based activities like fishing. Other periods in the Maramataka, such as Rakaunui (full moon), connect the highest tides and highest energy of the lunar cycle, ideal for social and physical activity and particularly for planting. Other phases like Tamatea are unpredictable, whereas Whiro (new moon) is reflective, deep and cautious during the darkest nights. Whaanau, particularly in urban settings, have become increasingly disconnected from the natural environment; the Maramataka has become a tool they can connect to, one that offers an alternative to dominant perspectives of health through an approach that is uniquely Maaori.
In doing so, this research will raise awareness of oranga, or lack of oranga, and of the lived experience of whaanau in Tamaki Makaurau, Aotearoa, on a journey to the revival of Maramataka and healing. The research engages Hautu Waka as a methodology, drawing on ancient kaupapa Maaori practices based on wayfinding and attunement with the natural environment. Using ancient ways of being, knowing, seeing and doing, the Hautu Waka centres kaupapa Maaori perspectives in process design, reflection and evaluation. The Hautu Waka consists of five interweaving phases: 1) Te Rapunga (the search) in infinite potential; 2) Te Kitenga (the seeing), observation of and attunement to tohu; 3) Te Whainga (the pursuit), deeply exploring key tohu; 4) Te Whiwhinga (the acquiring) of knowledge and clearer ideas; and 5) Te Rawenga (the celebration), reflection on and acknowledgement of the journey and achievements. This research is an expansion of my creative practices into whaanau-centred inquiry, to understand the benefits of Maramataka and how it can be embodied and practised in a modern-day context to support oranga and healing. The goal is to work with kaupapa Maaori methodologies as an authentic Maaori practitioner and researcher, and to allow an authentic indigenous approach to the exploration of Maramataka through a kaupapa Maaori lens.
Keywords: maramataka (Maaori calendar), tangata (people), taiao (environment), whenua (land), whaanau (family), hautu waka (navigation framework)
Procedia PDF Downloads 72
219 Numerical Analysis and Parametric Study of Granular Anchor Pile on Expansive Soil Using Finite Element Method: Case of Addis Ababa, Bole Sub-City
Authors: Abdurahman Anwar Shfa
Abstract:
Addis Ababa is among the fastest-growing urban areas in the country. There are many new constructions of public and private condominiums and new low-rise residential buildings. However, the wide range of heaving problems caused by expansive soil in the city has become a major difficulty for the construction sector, especially for low-rise buildings, causing problems such as distortion and cracking of floor slabs, cracks in grade beams and walls, jammed or misaligned doors and windows, and failure of blocks supporting grade beams. Hence, an attractive and economical design solution is required for this type of problem. This research therefore examines a recent innovation, the granular anchor pile system, for reducing the heave effect of expansive soil. The objective is to numerically investigate the behavior of the granular anchor pile under heave using the finite element program PLAXIS 3D, studying the effect of parameters such as pile length, pile diameter and pile grouping, with a prescribed displacement of 10% of the pile diameter applied at the center of the granular anchor pile. An additional objective is to examine the suitability of the granular anchor pile as an alternative solution to heave problems in expansive soils, mostly for low-rise buildings in Addis Ababa City, especially in Bole Sub-City, considering factors such as the local availability of construction materials, construction economy, installation conditions, environmental benefit, time consumption and pile performance. The results show that the performance of the pile improves as the pile length increases, due to the increase in the self-weight of the pile and the friction mobilized at the pile-soil interface.
Additionally, the uplift capacity of the pile decreases with increasing pile diameter and spacing between the piles in a group when these reduce the number of piles in the group. However, a few cases show that the uplift capacity increases with pile diameter for a constant number of piles in the group, with increased spacing between the piles, and for single piles. This is due to the increased self-weight and surface area of the pile group, and to the reduced stress overlap in the soil between piles, respectively. According to the suitability analysis, the granular anchor pile is sensible and practical to apply to the actual expansive-soil problems of low-rise buildings constructed in the country, performing well on all the considered criteria.
Keywords: expansive soil, granular anchor pile, PLAXIS, suitability analysis
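The trend reported above — uplift capacity growing with pile length through self-weight and mobilized skin friction — can be captured by a back-of-envelope model. This is a hedged sketch, not the PLAXIS formulation; the unit weight and interface shear strength below are assumed values:

```python
import math

# Back-of-envelope uplift capacity following the abstract's reasoning:
# capacity ~ pile self-weight + skin friction mobilized on the shaft.
# gamma (unit weight of granular fill, kN/m^3) and tau (interface shear
# strength, kPa) are illustrative assumptions, not study parameters.
def uplift_capacity_kn(d_m: float, L_m: float,
                       gamma_kn_m3: float = 20.0, tau_kpa: float = 30.0) -> float:
    self_weight = gamma_kn_m3 * math.pi * d_m ** 2 / 4.0 * L_m  # kN
    skin_friction = tau_kpa * math.pi * d_m * L_m               # kPa * m^2 = kN
    return self_weight + skin_friction

# Both terms scale linearly with length, so capacity rises with L,
# consistent with the reported trend:
print(uplift_capacity_kn(0.3, 2.0) < uplift_capacity_kn(0.3, 4.0))  # True
```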
Procedia PDF Downloads 33
218 Flexible Programmable Circuit Board Electromagnetic 1-D Scanning Micro-Mirror Laser Rangefinder by Active Triangulation
Authors: Vixen Joshua Tan, Siyuan He
Abstract:
Scanners have been implemented within single-point laser rangefinders to determine ranges within an environment by sweeping the laser spot across the surface of interest. The research motivation is to exploit a smaller and cheaper alternative scanning component for the emitting portion of current laser rangefinder designs. This research implements an FPCB (flexible programmable circuit board) electromagnetic 1-dimensional scanning micro-mirror as the scanning component for laser rangefinding by means of triangulation. The prototype uses a laser module, a micro-mirror and a receiver. The laser module is infrared (850 nm) with a power output of 4.5 mW. The receiver consists of a 50 mm convex lens and a 45 mm 1-dimensional PSD (position sensitive detector) placed at the focal length of the lens, at 50 mm. The scanning component is an elliptical micro-mirror attached to an FPCB structure. The FPCB structure has two miniature magnets placed symmetrically underneath it on either side, which are electromagnetically actuated by small solenoids, causing the FPCB to rotate mechanically about its torsion beams. The laser module projects a laser spot onto the micro-mirror surface, producing a scanning motion of the laser spot during the rotational actuation of the FPCB. The receiver is placed at a fixed distance from the micro-mirror scanner and is oriented to capture the scanning motion of the laser spot during operation. The elliptical aperture dimensions of the micro-mirror are 8 mm by 5.5 mm. The micro-mirror is supported by an FPCB with two torsion beams with dimensions of 4 mm by 0.5 mm. The overall length of the FPCB is 23 mm. The voltage supplied to the solenoids is sinusoidal, with amplitudes of 3.5 V and 4.5 V to achieve optical scanning angles of +/- 10 and +/- 17 degrees, respectively. The scanning frequency during the experiments was 5 Hz.
For an optical angle of +/- 10 degrees, the prototype is capable of detecting objects within ranges of 0.3-1.2 meters with an error of less than 15%. For an optical angle of +/- 17 degrees, the measuring range was 0.3-0.7 meters with an error of 16% or less. The discrepancy between the experimental and actual data is possibly caused by misalignment of the components during the experiments. Furthermore, the power of the laser spot collected by the receiver gradually decreased as the object was placed further from the sensor. A higher-powered laser will be tested to potentially measure further distances more accurately. Moreover, a wide-angle lens will be used in future experiments with higher scanning angles. Modulation of the current and future higher-powered lasers will be implemented to enable operation of the laser rangefinder prototype without the use of safety goggles.
Keywords: FPCB electromagnetic 1-D scanning micro-mirror, laser rangefinder, position sensitive detector, PSD, triangulation
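The active-triangulation geometry behind these range measurements can be sketched as follows: with a baseline b between the emitter and the receiver lens, focal length f, and spot position x on the 1-D PSD, the range is approximately z = b·f / x. The focal length f = 50 mm is from the text; the baseline value is an assumption for illustration:

```python
# Hedged sketch of the triangulation relation z = b * f / x.
# f = 50 mm is given in the abstract; b = 100 mm is an assumed baseline.
def range_m(x_on_psd_mm: float, baseline_mm: float = 100.0, f_mm: float = 50.0) -> float:
    return (baseline_mm * f_mm / x_on_psd_mm) / 1000.0  # mm -> m

# Larger spot displacement on the PSD corresponds to a closer object:
print(range_m(16.7))  # ~0.3 m
print(range_m(4.2))   # ~1.2 m
```

The inverse relation between x and z also explains why accuracy degrades at longer ranges: the spot displacement per unit of range shrinks, so fixed PSD noise translates into a larger range error.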
Procedia PDF Downloads 135
217 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid
Authors: Benjamin Blat Belmonte, Stephan Rinderknecht
Abstract:
The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. 
This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market
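The core of the optimization described above — scheduling fleet charging against market prices — can be illustrated with a deliberately simplified stand-in for the paper's MILP. The price curve and fleet requirement below are invented; for this toy formulation a greedy choice of the cheapest hours coincides with the MILP optimum:

```python
# Toy stand-in for price-responsive fleet charging: pick the cheapest hours
# of an illustrative day-ahead price curve to cover the fleet's energy need.
# Prices and the required charging duration are assumptions, not study data.
def schedule_charging(prices_eur_mwh: list, hours_needed: int):
    # Sort hour indices by price; for this simple formulation the greedy
    # selection of the cheapest hours matches the optimal MILP solution.
    order = sorted(range(len(prices_eur_mwh)), key=lambda h: prices_eur_mwh[h])
    chosen = sorted(order[:hours_needed])
    cost = sum(prices_eur_mwh[h] for h in chosen)
    return chosen, cost

prices = [90, 85, 40, 35, 38, 60, 110, 130]  # EUR/MWh over 8 hours, illustrative
hours, cost = schedule_charging(prices, hours_needed=3)
print(hours, cost)  # charging lands in the three cheapest hours
```

The actual study adds the constraints that make the MILP necessary: bus timetables, battery limits, simultaneous participation in balancing markets, and grid-friendly power profiles.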
Procedia PDF Downloads 74
216 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, entered by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers of the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform fifty quadrillion (5 x 10¹⁶) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it; and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists that fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video that we will be watching as we receive and read our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107
215 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (not listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and planned activities. Through temporal and frequency analysis, conclusions were drawn that offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Often investigators rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: first, how effective are freely available web sources of information at determining the actual identity of callers? Second, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author's personal phone activity spanning a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes could further paint a picture of the target's intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources.
The suspected terrorist’s inner circle was recognized, including relatives and potential collaborators as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without resorting to more intrusive and time-consuming methodologies, though it may be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building upon the skills of future intelligence analysts through phone record analysis via simulations; this hands-on case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
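The frequency analysis described in this abstract can be sketched as a simple tally of call detail records; the records, numbers, and field layout below are illustrative assumptions, not the author’s actual data.

```python
from collections import Counter

def rank_contacts(call_records):
    """Rank phone numbers by contact frequency to surface a target's inner circle.

    call_records: list of (number, direction) tuples, e.g. ("555-0117", "out").
    Returns (number, call_count) pairs, most frequent first.
    """
    counts = Counter(number for number, _direction in call_records)
    return counts.most_common()

# Illustrative call detail records (hypothetical, not real data)
records = [
    ("555-0117", "out"), ("555-0117", "in"), ("555-0199", "out"),
    ("555-0117", "out"), ("555-0142", "in"), ("555-0199", "in"),
]
ranking = rank_contacts(records)  # most frequent contact first
```

A real analysis would pair such a tally with temporal analysis (call times and clustering) before attempting open-source identification of each number.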
Procedia PDF Downloads 142
214 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior – thermal and electricity, indoor environment, inhabitants’ comfort, occupancy, occupants’ behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings’ features are implemented according to the buildings’ thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features regarding each end-use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
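As a minimal sketch of the performance-gap metric discussed here (the function and consumption figures are illustrative assumptions, not values from the study), the gap is commonly expressed as the relative error of simulation against measurement:

```python
def performance_gap(simulated_kwh: float, measured_kwh: float) -> float:
    """Relative energy performance gap: positive when the model over-predicts
    the measured consumption, negative when it under-predicts."""
    return (simulated_kwh - measured_kwh) / measured_kwh

# Illustrative annual heating consumption values (kWh); not study data
gap = performance_gap(simulated_kwh=120_000, measured_kwh=150_000)
# gap == -0.2: the model under-predicts measured use by 20%
```

Calibration then iteratively replaces model inputs (setpoints, occupancy, appliance gains) with field data until this gap falls within an acceptable tolerance.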
Procedia PDF Downloads 159
213 Examining Kokugaku as a Pattern of Defining Identity in Global Comparison
Authors: Mária Ildikó Farkas
Abstract:
Kokugaku of the Edo period can be seen as a key factor in defining cultural (and national) identity in the 18th and early 19th centuries based on Japanese cultural heritage. Kokugaku focused on Japanese classics, on exploring, studying and reviving (or even inventing) ancient Japanese language, literature, myths, history and also political ideology. ‘Japanese culture’ as such was distinguished from Chinese (and all other) cultures, and ‘Japanese identity’ was thus defined. Meiji scholars used kokugaku conceptions of Japan to construct a modern national identity based on premodern and culturalist conceptions of community. The Japanese cultural movement of the 18th-19th centuries (kokugaku) of defining cultural and national identity before modernization can be compared not with the development of Western Europe (where national identity was strongly attached to modern nation states) or other parts of Asia (where these identities emerged after Western colonization), but rather with the ‘national awakening’ movements of the peoples of East Central Europe, a comparison which has not yet been dealt with in the secondary literature. The role of a common language, culture, history and myths in the process of defining cultural identity – following mainly Miroslav Hroch’s comparative and interdisciplinary theory of national development – can be examined in comparison with the movements of defining identity of the peoples of East Central Europe (18th-19th c.). In the shadow of a cultural and/or political ‘monolith’ (China for Japan and Germany for Central Europe), before modernity, ethnic groups or communities started to evolve their own identities with cultural movements focusing on their own language and culture, thus creating their cultural identity and, in the end, a new sense of community, the nation.
Comparing actual texts (‘narratives’) of the kokugaku scholars and Central European writers of the nation-building period (18th and early 19th centuries) can reveal the similarities of the discourses of deliberate searches for identity. Similar motifs of argument can be identified in these narratives: ‘language’ as the primary bearer of collective identity, the role of language in culture, and ‘culture’ as the main common attribute of the community; and similar aspirations to explore, search and develop the native language, ‘genuine’ culture, and ‘original’ traditions. This comparative research, offering ‘development patterns’ for interpretation, can help us understand processes that may be ambiguously considered ‘backward’ or even ‘deleterious’ (e.g. cultural nationalism) or just ‘unique’. ‘Cultural identity’ played a very important role in the formation of national identity during modernization, especially in the case of non-Western communities, who had to face the danger of losing their identities in the course of the ‘Westernization’ accompanying modernization.
Keywords: cultural identity, Japanese modernization, kokugaku, national awakening
Procedia PDF Downloads 271
212 Evaluation of the Suitability of a Microcapsule-Based System for the Manufacturing of Self-Healing Low-Density Polyethylene
Authors: Małgorzata Golonka, Jadwiga Laska
Abstract:
Among self-healing materials, the least explored group is thermoplastic polymers. These polymers are used not only to produce packaging with a relatively short life but also to obtain coatings, insulation, casings, or parts of machines and devices. Due to its exceptional resistance to weather conditions, hydrophobicity, sufficient mechanical strength, and ease of extrusion, polyethylene is used in the production of polymer pipelines and as an insulating layer for steel pipelines. Polyethylene or PE-coated steel pipelines can be used in difficult conditions such as underground or underwater installations. Both installation and use under such conditions are associated with high stresses and, consequently, the formation of microdamage in the structure of the material, loss of its integrity, and loss of final applicability. The ideal solution would be to include a self-healing system in the polymer material. In the presented study, the behavior of resin-coated microcapsules in the extrusion process of low-density polyethylene was examined. Microcapsules are a convenient element of the repair system because they can be filled with appropriate reactive substances to ensure the repair process, but the main problem is their durability under processing conditions. Rapeseed oil, which has a relatively high boiling point of 240 °C and low volatility, was used as the core material simulating the reactive agents. The capsule shell, which is a key element responsible for its mechanical strength, was obtained by in situ polymerising urea-formaldehyde, melamine-urea-formaldehyde or melamine-formaldehyde resin on the surface of oil droplets dispersed in water. The strength of the capsules was compared based on the shell material; in addition, microcapsules with single- and multilayer shells were obtained using different combinations of the chemical composition of the resins.
For example, the first layer, of appropriate tightness and stiffness, was made of melamine-urea-formaldehyde resin, and the second layer was a melamine-formaldehyde reinforcing layer. The size, shape, distribution of capsule diameters and shell thickness were determined using digital optical microscopy and electron microscopy. The efficiency of encapsulation (i.e., the presence of rapeseed oil as the core) and the tightness of the shell were determined by FTIR spectroscopic examination. The mechanical strength and distribution of microcapsules in polyethylene were tested by extruding samples of crushed low-density polyethylene mixed with microcapsules at ratios of 1 and 2.5% by weight. The extrusion process was carried out in a mini extruder at a temperature of 150 °C. The capsules obtained had a diameter range of 70-200 µm. FTIR analysis confirmed the presence of rapeseed oil in both single- and multilayer shell microcapsules. Microscopic observations of cross sections of the extrudates confirmed the presence of both intact and cracked microcapsules. However, the melamine-formaldehyde resin shells showed higher processing strength than the melamine-urea-formaldehyde and urea-formaldehyde coatings. Capsules with a urea-formaldehyde shell work very well in resin coating systems and cement composites, i.e., in pressureless processing and moulding conditions. The addition of another layer of melamine-formaldehyde coating to both the melamine-urea-formaldehyde and melamine-formaldehyde resin layers significantly increased the number of microcapsules undamaged during the extrusion process. The properties of multilayer coatings were also determined and compared with each other using computer modelling.
Keywords: self-healing polymers, polyethylene, microcapsules, extrusion
Procedia PDF Downloads 28
211 The Unique Electrical and Magnetic Properties of Thorium Di-Iodide Indicate the Arrival of Its Superconducting State
Authors: Dong Zhao
Abstract:
Even though the recent claim of room-temperature superconductivity by LK-99 was confirmed to be an unsuccessful attempt, this work reawakened the century-long striving to obtain applicable superconductors with a Tc of room temperature or higher under ambient pressure. One of these efforts focused on exploring thorium salts. This is because certain thorium compounds revealed an unusual property of having both high electrical conductivity and diamagnetism, the so-called “coexistence of high electrical conductivity and diamagnetism.” It is well known that this property of the coexistence of high electrical conductivity and diamagnetism is held by superconductors because of electron pairing. Consequently, the likelihood for these thorium compounds to have superconducting properties becomes great. Surprisingly, however, these thorium salts possess this property at room temperature and atmospheric pressure. This gives rise to solid evidence for these thorium compounds to be room-temperature superconductors without a need for external pressure. Among the thorium compound superconductors claimed in that work, thorium di-iodide (ThI₂) is a unique one and has received comprehensive discussion. ThI₂ was synthesized and structurally analyzed by the single-crystal diffraction method in the 1960s. Its special property of coexistence of high electrical conductivity and diamagnetism was revealed. Because of this unique property, a special molecular configuration was sketched. Rather than the ordinary oxidation state of +2 for the cation, the thorium’s oxidation state in ThI₂ is +4. According to the experimental results, ThI₂’s actual molecular configuration was determined as an unusual one of [Th⁴⁺(e⁻)₂](I⁻)₂. This means that the ThI₂ salt’s cation is composed of a [Th⁴⁺(e⁻)₂]²⁺ cation core. In other words, the cation of ThI₂ is constructed by combining an oxidation state of +4 of the thorium atom and a pair of electrons, or an electron lone pair, located on the thorium atom.
This combination of the thorium atom and the electron lone pair leads to an oxidation state of +2 for the [Th⁴⁺(e⁻)₂]²⁺ cation core. This special construction of the thorium cation is very distinctive and is believed to be the factor that grants ThI₂ its room-temperature superconductivity. Indeed, the key for ThI₂ to become a room-temperature superconductor is this characteristic electron lone pair residing on the thorium atom, along with the formation of a network constructed by the thorium atoms. This network is specialized in a way that allows the electron lone pairs to hop over it and, thus, to generate the supercurrent. This work will discuss, in detail, the special electrical and magnetic properties of ThI₂ as well as its structural features at ambient conditions. It will also explore how the electron pairing, in combination with the structurally specialized network, works to bring ThI₂ into a superconducting state. The experimental results strongly indicate that ThI₂ should be a superconductor, at least at room temperature and under atmospheric pressure.
Keywords: co-existence of high electrical conductivity and diamagnetism, electron lone pair, room temperature superconductor, special molecular configuration of thorium di-iodide ThI₂
Procedia PDF Downloads 57
210 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems
Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair
Abstract:
Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing the radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700 °C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced-activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, to utilize this promising new reactor design, effective corrosion-resistant coatings on RAFM steels are a pressing need. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods to fabricate thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can be used to produce ultra-fine molten splats and to allow fine adjustment of coating chemistry. Thin, dense ceramic coatings with a chemistry chosen for superior chemical stability in molten Pb-Li, low-activation properties, and good radiation tolerance are ideal for the corrosion protection of RAFM steels.
A key challenge is to accommodate the coating’s CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples and comparisons with bare RAFM reference samples will be presented, and conclusions will be drawn assessing whether the new ceramic coatings are viable corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors.
Keywords: breeding blanket, corrosion protection, coating, plasma spray
Procedia PDF Downloads 307
209 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China
Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini
Abstract:
Objective: It is increasingly recognized that patients’ rational and meaningful engagement in healthcare could make important contributions to their health care and safety management. However, recent evidence indicated that patients’ actual roles in healthcare did not match their desired roles, and many patients reported a less active role than desired, which suggested that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze the factors influencing patient engagement and to explore the influencing mechanism, which is expected to contribute to strategy development for patient engagement in healthcare. Methods: On the basis of a literature and theory review, the research framework was developed. According to the research framework, a cross-sectional survey was employed using the behavior and willingness of patient engagement in healthcare questionnaire, the Chinese version of the All Aspects of Health Literacy Scale, the Facilitation of Patient Involvement Scale, the Wake Forest Physician Trust Scale, and other influencing-factor-related scales. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The results of the cross-sectional survey indicated that the mean score for patient engagement behavior was 4.146 ± 0.496, and the mean score for willingness was 4.387 ± 0.459. The level of patient engagement behavior was inferior to the willingness to be involved in healthcare (t = 14.928, P < 0.01). The influencing mechanism model of patient engagement in healthcare was constructed by path analysis. The path analysis revealed that patient attitude toward engagement, patients’ perception of facilitation of patient engagement, and health literacy directly predicted patients’ willingness of engagement, with standardized path coefficients of 0.341, 0.199, and 0.291, respectively.
Patients’ trust in physician and the willingness of engagement directly predicted patient engagement, with standardized path coefficients of 0.211 and 0.641, respectively. Patient attitude toward engagement, patients’ perception of facilitation, and health literacy indirectly predicted patient engagement, with standardized path coefficients of 0.219, 0.128, and 0.187, respectively. Conclusions: Patients’ engagement behavior did not match their willingness to be involved in healthcare. The influencing mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients’ perception of facilitation of engagement, and health literacy exerted an indirect positive influence on patient engagement through the willingness of engagement. Patients’ trust in physician and the willingness of engagement had a direct positive influence on patient engagement. Patient attitude toward engagement, patients’ perception of physician facilitation of engagement, and health literacy were the factors influencing the patients’ willingness of engagement. The results of this study provide valuable evidence for guiding the development of strategies to promote patients’ rational and meaningful engagement in healthcare.
Keywords: healthcare, patient engagement, influencing factor, the mechanism
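The indirect effects reported in this abstract follow from multiplying the path coefficients along each mediated path (the standard product-of-coefficients rule); a minimal check using the coefficients from the abstract:

```python
def indirect_effect(path_to_mediator: float, mediator_to_outcome: float) -> float:
    """Indirect effect along a simple mediation path: a * b."""
    return path_to_mediator * mediator_to_outcome

# Willingness -> engagement path coefficient, as reported in the abstract
willingness_to_engagement = 0.641

# Indirect effects on engagement, mediated by willingness
attitude = indirect_effect(0.341, willingness_to_engagement)      # reported as 0.219
facilitation = indirect_effect(0.199, willingness_to_engagement)  # reported as 0.128
literacy = indirect_effect(0.291, willingness_to_engagement)      # reported as 0.187
```

Each product reproduces the reported indirect coefficient to three decimal places, confirming the internal consistency of the path model.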
Procedia PDF Downloads 156
208 The Role of the New Silk Road (One Belt, One Road Initiative) in Connecting the Free Zones of Iran and Turkey: A Case Study of the Free Zones of Sarakhs and Maku to Anatolia and Europe
Authors: Morteza Ghourchi, Meraj Jafari, Atena Soheilazizi
Abstract:
Today, with the globalization of communications and the connection of countries within the framework of the global economy, free zones play a vital role as engines of global economic development. Corridors, in turn, have a fundamental role in physically linking countries and free zones with each other. One of these corridors is the New Silk Road corridor (One Belt, One Road initiative), which is being built by China to connect with European countries. Iran and Turkey are among the countries that play an important role in linking China to Europe through this corridor. The New Silk Road corridor, by connecting Iran’s free zones (Sarakhs and Maku) and Turkey’s free zones (Anatolia and Europe), can provide the best opportunity for expanding economic cooperation and regional development between Iran and Turkey. It can also provide economic links between Iran and Turkey and the Central Asian countries, especially the port of Khorgos. On the other hand, it can expand Iran-Turkey economic relations with Europe more than ever before within a vast economic network. The research method was descriptive-analytical, using library resources, documents of Iranian free zones, and the Internet. In an interview with Fars News Agency, Mohammad Reza Kalaei, CEO of Sarakhs Free Zone, said that the main goal of Sarakhs Special Economic Zone is to connect Iran with the Middle East and create a transit corridor towards East Asian countries, including Turkey.
Also, according to an interview with Hussein Gharousi, CEO of Maku Free Zone, the importance of this region stems from the fact that Maku Free Zone, due to its geographical location on the China-Europe trade route and the East-West corridor, being the closest point to the European Union through railway and transit routes, and due to its proximity to the Eurasian countries, is an ideal opportunity for industrial and technological companies. Creating a transit corridor towards East Asian countries, including Turkey, is one of the goals of this project. The free zones of Iran and Turkey can sign an agreement within the framework of the New Silk Road to expand joint investments and economic cooperation towards regional convergence. The purpose of this research is to develop economic links between the Iranian and Turkish free zones along the New Silk Road, which will lead to the expansion and development of regional cooperation between the two countries within the framework of neighborhood policies.
The findings of this research include: the development of economic diplomacy between the Secretariat of the Supreme Council of Free Zones of Iran and the General Directorate of Free Zones of Turkey; an agreement to expand cooperation between the free zones of Sarakhs, Maku, Anatolia, and Europe; holding biennial conferences between the Iranian and Turkish free zones along the New Silk Road; creating a joint investment fund between Iran and Turkey for developing the free zones along the Silk Road; helping to attract tourism between the Iranian and Turkish free zones located along the New Silk Road; improving transit and transportation infrastructure to better connect the Iranian free zones to the Turkish free zones; communicating with China; and creating joint collaborations between China’s dry ports and free zones and the Iranian and Turkish free zones.
Keywords: network economy, new silk road (one belt, one road initiative), free zones (Sarakhs, Maku, Anatolia, Europe), regional development, neighborhood policies
Procedia PDF Downloads 64
207 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates
Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau
Abstract:
There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between the passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. 
Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the ‘feeling of time’ and ‘time estimation’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly slower on the Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
Keywords: duration estimates, long durations, passage of time judgements, task demands
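The RATIO standardization described in this abstract is simply the estimate divided by the actual duration; applied to the mean estimates reported above:

```python
def estimation_ratio(estimated_min: float, actual_min: float) -> float:
    """Duration estimate standardized across task lengths:
    > 1 means overestimation, < 1 means underestimation."""
    return estimated_min / actual_min

# Mean estimates reported in the abstract
short_task = estimation_ratio(estimated_min=10.7, actual_min=8)   # ~1.34: overestimated
long_task = estimation_ratio(estimated_min=42.5, actual_min=58)   # ~0.73: underestimated
```

The opposite directions of the two ratios are exactly the compensatory pattern the abstract describes.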
Procedia PDF Downloads 130
206 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore the levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, which are the country’s major rice milling hubs. Concentration indices reveal that the rice milling industry in Polonnaruwa operates as a weak oligopsony and is highly competitive in Hambanthota. According to the actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the range of 8-20 Mt/day. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. Major problems faced by all the rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint to producing quality rice. More than 80% of the millers in Polonnaruwa, which is the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers. In Hambanthota, the major channel is millers purchasing directly from paddy farmers. Millers in both districts have their major rice selling markets in Colombo and its suburbs. Huge variation can be observed in the amount of pledge (paddy storage) loans. There is a strong relationship among storage ability, credit affordability, and the scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%.
Analysis of market margins using a series of secondary data shows that the farmers’ share of the rice consumer price is stable or slightly increasing in both districts. In Hambanthota, a greater share goes to the farmer. Only four mills have obtained the Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution; all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of stored paddy from the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized. Encouraging investment to establish a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.
Keywords: conduct, performance, structure (SCP), rice millers
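Concentration indices such as those used in this study are computed from market shares; the abstract does not name the specific index, so as an illustrative sketch only, the widely used Herfindahl–Hirschman index (with purely hypothetical shares, not study data) shows the idea:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman index from fractional market shares (summing to 1).

    Ranges from near 0 (many small, competitive buyers/sellers) to 1 (monopoly
    or, on the buying side, monopsony)."""
    return sum(s ** 2 for s in market_shares)

# Hypothetical paddy-purchase shares for five mills; not study data
competitive = hhi([0.2, 0.2, 0.2, 0.2, 0.2])     # evenly split market
concentrated = hhi([0.6, 0.2, 0.1, 0.05, 0.05])  # one dominant buyer
```

A low value corresponds to a competitive market such as Hambanthota's, while a higher value would signal the oligopsony tendencies observed in Polonnaruwa.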
Procedia PDF Downloads 328